AWS Certified Database - Specialty
Questions 101-110
Question#101

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow query performance, and Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.
Which actions can a database specialist perform to resolve this issue? (Choose two.)

  • A. Restart the application tool used to run queries.
  • B. Change to a database instance class with higher throughput.
  • C. Convert from Single-AZ to Multi-AZ.
  • D. Increase the I/O parameter in Amazon RDS Enhanced Monitoring.
  • E. Convert from General Purpose to Provisioned IOPS (PIOPS).
Answer: BE
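
B and E are the only actions that add I/O capacity: a larger instance class raises the instance's throughput ceiling, and converting General Purpose storage to Provisioned IOPS raises the storage IOPS ceiling. (Enhanced Monitoring only reports metrics; it has no I/O parameter to increase.) A minimal boto3 sketch of both changes; the instance identifier, class, and IOPS figure are hypothetical:

```python
# Sketch: adding I/O capacity to an RDS for MySQL instance with boto3.
# Identifier, instance class, and IOPS value are hypothetical.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="dev-mysql-instance",  # hypothetical identifier
    DBInstanceClass="db.r5.xlarge",             # option B: higher-throughput class
    StorageType="io1",                          # option E: Provisioned IOPS storage
    Iops=3000,                                  # hypothetical target IOPS
    ApplyImmediately=True,                      # apply now, not in the next window
)
```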

Question#102

A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.
What is the MOST operationally efficient solution to meet these requirements?

  • A. Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.
  • B. Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.
  • C. Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to */30.
  • D. Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.
Answer: B
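
B is the only option that rotates the database password itself: the secret holds the master password, a Lambda function rotates it, and the AWS::SecretsManager::RotationSchedule resource sets RotationRules.AutomaticallyAfterDays to 30. Rotating a KMS key (options A and C) never changes the password that the key encrypted. Outside the template, the same schedule can be attached with one API call; the secret name and Lambda ARN below are hypothetical:

```python
# Sketch: the API-level equivalent of AWS::SecretsManager::RotationSchedule.
# Secret name and rotation Lambda ARN are hypothetical.
import boto3

sm = boto3.client("secretsmanager", region_name="us-east-1")

sm.rotate_secret(
    SecretId="rds/mysql/master-password",  # hypothetical secret
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql-secret",
    RotationRules={"AutomaticallyAfterDays": 30},  # rotate every 30 days
)
```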

Question#103

A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:
• The networks and routes affected if a particular component fails.
• The networks that have redundant routes between them.
• The networks that do not have redundant routes between them.
• The fastest path between two networks.
Which database engine meets these requirements?

  • A. Amazon Aurora MySQL
  • B. Amazon Neptune
  • C. Amazon ElastiCache for Redis
  • D. Amazon DynamoDB
Answer: B
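
Failure impact, redundancy, and fastest-path questions over billions of connected components are graph traversals, which is exactly what Amazon Neptune is built for. As a rough illustration only, a fewest-hops path query in Gremlin through the gremlinpython client could look like the following; the endpoint, vertex IDs, and edge label are hypothetical, and step names vary slightly across gremlinpython versions:

```python
# Sketch: a fewest-hops ("fastest") path between two networks in Gremlin.
# Endpoint, vertex IDs, and the 'route' edge label are hypothetical.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection(
    "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)
g = traversal().withRemote(conn)

# Breadth-first repeat() finds a shortest path by hop count.
path = (
    g.V("network-a")
    .repeat(__.out("route").simplePath())
    .until(__.hasId("network-b"))
    .path()
    .limit(1)
    .next()
)
print(path)
conn.close()
```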

Question#104

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and the exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. Read/write query latency must stay under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available.
Which solution meets these requirements?

  • A. Amazon DynamoDB with on-demand capacity mode
  • B. Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled
  • C. Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)
  • D. Amazon Aurora with one writer node and two cross-Region Aurora Replicas
Answer: A
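
On-demand capacity mode fits a spiky workload with a near-zero baseline: it scales automatically and bills per request. Option C is also undersized, since a standard write of a 2 KB item consumes 2 WCUs, so 5,000 orders per second needs roughly 10,000 WCUs, not 5,000. A sketch of the orders table; the table name and key are hypothetical:

```python
# Sketch: an on-demand DynamoDB table for the flash-sale orders.
# Table name and key attribute are hypothetical.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="flash-sale-orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],  # unique ID
    BillingMode="PAY_PER_REQUEST",  # on-demand: no capacity to manage
)
```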

Question#105

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. The application is very popular, and the company expects a tenfold increase in its user base over the next few months. The application experiences more traffic during the morning and evening hours.
This application has two parts:
• An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
• A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.
A database specialist needs to design a cost-effective database solution to handle this workload.
Which solution meets these requirements?

  • A. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
  • B. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
  • C. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
  • D. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.
Answer: B
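
DynamoDB absorbs the spiky booking writes and scales with the tenfold growth, while DynamoDB Streams, Lambda, and SQS buffer and decouple replication into the CRM's RDS for MySQL database. A sketch of the first Lambda function in that pipeline; the queue URL is hypothetical, and the event follows the standard DynamoDB Streams record shape:

```python
# Sketch: Lambda triggered by DynamoDB Streams, forwarding changes to SQS.
# The queue URL is hypothetical.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/bookings"

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage is the booking item after the change.
            sqs.send_message(
                QueueUrl=QUEUE_URL,
                MessageBody=json.dumps(record["dynamodb"]["NewImage"]),
            )
    # A second Lambda function polls the queue and writes the bookings
    # into the RDS for MySQL database used by the CRM.
```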

Question#106

The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality.
This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?

  • A. Quickly rewind the DB cluster to a point in time before the release using Backtrack.
  • B. Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
  • C. Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
  • D. Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
Answer: D
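
Backtracking the production cluster directly (option A) would also discard the 4 hours of valid writes made since the release, so option D clones the cluster, rewinds only the clone, and copies the deleted rows back. A boto3 sketch; identifiers and the timestamp are hypothetical:

```python
# Sketch: clone the cluster (copy-on-write), then rewind only the clone.
# Cluster identifiers and the backtrack timestamp are hypothetical.
from datetime import datetime, timezone
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Fast, storage-sharing clone of production, with Backtrack enabled (8 hours).
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="bookings-clone",
    SourceDBClusterIdentifier="bookings-prod",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
    BacktrackWindow=28800,  # 8 hours, matching the source cluster
)

# Rewind the clone to just before the bad DELETE ran.
rds.backtrack_db_cluster(
    DBClusterIdentifier="bookings-clone",
    BacktrackTo=datetime(2021, 6, 1, 8, 0, tzinfo=timezone.utc),  # hypothetical
)
# Finally, copy the deleted rows from the clone back into production,
# preserving everything written during the 4 hours after the release.
```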

Question#107

A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)

  • A. Review the stack drift before modifying the template
  • B. Create and review a change set before applying it
  • C. Export the database resources as stack outputs
  • D. Define the database resources in a nested stack
  • E. Set a stack policy for the database resources
Answer: BE
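
A stack policy (E) denies updates to the protected RDS resources, and a change set (B) shows, before execution, exactly which resources the Application team's template changes would touch; drift review (A) and nested stacks (D) do not prevent changes. A boto3 sketch; the stack name, logical resource ID, and template URL are hypothetical:

```python
# Sketch: protect RDS resources with a stack policy, then review a change set.
# Stack name, logical resource ID, and template URL are hypothetical.
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Option E: deny all updates to the database resource.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*",
         "Resource": "*"},
        {"Effect": "Deny", "Action": "Update:*", "Principal": "*",
         "Resource": "LogicalResourceId/ProdDatabase"},
    ]
}
cfn.set_stack_policy(StackName="webapp-stack", StackPolicyBody=json.dumps(policy))

# Option B: create a change set and inspect it before executing.
cfn.create_change_set(
    StackName="webapp-stack",
    ChangeSetName="load-test-capacity",
    TemplateURL="https://s3.amazonaws.com/my-bucket/updated-template.json",
)
changes = cfn.describe_change_set(
    StackName="webapp-stack", ChangeSetName="load-test-capacity"
)
# Verify no RDS resources appear in changes["Changes"] before executing.
```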

Question#108

A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

  • A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
  • B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
  • C. Edit and enable Aurora DB cluster cache management in parameter groups.
  • D. Set TCP keepalive parameters to a high value.
  • E. Set JDBC connection string timeout variables to a low value.
  • F. Set Java DNS caching timeouts to a high value.
Answer: ACE
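
A, C, and E minimize downtime: the cluster endpoints automatically follow the new writer after a failover (A); cluster cache management keeps a designated reader's buffer cache warm so it is immediately productive when promoted (C); and low JDBC timeouts make stale connections fail fast (E). High TCP keepalive and DNS caching values (D, F) delay detection of the failover. Option C maps to a single cluster parameter; the parameter group name below is hypothetical:

```python
# Sketch: enabling Aurora PostgreSQL cluster cache management (apg_ccm_enabled)
# in a custom DB cluster parameter group. Group name is hypothetical.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-custom",
    Parameters=[{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",
    }],
)
```

Cluster cache management additionally expects the writer and the designated reader to share the highest promotion tier so that the warm reader is the one promoted.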

Question#109

A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

  • A. Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
  • B. Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
  • C. Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
  • D. Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
Answer: C
Reference:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
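
KMS keys are Region-scoped, so the destination Region cannot use the source Region's key; Amazon Redshift therefore requires a snapshot copy grant for a KMS key in the destination Region before it will copy encrypted snapshots there, which is what option C describes. A boto3 sketch; the cluster, grant name, Regions, and key ARN are hypothetical:

```python
# Sketch: cross-Region copy of KMS-encrypted Redshift snapshots.
# Cluster, grant name, Regions, and key ARN are hypothetical.
import boto3

# In the destination Region: grant Redshift use of a KMS key located there.
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)

# In the source Region: enable cross-Region snapshot copy using that grant.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="business-data-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,  # days to keep copied snapshots
    SnapshotCopyGrantName="dr-copy-grant",
)
```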

Question#110

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime, while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?

  • A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
  • B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.
  • C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
  • D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.
Answer: D
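
Aurora Auto Scaling adds Aurora Replicas when a target metric is breached and removes them when load subsides, so reader capacity appears for the reporting windows and disappears afterward with no user involvement. It is configured through Application Auto Scaling; the cluster name and capacity limits below are hypothetical:

```python
# Sketch: Aurora Auto Scaling via Application Auto Scaling, tracking reader CPU.
# Cluster name, capacity range, and target value are hypothetical.
import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

# Let the replica count float between 1 and 5 readers (six nodes at peak,
# counting the writer), matching the benchmarked peak workload.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-oltp-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Add readers when average reader CPU exceeds the target; remove them after.
aas.put_scaling_policy(
    PolicyName="scale-readers-on-cpu",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-oltp-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```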
