AWS Certified Database - Specialty
Questions 31-40 of 262
Question#31

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.
How should a Database Specialist ensure DynamoDB can handle the increased traffic?

  • A. Ensure the table is always provisioned to meet peak needs
  • B. Allow burst capacity to handle the additional load
  • C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
  • D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event
Answer: B
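For reference, the pre-provisioning step described in option D amounts to a single UpdateTable call. Below is a minimal boto3 sketch; the table name and capacity figures are hypothetical placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Raise provisioned throughput ahead of the known event
# (table name and values are placeholders).
dynamodb.update_table(
    TableName="orders",
    ProvisionedThroughput={
        "ReadCapacityUnits": 10000,   # ~10x the normal read load
        "WriteCapacityUnits": 5000,   # ~10x the normal write load
    },
)
# After the 3-day event, call update_table again with the original values.
# Note that DynamoDB limits how often capacity can be decreased per day.
```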

Question#32

A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live.
What change should the Database Specialist make to enable the migration?

  • A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
  • B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
  • C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
  • D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)
Answer: A
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/rds-import-data/
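For context, the full-load-plus-CDC behavior in option A is selected on the DMS task itself via its migration type. A minimal boto3 sketch, with all identifiers and ARNs as hypothetical placeholders:

```python
import boto3

dms = boto3.client("dms")

# Create a task that performs a full load, then streams ongoing changes (CDC)
# so cutover downtime stays minimal. All ARNs below are placeholders.
dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # full load + ongoing replication
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                  '"rule-name": "1", "object-locator": '
                  '{"schema-name": "%", "table-name": "%"}, '
                  '"rule-action": "include"}]}',
)
```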

Question#33

A financial company allocated an Amazon RDS MariaDB DB instance with a large amount of storage to accommodate its migration efforts. After the migration, the company purged unwanted data from the instance. The company now wants to downsize the storage to save money. The solution must have minimal impact on production and near-zero downtime.
Which solution would meet these requirements?

  • A. Create a snapshot of the old databases and restore the snapshot with the required storage
  • B. Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS
  • C. Create a new database using native backup and restore
  • D. Create a new read replica and make it the primary by terminating the existing primary
Answer: A
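Whichever option is chosen, a sensible first step is to confirm how much of the allocated storage is actually free after the purge. A minimal sketch using the standard FreeStorageSpace CloudWatch metric; the instance identifier is hypothetical:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Average free storage over the last day for a hypothetical instance.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "finance-mariadb"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    Period=3600,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"] / 1024**3, "GiB free")
```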

Question#34

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.
Which step should be taken to troubleshoot this issue?

  • A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine's IP address
  • B. Ensure that the RDS DB instance's subnet group includes a public subnet to allow the Developer to connect
  • C. Ensure that the RDS DB instance has not reached its maximum connections limit
  • D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections
Answer: B
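For context, the check described in option D amounts to forcing an SSL connection against the port the instance actually listens on. A minimal sketch, assuming a MySQL-compatible engine, the pymysql library, and the downloadable RDS CA certificate bundle; endpoint and credentials are placeholders:

```python
import pymysql

# Connect with TLS enforced (endpoint, credentials, and CA path are
# placeholders; the CA bundle is downloadable from AWS).
conn = pymysql.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    port=3306,                        # the instance's actual listener port
    user="dev_user",
    password="...",
    ssl={"ca": "global-bundle.pem"},  # enables TLS and verifies the server
)
with conn.cursor() as cur:
    cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")  # confirms encryption in transit
    print(cur.fetchone())
conn.close()
```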

Question#35

A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.
What is the MOST cost-effective action that should be taken to avoid downtime?

  • A. Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
  • B. Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
  • C. Enable a read replica and direct read traffic to it when Amazon RDS is down
  • D. Enable an Amazon RDS for MySQL Multi-AZ configuration
Answer: C
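For reference, the keyed approach (option C) is a one-call operation. A minimal boto3 sketch with hypothetical identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of a hypothetical MySQL instance; read traffic can
# be directed to the replica's endpoint while the primary is being patched.
resp = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="workload-mysql-replica",
    SourceDBInstanceIdentifier="workload-mysql",
)
print(resp["DBInstance"]["DBInstanceStatus"])
```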

Question#36

A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.
What could be causing these slow response times?

  • A. New volumes created from snapshots load lazily in the background
  • B. Long-running statements on the master
  • C. Insufficient resources on the master
  • D. Overload of a single replication thread by excessive writes on the master
Answer: B
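Whichever cause applies, the usual way to narrow it down is to inspect replication state on the replica itself. A minimal sketch, assuming pymysql and hypothetical connection details:

```python
import pymysql

# Connect to the read replica (endpoint and credentials are placeholders).
conn = pymysql.connect(
    host="mysql-replica.xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="...",
    cursorclass=pymysql.cursors.DictCursor,
)
with conn.cursor() as cur:
    # On newer MySQL versions this is SHOW REPLICA STATUS.
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()
    # Seconds_Behind_Master grows when the replica cannot keep up,
    # e.g. a single replication thread overloaded by heavy writes.
    print(status["Seconds_Behind_Master"], status["Slave_SQL_Running"])
conn.close()
```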

Question#37

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.
Which solution will enable this change?

  • A. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack's mappings.
  • B. Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
  • C. Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.
  • D. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
Answer: B
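The keyed answer (B) relies on CloudFormation Parameters plus the Ref intrinsic function. A minimal boto3 sketch that embeds such a template and supplies the two values at stack-creation time; the stack name and table layout are hypothetical:

```python
import boto3

# A trimmed template: rcuCount and wcuCount are Number parameters that feed
# ProvisionedThroughput via Ref, replacing the old hard-coded values.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  rcuCount:
    Type: Number
    Default: 5
  wcuCount:
    Type: Number
    Default: 5
Resources:
  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - {AttributeName: pk, AttributeType: S}
      KeySchema:
        - {AttributeName: pk, KeyType: HASH}
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref rcuCount
        WriteCapacityUnits: !Ref wcuCount
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="dynamodb-table",  # hypothetical stack name
    TemplateBody=TEMPLATE,
    Parameters=[
        {"ParameterKey": "rcuCount", "ParameterValue": "20"},
        {"ParameterKey": "wcuCount", "ParameterValue": "10"},
    ],
)
```

Each future stack created from this template can now pass its own read and write capacity values independently.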

Question#38

A retail company with its main office in New York and another office in Tokyo plans to build a database solution on AWS. The company's main workload consists of a mission-critical application that updates its application data in a data store. The team at the Tokyo office is building dashboards with complex analytical queries using the application data. The dashboards will be used to make buying decisions, so they need to have access to the application data in less than 1 second.
Which solution meets these requirements?

  • A. Use an Amazon RDS DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Create an Amazon ElastiCache cluster in the ap-northeast-1 Region to cache application data from the replica to generate the dashboards.
  • B. Use an Amazon DynamoDB global table in the us-east-1 Region with replication into the ap-northeast-1 Region. Use Amazon QuickSight for displaying dashboard results.
  • C. Use an Amazon RDS for MySQL DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Have the dashboard application read from the read replica.
  • D. Use an Amazon Aurora global database. Deploy the writer instance in the us-east-1 Region and the replica in the ap-northeast-1 Region. Have the dashboard application read from the replica in the ap-northeast-1 Region.
Answer: D
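For reference, the keyed design (option D) hinges on an Aurora global database, whose secondary-Region replica typically lags the writer by well under a second. A minimal boto3 sketch of building one from an existing cluster; all identifiers are hypothetical:

```python
import boto3

# Run against us-east-1, where the hypothetical writer cluster lives.
rds = boto3.client("rds", region_name="us-east-1")

# Promote the existing Aurora cluster into a new global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="retail-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:retail-primary"
    ),
)

# In ap-northeast-1, create a secondary cluster that joins the global
# database; the Tokyo dashboards read from its reader endpoint.
rds_tokyo = boto3.client("rds", region_name="ap-northeast-1")
rds_tokyo.create_db_cluster(
    DBClusterIdentifier="retail-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="retail-global",
)
```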

Question#39

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connection logging.
Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

  • A. Update the log_connections parameter in the default parameter group
  • B. Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance
  • C. Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days
  • D. Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days
  • E. Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file
Answer: AE
Reference: https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/
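For context, log_connections lives in a DB parameter group. A minimal boto3 sketch of the custom-parameter-group route described in option B; the group name, family, and instance identifier are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Create a custom parameter group for the instance.
rds.create_db_parameter_group(
    DBParameterGroupName="pg-logging",
    DBParameterGroupFamily="postgres13",  # must match the engine version
    Description="Enable connection logging",
)
# Turn on connection logging; log_connections is a dynamic parameter.
rds.modify_db_parameter_group(
    DBParameterGroupName="pg-logging",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)
# Associate the group with the instance (identifier hypothetical).
rds.modify_db_instance(
    DBInstanceIdentifier="postgres-prod",
    DBParameterGroupName="pg-logging",
)
```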

Question#40

A Database Specialist is creating a new Amazon Neptune DB cluster and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:
`Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.`
Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

  • A. Check that Amazon S3 has an IAM role granting read access to Neptune
  • B. Check that an Amazon S3 VPC endpoint exists
  • C. Check that a Neptune VPC endpoint exists
  • D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
  • E. Check that Neptune has an IAM role granting read access to Amazon S3
Answer: BD
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-could-not-connect-endpoint-url/
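For context, the gateway endpoint checked in option B can be created with one call, and the bulk load itself is an HTTP request to the cluster's loader endpoint. A minimal sketch with hypothetical IDs, ARNs, and endpoint names, using the requests library for the loader call:

```python
import boto3
import requests

ec2 = boto3.client("ec2", region_name="us-east-1")

# Neptune reaches S3 through a gateway VPC endpoint; without one, the bulk
# loader fails with "Unable to connect to s3 endpoint". IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Kick off the bulk load against the cluster's loader endpoint
# (endpoint hostname and role ARN are placeholders).
requests.post(
    "https://my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/loader",
    json={
        "source": "s3://mybucket/graphdata/",
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
    },
)
```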
