AWS Certified Database - Specialty
Questions 111-120 of 262
Question#111

An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator
(DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster.
During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, in Amazon CloudWatch the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric keeps growing.
What is the MOST likely reason for this occurrence?

  • A. A VPC endpoint was not added to access DynamoDB.
  • B. Strongly consistent reads are always passed through DAX to DynamoDB.
  • C. DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
  • D. A VPC endpoint was not added to access CloudWatch.
Answer: B
DAX serves only eventually consistent reads from its query cache. Strongly consistent reads are passed through to DynamoDB and are never served from the cache, which is why QueryCacheHits stays at 0.
Reference:
https://github.com/aws/aws-sdk-java/issues/1983
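
To illustrate the pass-through behavior, here is a minimal sketch using the amazon-dax-client Python package; the endpoint, table, and key names are placeholders, and the client constructor arguments vary slightly by package version:

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# Placeholder cluster endpoint; AmazonDaxClient mirrors the low-level
# DynamoDB client interface.
dax = AmazonDaxClient(
    endpoint_url="dax://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
)

query_args = dict(
    TableName="AdEvents",  # placeholder table and key names
    KeyConditionExpression="pk = :pk",
    ExpressionAttributeValues={":pk": {"S": "campaign#42"}},
)

# Strongly consistent reads are passed through to DynamoDB; the result is
# not served from (or written to) the query cache, so QueryCacheHits stays 0.
dax.query(ConsistentRead=True, **query_args)

# Eventually consistent reads (the default) can be served from the cache,
# which is what increments QueryCacheHits on repeated queries.
dax.query(ConsistentRead=False, **query_args)
```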

Question#112

A financial company recently launched a portfolio management solution. The backend of the application is powered by Amazon Aurora with MySQL compatibility.
The company requires an RTO of 5 minutes and an RPO of 5 minutes. A database specialist must configure an efficient disaster recovery solution with minimal replication lag.
Which approach should the database specialist take to meet these requirements?

  • A. Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.
  • B. Configure an Amazon Aurora global database and add a different AWS Region.
  • C. Configure a binlog and create a replica in a different AWS Region.
  • D. Configure a cross-Region read replica.
Answer: B
An Aurora global database replicates at the storage layer with typical cross-Region lag under one second and advertises an RPO of about 1 second and an RTO of under a minute, comfortably within the 5-minute targets. A cross-Region read replica (D) relies on binlog replication, which has higher and less predictable lag.
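
A hedged boto3 sketch of this setup, assuming placeholder cluster identifiers and Regions: the existing cluster is promoted into a global database, then a secondary cluster is attached in another Region.

```python
import boto3

# Promote the existing Aurora MySQL cluster into a global database.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="portfolio-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:portfolio-db"
    ),
)

# Attach a secondary cluster in the other Region; Aurora replicates at the
# storage layer, keeping replication lag minimal.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="portfolio-db-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="portfolio-global",
)
rds_secondary.create_db_instance(
    DBInstanceIdentifier="portfolio-db-secondary-1",
    DBClusterIdentifier="portfolio-db-secondary",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)
```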

Question#113

A company hosts an internal file-sharing application running on Amazon EC2 instances in VPC_A. This application is backed by an Amazon ElastiCache cluster, which is in VPC_B and peered with VPC_A. The company migrates its application instances from VPC_A to VPC_B. Logs indicate that the file-sharing application can no longer connect to the ElastiCache cluster.
What should a database specialist do to resolve this issue?

  • A. Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.
  • B. Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.
  • C. Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC_B's CIDR blocks from the ElastiCache cluster.
  • D. Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances' security group to the ElastiCache cluster.
Answer: D
Security groups are stateful, so return traffic is allowed automatically; the fix is an inbound rule on the ElastiCache security group permitting traffic from the EC2 instances' security group. Outbound rules on the cluster (C) do not control which clients may connect.
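
A minimal boto3 sketch of the fix, assuming placeholder security group IDs and a Redis cluster on port 6379 (use 11211 for Memcached):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow the application instances' security group to reach the cluster
# on the Redis port. Both group IDs below are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0e1a2b3c4d5e6f789",  # ElastiCache cluster SG in VPC_B
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 6379,
            "ToPort": 6379,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0a1b2c3d4e5f67890"}  # EC2 instances' SG
            ],
        }
    ],
)
```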

Question#114

A database specialist must load 25 GB of data files from a company's on-premises storage to an Amazon Neptune database.
Which approach to load the data is FASTEST?

  • A. Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.
  • B. Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.
  • C. Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.
  • D. Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.
Answer: A
The Neptune bulk loader, described in the reference below, is the fastest supported ingestion path: stage the files in Amazon S3 and start a load job against the cluster's loader endpoint. The AWS CLI has no command that loads data directly into Neptune, and looped INSERT statements are far slower than a bulk load.
Reference:
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html
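
A sketch of starting a bulk load against the Neptune loader endpoint; the endpoint, bucket, IAM role, and format values are placeholders, and the referenced documentation lists the full parameter set:

```python
import requests  # pip install requests

# Placeholder loader endpoint on the Neptune cluster (port 8182).
loader_url = (
    "https://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com"
    ":8182/loader"
)

payload = {
    "source": "s3://my-staging-bucket/neptune-load/",  # staged data files
    "format": "csv",                                   # or ntriples, etc.
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "FALSE",
    "parallelism": "HIGH",
}

resp = requests.post(loader_url, json=payload, timeout=30)
resp.raise_for_status()
# The response payload includes a loadId that can be polled via
# GET /loader/{loadId} to track progress.
print(resp.json())
```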

Question#115

A finance company needs to make sure that its MySQL database backups are available for the most recent 90 days. All of the MySQL databases are hosted on
Amazon RDS for MySQL DB instances. A database specialist must implement a solution that meets the backup retention requirement with the least possible development effort.
Which approach should the database specialist take?

  • A. Use AWS Backup to build a backup plan for the required retention period. Assign the DB instances to the backup plan.
  • B. Modify the DB instances to enable the automated backup option. Select the required backup retention period.
  • C. Automate a daily cron job on an Amazon EC2 instance to create MySQL dumps, transfer to Amazon S3, and implement an S3 Lifecycle policy to meet the retention requirement.
  • D. Use AWS Lambda to schedule a daily manual snapshot of the DB instances. Delete snapshots that exceed the retention requirement.
Answer: A
RDS automated backups (B) support a maximum retention of 35 days, so they cannot meet the 90-day requirement on their own. AWS Backup supports longer retention periods and requires no custom development.
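
A boto3 sketch of the AWS Backup setup; the plan name, vault, IAM role, and DB instance ARN are placeholders:

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Daily backups retained for 90 days.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "mysql-90-day-retention",
        "Rules": [
            {
                "RuleName": "daily-90d",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC daily
                "Lifecycle": {"DeleteAfterDays": 90},
            }
        ],
    }
)

# Assign the RDS DB instance(s) to the plan by ARN.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "mysql-instances",
        "IamRoleArn": (
            "arn:aws:iam::123456789012:role/service-role/"
            "AWSBackupDefaultServiceRole"
        ),
        "Resources": ["arn:aws:rds:us-east-1:123456789012:db:finance-mysql-1"],
    },
)
```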

Question#116

An online advertising company uses an Amazon DynamoDB table as its data store. The table has Amazon DynamoDB Streams enabled and has a global secondary index on one of the keys. The table is encrypted using an AWS Key Management Service (AWS KMS) customer managed key.
The company has decided to expand its operations globally and wants to replicate the database in a different AWS Region by using DynamoDB global tables.
Upon review, an administrator notices the following:
  • No role with the dynamodb:CreateGlobalTable permission exists in the account.
  • An empty table with the same name exists in the new Region where replication is desired.
  • A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.
Which configurations will block the creation of a global table or the creation of a replica in the new Region? (Choose two.)

  • A. A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired.
  • B. An empty table with the same name exists in the Region where replication is desired.
  • C. No role with the dynamodb:CreateGlobalTable permission exists in the account.
  • D. DynamoDB Streams is enabled for the table.
  • E. The table is encrypted using a KMS customer managed key.
Answer: AC
A global table requires every replica table to have an identical schema, including identical global secondary indexes, so a GSI with a different sort key blocks replication (A), and without the dynamodb:CreateGlobalTable permission the API call cannot be made at all (C). DynamoDB Streams being enabled is a prerequisite for global tables, not a blocker, and an empty table with the same name is exactly what the CreateGlobalTable API expects to find.
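
For reference, the API call in question (the original 2017 version of global tables), sketched with boto3 under a placeholder table name; it succeeds only if the caller holds dynamodb:CreateGlobalTable and every replica table is empty with an identical key schema and identical GSIs:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Each listed Region must already contain an empty table named "AdEvents"
# with the same key schema, the same GSIs, and DynamoDB Streams enabled.
dynamodb.create_global_table(
    GlobalTableName="AdEvents",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```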

Question#117

A large automobile company is migrating the database of a critical financial application to Amazon DynamoDB. The company's risk and compliance policy requires that every change in the database be recorded as a log entry for audits. The system is anticipating more than 500,000 log entries each minute. Log entries should be stored in batches of at least 100,000 records in each file in Apache Parquet format.
How should a database specialist implement these requirements with DynamoDB?

  • A. Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.
  • B. Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.
  • C. Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.
  • D. Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.
Answer: D
Kinesis Data Firehose can buffer incoming records by size and time and convert them to Apache Parquet before delivering them to Amazon S3, which satisfies both the batching and the file-format requirements at this write rate.
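
A sketch of the Lambda handler from option D, assuming a Firehose delivery stream named ddb-audit-log (a placeholder) that is already configured for Parquet conversion and buffering:

```python
import json
import boto3

firehose = boto3.client("firehose")

def handler(event, context):
    """Triggered by the DynamoDB stream; forwards each change record to
    the Firehose delivery stream, which buffers and converts to Parquet
    before writing to S3."""
    records = [
        {"Data": (json.dumps(r["dynamodb"], default=str) + "\n").encode()}
        for r in event["Records"]
    ]
    # put_record_batch accepts at most 500 records (4 MiB) per call.
    for i in range(0, len(records), 500):
        firehose.put_record_batch(
            DeliveryStreamName="ddb-audit-log",
            Records=records[i : i + 500],
        )
```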

Question#118

A company released a mobile game that quickly grew to 10 million daily active users in North America. The game's backend is hosted on AWS and makes extensive use of an Amazon DynamoDB table that is configured with a TTL attribute.
When an item is added or updated, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on old data being purged so that it can calculate rewards points accurately. Occasionally, items are read from the table that are several hours past their TTL expiry.
How should a database specialist fix this issue?

  • A. Use a client library that supports the TTL functionality for DynamoDB.
  • B. Include a query filter expression to ignore items with an expired TTL.
  • C. Set the ConsistentRead parameter to true when querying the table.
  • D. Create a local secondary index on the TTL attribute.
Answer: B
TTL deletion is an asynchronous background process, so expired items can remain readable for hours after their expiry time. AWS recommends filtering reads on the TTL attribute so that expired-but-not-yet-deleted items are excluded from results.
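
A sketch of the filtered query, assuming the TTL attribute is named ttl and the table uses a player_id partition key (both placeholders):

```python
import time
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("GameRewards")  # placeholder name

now = int(time.time())
resp = table.query(
    KeyConditionExpression=Key("player_id").eq("player#1234"),
    # TTL deletion runs in the background, so expired items can still be
    # returned; this filter drops anything whose ttl epoch is in the past.
    FilterExpression=Attr("ttl").gt(now),
)
items = resp["Items"]
```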

Question#119

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier.
The lead developer created a single DynamoDB table for the events with the following schema:
  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time
The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.
Which design change should a database specialist recommend to the development team?

  • A. Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.
  • B. Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.
  • C. Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.
  • D. Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
Answer: A
ItemCollectionSizeLimitExceededException is raised when an item collection in a table with a local secondary index exceeds 10 GB. Option A removes the LSI and uses the high-cardinality player identifier as the partition key, eliminating the limit and spreading write traffic. Option C keeps an LSI and the game-name partition key, so the same limit would be hit again.
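
A sketch of the redesigned table from option A, with placeholder table and attribute names:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Player ID as the partition key avoids the 10 GB item collection limit
# that comes with an LSI; the GSI preserves per-game, time-ordered queries.
dynamodb.create_table(
    TableName="GameEvents",
    AttributeDefinitions=[
        {"AttributeName": "player_id", "AttributeType": "S"},
        {"AttributeName": "event_time", "AttributeType": "N"},
        {"AttributeName": "game_name", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "player_id", "KeyType": "HASH"},
        {"AttributeName": "event_time", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "game-by-time",
            "KeySchema": [
                {"AttributeName": "game_name", "KeyType": "HASH"},
                {"AttributeName": "event_time", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```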

Question#120

An ecommerce company recently migrated one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition DB instance. The company expects a spike in read traffic due to an upcoming sale. A database specialist must create a read replica of the DB instance to serve the anticipated read traffic.
Which actions should the database specialist take before creating the read replica? (Choose two.)

  • A. Identify a potential downtime window and stop the application calls to the source DB instance.
  • B. Ensure that automatic backups are enabled for the source DB instance.
  • C. Ensure that the source DB instance is a Multi-AZ deployment with Always On Availability Groups.
  • D. Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).
  • E. Modify the read replica parameter group setting and set the value to 1.
Answer: BC
Amazon RDS for SQL Server read replicas require automated backups to be enabled on the source DB instance (B) and a source running as a Multi-AZ deployment that uses Always On Availability Groups (C). Multi-AZ based on Database Mirroring does not support read replicas, and no downtime window or parameter group change is needed.
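
A boto3 sketch of the preparation steps and replica creation, with placeholder instance identifiers; whether Multi-AZ uses Always On AGs or Database Mirroring depends on the SQL Server edition and version, not on an API flag:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Prerequisites: automated backups enabled (retention > 0) and a
# Multi-AZ source (Always On AGs on Enterprise Edition 2016+).
rds.modify_db_instance(
    DBInstanceIdentifier="ecom-sqlserver",
    BackupRetentionPeriod=7,   # any value > 0 enables automated backups
    MultiAZ=True,
    ApplyImmediately=True,
)

# Once the source is ready, create the read replica for the sale traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ecom-sqlserver-replica",
    SourceDBInstanceIdentifier="ecom-sqlserver",
    DBInstanceClass="db.m5.xlarge",
)
```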
