AWS Certified Database - Specialty
Questions 91-100 of 262
Question#91

A retail company manages a web application that stores data in an Amazon DynamoDB table. The company is undergoing account consolidation efforts. A database engineer needs to migrate the DynamoDB table from the current AWS account to a new AWS account.
Which strategy meets these requirements with the LEAST amount of administrative work?

  • A. Use AWS Glue to crawl the data in the DynamoDB table. Create a job using an available blueprint to export the data to Amazon S3. Import the data from the S3 file to a DynamoDB table in the new account.
  • B. Create an AWS Lambda function to scan the items of the DynamoDB table in the current account and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items of a DynamoDB table in the new account.
  • C. Use AWS Data Pipeline in the current account to export the data from the DynamoDB table to a file in Amazon S3. Use Data Pipeline to import the data from the S3 file to a DynamoDB table in the new account.
  • D. Configure Amazon DynamoDB Streams for the DynamoDB table in the current account. Create an AWS Lambda function to read from the stream and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items to a DynamoDB table in the new account.
Answer: C

Question#92

A company uses the Amazon DynamoDB table contractDB in us-east-1 for its contract system. The table has the following schema: orderID (partition key), timestamp (sort key), contract (map), createdBy (string), and customerEmail (string).
After a problem in production, the operations team has asked a database specialist for an IAM policy that lets a developer read items from the table to debug the application. To remain compliant, the developer must not be able to access the value of the customerEmail attribute.
Which IAM policy should the database specialist use to achieve these requirements?
Options A, B, C, and D each present a candidate IAM policy document (the policy images are not reproduced here).

Answer: A
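For orientation, since the actual policy options are only available as images, the sketch below shows the fine-grained access control pattern this question tests: read actions are allowed only when the request restricts itself to specific attributes that exclude customerEmail. The account ID and the exact attribute list are assumptions, not the exam's policy.

import json

# Hypothetical account ID; the real ARN comes from the exam's policy images.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadContractDBWithoutCustomerEmail",
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:BatchGetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/contractDB",
            "Condition": {
                # Only these attributes may be requested or returned ...
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": ["orderID", "timestamp", "contract", "createdBy"]
                },
                # ... and the request must ask for specific attributes, not ALL_ATTRIBUTES.
                "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"}
            }
        }
    ]
}

print(json.dumps(policy, indent=2))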

Question#93

A company has an application that uses an Amazon DynamoDB table to store user data. Every morning, a single-threaded process calls the DynamoDB API Scan operation to scan the entire table and generate a critical start-of-day report for management. A successful marketing campaign recently doubled the number of items in the table, and now the process takes too long to run and the report is not generated in time.
A database specialist needs to improve the performance of the process. The database specialist notes that, when the process is running, 15% of the table's provisioned read capacity units (RCUs) are being used.
What should the database specialist do?

  • A. Enable auto scaling for the DynamoDB table.
  • B. Use four threads and parallel DynamoDB API Scan operations.
  • C. Double the table's provisioned RCUs.
  • D. Set the Limit and Offset parameters before every call to the API.
Answer: B
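As an illustration of option B, a parallel scan splits the table into segments, each scanned by its own worker thread. A minimal sketch with boto3 and a thread pool, assuming a hypothetical table name of UserData:

import boto3
from concurrent.futures import ThreadPoolExecutor

TABLE_NAME = "UserData"   # hypothetical table name
TOTAL_SEGMENTS = 4

def scan_segment(segment):
    """Scan one logical segment of the table, following pagination."""
    client = boto3.client("dynamodb")   # one client per thread
    items = []
    kwargs = {"TableName": TABLE_NAME, "Segment": segment, "TotalSegments": TOTAL_SEGMENTS}
    while True:
        response = client.scan(**kwargs)
        items.extend(response["Items"])
        last_key = response.get("LastEvaluatedKey")
        if not last_key:
            return items
        kwargs["ExclusiveStartKey"] = last_key

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = pool.map(scan_segment, range(TOTAL_SEGMENTS))

all_items = [item for segment_items in results for item in segment_items]
print(f"Scanned {len(all_items)} items")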

Question#94

A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem. Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in both API calls.
How should a database specialist fix this issue?

  • A. Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.
  • B. Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.
  • C. Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.
  • D. Add a ConditionExpression parameter in the PutItem request.
Answer: C
CreateTable is an asynchronous operation; the table must reach ACTIVE status, which the application can check with DescribeTable, before PutItem will succeed.
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html
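boto3 ships a built-in waiter that implements the DescribeTable polling described in option C. A minimal sketch, assuming a hypothetical table name and key schema:

import boto3

client = boto3.client("dynamodb")
table_name = "SignupData"   # hypothetical table name used by the sign-on workflow

client.create_table(
    TableName=table_name,
    AttributeDefinitions=[{"AttributeName": "userId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "userId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# CreateTable is asynchronous; poll DescribeTable until TableStatus is ACTIVE
# before issuing the first PutItem.
client.get_waiter("table_exists").wait(TableName=table_name)

client.put_item(
    TableName=table_name,
    Item={"userId": {"S": "user-123"}},
)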

Question#95

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.
Which solution meets these requirements?

  • A. Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.
  • B. Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
  • C. Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
  • D. Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
Answer: B
Aurora MySQL can write query results directly to Amazon S3 with a SELECT INTO OUTFILE S3 statement, so a single Lambda function scheduled by EventBridge is the most operationally efficient choice.
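A minimal Lambda handler sketch for option B, using the pymysql driver (packaged with the function); the endpoint, credentials handling, bucket, and table and column names are all assumptions:

import pymysql

# Hypothetical connection details; in practice fetch credentials from Secrets Manager.
CONN_SETTINGS = {
    "host": "aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    "user": "admin",
    "password": "example-password",
    "database": "appdb",
}

def handler(event, context):
    conn = pymysql.connect(**CONN_SETTINGS)
    try:
        with conn.cursor() as cur:
            # Export rows older than one year directly to S3 (requires the cluster's
            # associated IAM role to allow writes to the bucket).
            cur.execute(
                "SELECT * FROM orders "
                "WHERE created_at < NOW() - INTERVAL 1 YEAR "
                "INTO OUTFILE S3 's3://example-archive-bucket/orders/weekly-export' "
                "OVERWRITE ON"
            )
            # Remove the archived rows from the cluster.
            cur.execute("DELETE FROM orders WHERE created_at < NOW() - INTERVAL 1 YEAR")
        conn.commit()
    finally:
        conn.close()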

Question#96

A company developed a new application that is deployed on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances use the security group named sg-application-servers. The company needs a database to store the data from the application and decides to use an Amazon RDS for MySQL DB instance. The DB instance is deployed in a private DB subnet.
What is the MOST restrictive configuration for the DB instance security group?

  • A. Only allow incoming traffic from the sg-application-servers security group on port 3306.
  • B. Only allow incoming traffic from the sg-application-servers security group on port 443.
  • C. Only allow incoming traffic from the subnet of the application servers on port 3306.
  • D. Only allow incoming traffic from the subnet of the application servers on port 443.
Answer: A
MySQL listens on port 3306, and referencing the sg-application-servers security group as the source is more restrictive than allowing the entire application subnet.
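The rule in option A amounts to a single ingress permission that references the application servers' security group as the source. A minimal boto3 sketch with placeholder group IDs:

import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs: the first is the DB instance's security group, the second is
# the group named sg-application-servers in the question.
DB_SECURITY_GROUP_ID = "sg-0123456789abcdef0"
APP_SECURITY_GROUP_ID = "sg-0fedcba9876543210"

ec2.authorize_security_group_ingress(
    GroupId=DB_SECURITY_GROUP_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,   # MySQL listener port
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": APP_SECURITY_GROUP_ID}],
        }
    ],
)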

Question#97

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company's network bandwidth is available.
How should the company perform this data load?

  • A. Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
  • B. Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
  • C. Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
  • D. Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
Answer: C
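The Neptune bulk loader referenced in option C is invoked by sending a load request to the cluster's loader endpoint, which reads the data from S3 through the VPC endpoint. A minimal sketch with the requests library, where the cluster endpoint, S3 URI, IAM role, and data format are assumptions:

import requests

# Hypothetical cluster endpoint and load parameters.
LOADER_ENDPOINT = (
    "https://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader"
)

response = requests.post(
    LOADER_ENDPOINT,
    json={
        "source": "s3://example-fraud-data/export/",
        "format": "csv",   # Gremlin CSV; RDF formats are also supported
        "iamRoleArn": "arn:aws:iam::111122223333:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "FALSE",
        "parallelism": "HIGH",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())   # returns a loadId that can be polled for load status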

Question#98

A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.
Which approach meets these requirements?

  • A. Set the max_connections parameter to 16,000 in the instance-level parameter group.
  • B. Modify the client connection timeout to 300 seconds.
  • C. Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.
  • D. Enable the query cache at the instance level.
Answer: C
Amazon RDS Proxy maintains a pool of established database connections and redirects them to the new writer after a failover, which shortens application recovery time.
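Once the proxy exists, the only application change is the host name: clients connect to the proxy endpoint instead of the cluster endpoint. A minimal connection sketch with pymysql, using placeholder values; credentials would normally come from Secrets Manager or IAM authentication:

import pymysql

conn = pymysql.connect(
    host="app-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",  # hypothetical RDS Proxy endpoint
    user="app_user",
    password="example-password",
    database="appdb",
    connect_timeout=5,
)

with conn.cursor() as cur:
    cur.execute("SELECT 1")   # the proxy pools and reuses this connection across failovers
    print(cur.fetchone())

conn.close()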

Question#99

A company is using an Amazon RDS for MySQL DB instance for its internal applications. A security audit shows that the DB instance is not encrypted at rest. The company's application team needs to encrypt the DB instance.
What should the team do to meet this requirement?

  • A. Stop the DB instance and modify it to enable encryption. Apply this setting immediately without waiting for the next scheduled RDS maintenance window.
  • B. Stop the DB instance and create an encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
  • C. Stop the DB instance and create a snapshot. Copy the snapshot into another encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
  • D. Create an encrypted read replica of the DB instance. Promote the read replica to master. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
Answer: C
Encryption at rest cannot be enabled on an existing unencrypted DB instance; the supported approach is to copy a snapshot with encryption enabled and restore it to a new encrypted instance.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
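The sequence in option C maps to three RDS API calls: snapshot, copy with a KMS key, and restore. A minimal boto3 sketch with placeholder identifiers and instance class:

import boto3

rds = boto3.client("rds")

# Placeholder identifiers and KMS key.
SOURCE_INSTANCE = "internal-apps-db"
PLAIN_SNAPSHOT = "internal-apps-db-snap"
ENCRYPTED_SNAPSHOT = "internal-apps-db-snap-encrypted"
NEW_INSTANCE = "internal-apps-db-encrypted"
KMS_KEY_ID = "alias/aws/rds"

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(DBSnapshotIdentifier=PLAIN_SNAPSHOT,
                       DBInstanceIdentifier=SOURCE_INSTANCE)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=PLAIN_SNAPSHOT)

# 2. Copy the snapshot with encryption enabled.
rds.copy_db_snapshot(SourceDBSnapshotIdentifier=PLAIN_SNAPSHOT,
                     TargetDBSnapshotIdentifier=ENCRYPTED_SNAPSHOT,
                     KmsKeyId=KMS_KEY_ID)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=ENCRYPTED_SNAPSHOT)

# 3. Restore the encrypted snapshot to a new, encrypted DB instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier=NEW_INSTANCE,
    DBSnapshotIdentifier=ENCRYPTED_SNAPSHOT,
    DBInstanceClass="db.m5.large",
)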

Question#100

A database specialist must create nightly backups of an Amazon DynamoDB table in a mission-critical workload as part of a disaster recovery strategy.
Which backup methodology should the database specialist use to MINIMIZE management overhead?

  • A. Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that runs the command on a nightly basis.
  • B. Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that runs the Lambda function on a nightly basis.
  • C. Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.
  • D. Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.
Answer: C
AWS Backup lets the specialist define a backup plan once, with a 24-hour frequency and a nightly backup window, and then runs the schedule automatically with no scripts or functions to maintain. DynamoDB on-demand backups (option D) have no built-in scheduler, so they would still require custom automation. Backups of DynamoDB tables complete quickly regardless of table size, have no impact on table performance or availability, are automatically encrypted, and are retained according to the plan's lifecycle settings.
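A backup plan like the one in option C can be defined in two API calls and then runs without further scheduling code. A minimal boto3 sketch, where the vault, IAM role, table name, schedule, and retention are placeholders:

import boto3

backup = boto3.client("backup")

# 1. Create a plan with a nightly rule (hypothetical schedule and retention).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-nightly",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # every night at 05:00 UTC
                "StartWindowMinutes": 60,                   # nightly backup window
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# 2. Assign the DynamoDB table to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "critical-table",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:dynamodb:us-east-1:111122223333:table/MissionCriticalTable"],
    },
)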
