AWS Certified Database - Specialty
Questions 41-50 of 262
Question#41

A database specialist manages a critical Amazon RDS for MySQL DB instance for a company. The amount of data stored daily can vary from 0.01% to 10% of the current database size. The database specialist needs to ensure that the DB instance storage grows as needed.
What is the MOST operationally efficient and cost-effective solution?

  • A. Configure RDS Storage Auto Scaling.
  • B. Configure RDS instance Auto Scaling.
  • C. Modify the DB instance allocated storage to meet the forecasted requirements.
  • D. Monitor the Amazon CloudWatch FreeStorageSpace metric daily and add storage as required.
Answer: A

RDS Storage Auto Scaling automatically increases the allocated storage when free space runs low, so the instance grows on demand without manual monitoring or over-provisioning. There is no "RDS instance Auto Scaling" feature for storage.
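
For reference, a minimal boto3 sketch of enabling Storage Auto Scaling by setting a maximum storage ceiling; the instance identifier and the 1,024 GiB ceiling are hypothetical values:

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables RDS Storage Auto Scaling: RDS grows
# the allocated storage automatically as free space runs low, up to this
# ceiling. "mysql-prod-db" and 1024 GiB are placeholder values.
rds.modify_db_instance(
    DBInstanceIdentifier="mysql-prod-db",
    MaxAllocatedStorage=1024,  # GiB; must exceed current allocated storage
    ApplyImmediately=True,
)
```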

Question#42

A company's database license is due for renewal, and the company wants to migrate its 80 TB transactional database system from on-premises to the AWS Cloud.
The migration should incur the least possible downtime for the downstream database applications. The company's network infrastructure has limited bandwidth that is shared with other applications.
Which solution should a database specialist use for a timely migration?

  • A. Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Use AWS DMS to migrate change data capture (CDC) data from the source database to Amazon S3. Use a second AWS DMS task to migrate all the S3 data to the target database.
  • B. Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Periodically perform incremental backups of the source database to be shipped in another Snowball Edge appliance to handle syncing change data capture (CDC) data from the source to the target database.
  • C. Use AWS DMS to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS DMS to handle syncing change data capture (CDC) data from the source to the target database.
  • D. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS SCT to handle syncing change data capture (CDC) data from the source to the target database.
Answer: A

Snowball Edge moves the 80 TB full load offline, bypassing the limited shared network bandwidth, while AWS DMS replicates ongoing change data capture (CDC) traffic to keep downtime minimal. AWS SCT converts schemas; it does not perform CDC replication, which rules out option D.
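
A rough boto3 sketch of the CDC leg of answer A, assuming the Snowball full load has already shipped and that hypothetical DMS endpoints and a replication instance exist (all ARNs are placeholders):

```python
import json

import boto3

dms = boto3.client("dms")

# A CDC-only task, so that only changes captured after the Snowball full
# load travel over the constrained shared network link.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="cdc-after-snowball",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",
    MigrationType="cdc",
    TableMappings=json.dumps(table_mappings),
)
```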

Question#43

A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against the read replica. The team wants to create additional tables in the read replica that will be accessible only from the replica, to support the tests.
What should the database specialist do to allow the database team to create the test tables?

  • A. Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.
  • B. Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.
  • C. Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.
  • D. Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.
Answer: D

Default parameter groups cannot be modified, which rules out options B and C. The specialist must create a custom DB parameter group, set read_only to 0 in it, associate it with the read replica, and reboot the replica.
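
A minimal boto3 sketch of answer D; the group name, engine family, and replica identifier are hypothetical and must match the actual replica's engine version:

```python
import boto3

rds = boto3.client("rds")
replica_id = "mysql-read-replica"  # hypothetical identifier

# 1. Default parameter groups are immutable, so create a custom group.
rds.create_db_parameter_group(
    DBParameterGroupName="replica-writable",
    DBParameterGroupFamily="mysql8.0",  # must match the replica's engine
    Description="Read replica with read_only disabled for test tables",
)

# 2. Disable read_only; the change applies at the next reboot.
rds.modify_db_parameter_group(
    DBParameterGroupName="replica-writable",
    Parameters=[{
        "ParameterName": "read_only",
        "ParameterValue": "0",
        "ApplyMethod": "pending-reboot",
    }],
)

# 3. Associate the group with the replica and reboot to apply it.
rds.modify_db_instance(
    DBInstanceIdentifier=replica_id,
    DBParameterGroupName="replica-writable",
    ApplyImmediately=True,
)
rds.reboot_db_instance(DBInstanceIdentifier=replica_id)
```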

Question#44

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.
Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

  • A. Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.
  • B. Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.
  • C. Create additional readers to cater to the different scenarios.
  • D. Use custom endpoints to satisfy the different workloads.
Answer: D

Aurora custom endpoints let you group specific instances, so the HR reporting traffic can be pinned to the two small nodes while every other workload uses the four large nodes. The generic reader endpoint load-balances across all replicas and cannot make this distinction.
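
A short boto3 sketch of answer D, creating a custom reader endpoint pinned to the two small nodes; all identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# HR reporting connects to this endpoint's DNS name and always lands on
# the two small readers; OLTP keeps using the cluster writer endpoint.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="prod-aurora-cluster",
    DBClusterEndpointIdentifier="hr-reporting",
    EndpointType="READER",
    StaticMembers=["aurora-small-1", "aurora-small-2"],
)
```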

Question#45

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready, and the user credentials have been given to the developers. The developers indicate that their COPY jobs fail with the following error message:
`Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.`
The developers need to load this data soon, so a database specialist must act quickly to solve this issue.
What is the MOST secure solution?

  • A. Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.
  • B. Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.
  • C. Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.
  • D. Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.
Answer: B

Attaching an IAM role with read-only S3 access to the Redshift cluster lets COPY run with temporary credentials and no long-lived access keys, which makes it the most secure option. Roles attached to the developers' IAM users (options A and C) are not used by the cluster-side COPY operation, and access keys (option D) are less secure.
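
A minimal sketch of answer B with boto3; the cluster identifier and role ARN are hypothetical, and the role must trust redshift.amazonaws.com:

```python
import boto3

redshift = boto3.client("redshift")

# Attach an IAM role with read-only access to the marketing bucket so
# COPY runs with temporary credentials instead of stored access keys.
redshift.modify_cluster_iam_roles(
    ClusterIdentifier="marketing-cluster",
    AddIamRoles=["arn:aws:iam::123456789012:role/RedshiftS3ReadOnly"],
)

# The developers' COPY jobs then reference the role, for example:
#   COPY marketing_data FROM 's3://marketing-bucket/prefix/'
#   IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3ReadOnly' CSV;
```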

Question#46

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.
Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

  • A. Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.
  • B. Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.
  • C. Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.
  • D. Create an AWS Backup plan and assign the DynamoDB table as a resource.
Answer: B

A DynamoDB global table replicates writes to the second Region in near real time, comfortably meeting the 1-minute RTO and 2-minute RPO, and point-in-time recovery protects both tables. DynamoDB Accelerator (DAX) is an in-memory cache, not a disaster recovery mechanism, and backup-based options cannot meet a 1-minute RTO.
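
A minimal boto3 sketch of answer B, assuming a hypothetical table name and Regions, and global tables version 2019.11.21:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in a second Region, turning the table into a global table.
ddb.update_table(
    TableName="app-data",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Enable point-in-time recovery on the source table; repeat against the
# replica Region for the replica table.
ddb.update_continuous_backups(
    TableName="app-data",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```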

Question#47

A small startup company is looking to migrate a 4 TB on-premises MySQL database to AWS using an Amazon RDS for MySQL DB instance.
Which strategy would allow for a successful migration with the LEAST amount of downtime?

  • A. Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.
  • B. Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.
  • C. Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.
  • D. Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises MySQL server and let the remaining transactions replicate over. Point the application to the DB instance.
Answer: D

Seeding the DB instance from a mysqldump snapshot and then using MySQL replication to stream the remaining changes allows a cutover with only a brief pause in writes. Option A loses any writes made after the snapshot, and options B and C add unnecessary intermediate steps without reducing downtime.
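
A hedged sketch of the replication step in answer D, using the RDS-provided stored procedures; the endpoint, credentials, and binlog coordinates (taken from the mysqldump output) are all hypothetical, and pymysql is assumed to be installed:

```python
import pymysql

# Connect to the new RDS for MySQL DB instance (placeholder values).
conn = pymysql.connect(
    host="target.xxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="REPLACE_ME",
)

with conn.cursor() as cur:
    # Point the RDS instance at the on-premises source using the binlog
    # file and position recorded when the mysqldump snapshot was taken.
    cur.execute(
        "CALL mysql.rds_set_external_master("
        "'onprem.example.com', 3306, 'repl_user', 'repl_pass', "
        "'mysql-bin.000042', 154, 0)"
    )
    # Start replication; once lag reaches zero, stop writes on the source,
    # let the last transactions drain, and repoint the application.
    cur.execute("CALL mysql.rds_start_replication")
```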

Question#48

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.
Which solution meets these requirements?

  • A. Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.
  • B. Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.
  • C. Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.
  • D. Change the DB clusters to the burstable instance family.
Answer: A

Aurora Serverless automatically scales capacity up and down with the unpredictable demand and bills only for capacity consumed, addressing both the latency spikes and the cost concern without significant rework.
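
A minimal boto3 sketch of answer A for one of the clusters, with hypothetical identifiers, credentials, and capacity bounds (Aurora Serverless v1 semantics):

```python
import boto3

rds = boto3.client("rds")

# Serverless capacity scales between the ACU bounds with demand and can
# pause entirely when the ad-hoc users are idle, so the company pays
# only for what the sporadic workloads actually consume.
rds.create_db_cluster(
    DBClusterIdentifier="dev-reporting-serverless",
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 16,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,  # pause after 5 idle minutes
    },
)
```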

Question#49

A database specialist is building a system that uses a static vendor dataset of postal codes and related territory information that is less than 1 GB in size. The dataset is loaded into the application's cache at startup. The company needs to store this data in a way that provides the lowest cost with a low application startup time.
Which approach will meet these requirements?

  • A. Use an Amazon RDS DB instance. Shut down the instance once the data has been read.
  • B. Use Amazon Aurora Serverless. Allow the service to spin resources up and down, as needed.
  • C. Use Amazon DynamoDB in on-demand capacity mode.
  • D. Use Amazon S3 and load the data from flat files.
Answer: D

Amazon S3 is the lowest-cost store for a static dataset under 1 GB, and flat files can be read into the application cache quickly at startup, with no database instance to provision or pay for.
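
A minimal sketch of answer D, loading the flat file into the in-memory cache at startup; the bucket, key, and column names are hypothetical:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

def load_postal_codes(bucket: str, key: str) -> dict:
    """Read the static postal-code CSV from S3 into an in-memory dict."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    reader = csv.DictReader(io.StringIO(body.decode("utf-8")))
    return {row["postal_code"]: row["territory"] for row in reader}

# A sub-1 GB object downloads quickly at startup, and S3 storage costs
# far less than keeping a database running for static reference data.
cache = load_postal_codes("vendor-data-bucket", "postal_codes.csv")
```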

Question#50

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.
How can this solution be implemented?

  • A. Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.
  • B. Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.
  • C. Use the AWS CLI to update the DynamoDB table and modify the partition key.
  • D. Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.
Answer: A

A partition key cannot be changed on an existing DynamoDB table, which rules out options C and D. The data must be exported, for example with Amazon EMR, to Amazon S3 and reimported into a new table keyed on the new partition key.
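
To illustrate the copy-into-a-new-table pattern behind answer A at small scale, a boto3 sketch; the table names, attributes, and composite-key derivation are hypothetical, and at production scale the EMR export/import parallelizes this same scan-and-rewrite:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
old_table = dynamodb.Table("orders")      # table with the hot partition key
new_table = dynamodb.Table("orders-v2")   # pre-created with the new key

scan_kwargs = {}
while True:
    page = old_table.scan(**scan_kwargs)
    with new_table.batch_writer() as batch:
        for item in page["Items"]:
            # Derive the new, higher-cardinality partition key.
            item["shard_key"] = f"{item['customer_id']}#{item['order_date']}"
            batch.put_item(Item=item)
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```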
