AWS Certified Database - Specialty
Questions 1-10 out of 262 questions
Question#1

A retail company is about to migrate its online and mobile store to AWS. The company's CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead.
What should the Database Specialist do to meet these requirements?

  • A. Use Amazon DynamoDB global tables to synchronize transactions
  • B. Use Amazon EMR to copy the orders table data across Regions
  • C. Use Amazon Aurora Global Database to synchronize all transactions
  • D. Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them
Answer: A
Reference:
https://aws.amazon.com/dynamodb/
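
For context, a global table takes only a handful of API calls to set up. The sketch below is a minimal illustration in Python/boto3, assuming a hypothetical orders table and the us-east-1/eu-west-1 Region pair: it creates the table with streams enabled and then adds a replica Region, which is what turns it into a global table (version 2019.11.21).

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Create the source table. A stream with NEW_AND_OLD_IMAGES is required
    # before another Region can be added as a replica.
    ddb.create_table(
        TableName="orders",  # hypothetical table name
        AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
        StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
    )
    ddb.get_waiter("table_exists").wait(TableName="orders")

    # Adding a replica Region converts the table into a global table.
    ddb.update_table(
        TableName="orders",
        ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
    )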

Question#2

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?

  • A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
  • B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
  • C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
  • D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp multipart upload command to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.
Answer: B
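
The deciding factor is the link speed: 100 TB over a 500 Mbps connection does not fit in the 2-week maintenance window, which rules out the network-transfer options and points to AWS DMS with Snowball Edge devices. A quick back-of-the-envelope check (plain Python, no AWS calls):

    # 100 TB payload over a 500 Mbps link, assuming 100% sustained utilization
    payload_bytes = 100 * 10**12          # 100 TB (decimal)
    link_bytes_per_sec = 500 * 10**6 / 8  # 500 Mbps = 62.5 MB/s
    days = payload_bytes / link_bytes_per_sec / 86400
    print(f"~{days:.1f} days at full line rate")  # ~18.5 days, longer than the 2-week window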

Question#3

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company's Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data using the largest replication instance.
How should the Database Specialist optimize the database migration using AWS DMS?

  • A. Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together
  • B. Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs
  • C. Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task2 without LOBs
  • D. Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
Answer: C
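
In AWS DMS, LOB handling is controlled through the task settings JSON; LobMaxSize is expressed in KB, so a 500 MB ceiling is 512,000. A minimal sketch of what the separate LOB task could look like in Python/boto3 (every ARN, identifier, and schema/table pattern below is a placeholder):

    import json
    import boto3

    dms = boto3.client("dms", region_name="us-east-1")

    # Limited LOB mode with a 500 MB ceiling (LobMaxSize is in KB).
    task_settings = {
        "TargetMetadata": {
            "SupportLobs": True,
            "FullLobMode": False,
            "LimitedSizeLobMode": True,
            "LobMaxSize": 512000,
        }
    }

    # Selection rule covering only the LOB-bearing tables (hypothetical names).
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "lob-tables",
            "object-locator": {"schema-name": "APP", "table-name": "%LOB%"},
            "rule-action": "include",
        }]
    }

    dms.create_replication_task(
        ReplicationTaskIdentifier="task1-lob-tables",
        SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
        TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
        ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
        MigrationType="full-load",
        TableMappings=json.dumps(table_mappings),
        ReplicationTaskSettings=json.dumps(task_settings),
    )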

Question#4

A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.
To prepare the new table with identical settings, which steps should be performed? (Choose two.)

  • A. Re-create global secondary indexes in the new table
  • B. Define IAM policies for access to the new table
  • C. Define the TTL settings
  • D. Encrypt the table from the AWS Management Console or use the update-table command
  • E. Set the provisioned read and write capacity
Answer: B, C
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
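
Settings such as TTL are not carried over by an on-demand backup restore and have to be reapplied, along with account-level items such as IAM policies, CloudWatch alarms, auto scaling policies, and tags. A minimal sketch in Python/boto3 (table name, backup ARN, and TTL attribute are hypothetical):

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Restore the latest backup into a new table.
    ddb.restore_table_from_backup(
        TargetTableName="orders-dr",  # hypothetical table name
        BackupArn="arn:aws:dynamodb:us-east-1:123456789012:table/orders/backup/01...",  # placeholder
    )
    # A large restore can take a while; wait for the table to become ACTIVE.
    ddb.get_waiter("table_exists").wait(TableName="orders-dr")

    # TTL is not part of the backup and must be re-enabled on the new table.
    ddb.update_time_to_live(
        TableName="orders-dr",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},  # hypothetical attribute
    )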

Question#5

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?

  • A. Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
  • B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
  • C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
  • D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.
Answer: A
Reference:
https://aws.amazon.com/blogs/mt/aws-cloudformation-signed-sealed-and-deployed/
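
In practice, environment settings live under a path hierarchy in Parameter Store, the shared template reads them through dynamic references or template parameters, and the same stack is deployed per environment by passing the environment name. A rough sketch in Python/boto3 (all parameter paths, names, and the template URL are hypothetical):

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")
    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Hierarchical, environment-specific parameters (hypothetical path layout).
    ssm.put_parameter(Name="/appteam/dev/table-read-capacity", Value="5",
                      Type="String", Overwrite=True)
    ssm.put_parameter(Name="/appteam/prod/table-read-capacity", Value="50",
                      Type="String", Overwrite=True)

    # Deploy the shared template, selecting the environment by name; the template
    # resolves its settings from the matching Parameter Store path.
    cfn.create_stack(
        StackName="appteam-dynamodb-dev",
        TemplateURL="https://s3.amazonaws.com/appteam-templates/dynamodb.yaml",  # hypothetical
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
    )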

Question#6

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.
What should the company do to address this space constraint issue?

  • A. Log in to the host and run the rm $PGDATA/pg_logs/* command
  • B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
  • C. Create a ticket with AWS Support to have the logs deleted
  • D. Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs
Answer: B
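
The rds.log_retention_period parameter is set in the instance's custom DB parameter group and is expressed in minutes, so 1440 keeps logs for 24 hours before RDS deletes them. A minimal sketch in Python/boto3 (the parameter group name is hypothetical):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Lower log retention to 1440 minutes (24 hours); the parameter is dynamic,
    # so it can be applied immediately without a reboot.
    rds.modify_db_parameter_group(
        DBParameterGroupName="oltp-postgres-params",  # hypothetical custom parameter group
        Parameters=[{
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",
            "ApplyMethod": "immediate",
        }],
    )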

Question#7

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.
What should a Database Specialist recommend for this user?

  • A. Create an Amazon DynamoDB table with provisioned capacity mode
  • B. Create an Amazon DocumentDB cluster
  • C. Create an Amazon DynamoDB table with on-demand capacity mode
  • D. Create an Amazon Aurora Serverless DB cluster
Answer: C
Reference:
https://aws.amazon.com/dynamodb/
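
On-demand capacity can be chosen at table creation or switched on later. A minimal sketch in Python/boto3 (hypothetical table name) that flips an existing table from provisioned to on-demand billing:

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Switch the table to on-demand (pay-per-request) billing so capacity
    # management is handled by DynamoDB and unpredictable traffic is absorbed.
    ddb.update_table(
        TableName="kv-store",  # hypothetical table name
        BillingMode="PAY_PER_REQUEST",
    )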

Question#8

A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.
Which solution meets these requirements?

  • A. Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
  • B. Use Amazon Aurora for storage and enable cross-Region Aurora Replicas
  • C. Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache
  • D. Use Amazon Neptune for storage
Answer: A
Reference:
https://aws.amazon.com/blogs/database/how-to-use-amazon-dynamodb-global-tables-to-power-multiregion-architectures/
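
Alongside the global table itself (see the sketch under Question#1), the automatic scaling part of answer A is configured through Application Auto Scaling when the table uses provisioned capacity. A rough sketch in Python/boto3 (table name and capacity bounds are hypothetical):

    import boto3

    aas = boto3.client("application-autoscaling", region_name="us-east-1")

    # Register the table's write capacity as a scalable target (hypothetical bounds).
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/game-sessions",  # hypothetical table
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=50,
        MaxCapacity=40000,
    )

    # Target-tracking policy that keeps write utilization near 70%.
    aas.put_scaling_policy(
        PolicyName="game-sessions-wcu-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/game-sessions",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )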

Question#9

A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.
How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

  • A. Set the TCP keepalive parameters low
  • B. Call the AWS CLI failover-db-cluster command
  • C. Enable Enhanced Monitoring on the DB cluster
  • D. Start a database activity stream on the DB cluster
Answer: A
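
Aggressive client-side TCP keepalive and timeout settings are what let the application notice a dead primary quickly and reconnect to the new writer through the cluster endpoint. A minimal sketch of what that can look like with the psycopg2 driver (an assumption; the endpoint and credentials are placeholders):

    import psycopg2

    # Connect through the cluster (writer) endpoint so a reconnect after failover
    # lands on the newly promoted instance. Keepalive values are deliberately low
    # so a dead connection is detected within a few seconds.
    conn = psycopg2.connect(
        host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        port=5432,
        dbname="appdb",
        user="app_user",
        password="REPLACE_ME",
        connect_timeout=3,      # fail fast if the endpoint is not reachable
        keepalives=1,           # enable TCP keepalives
        keepalives_idle=1,      # seconds of idle before the first probe
        keepalives_interval=1,  # seconds between probes
        keepalives_count=5,     # failed probes before the connection is declared dead
    )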

Question#10

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.
Which approach should the Database Specialist take?

  • A. Dump all the tables from the Oracle database into an Amazon S3 bucket using Oracle Data Pump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
  • B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
  • C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
  • D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.
Answer: C
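
The near-zero-downtime piece of answer C is DMS change data capture: the task performs the bulk load and then keeps applying ongoing changes until the application is cut over. A rough sketch of the task definition in Python/boto3 (ARNs and identifiers are placeholders; the schema conversion itself is done beforehand with AWS SCT):

    import json
    import boto3

    dms = boto3.client("dms", region_name="us-east-1")

    # Full load first, then ongoing replication (CDC), so the Oracle source
    # stays in use until cutover.
    task = dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-aurora-mysql",
        SourceEndpointArn="arn:aws:dms:...:endpoint:ORACLE-SRC",  # placeholder
        TargetEndpointArn="arn:aws:dms:...:endpoint:AURORA-TGT",  # placeholder
        ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",    # placeholder
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "all-tables",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )

    # In practice, wait for the task to reach the "ready" state before starting it.
    dms.start_replication_task(
        ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
        StartReplicationTaskType="start-replication",
    )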
