Professional Data Engineer on Google Cloud Platform
Questions 71-80
Question#71

You are managing a Cloud Dataproc cluster. You need to make a job run faster while minimizing costs, without losing work in progress on your clusters. What should you do?

  • A. Increase the cluster size with more non-preemptible workers.
  • B. Increase the cluster size with preemptible worker nodes, and configure them to forcefully decommission.
  • C. Increase the cluster size with preemptible worker nodes, and use Cloud Stackdriver to trigger a script to preserve work.
  • D. Increase the cluster size with preemptible worker nodes, and configure them to use graceful decommissioning.
Answer: D
Reference:
https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/flex
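Preemptible secondary workers add cheap capacity, and a graceful-decommission timeout lets YARN finish in-flight work on nodes before they are removed. A minimal sketch using the google-cloud-dataproc Python client; the project, region, cluster name, and worker count are placeholder assumptions:

```python
# Sketch: resize a Dataproc cluster with preemptible secondary workers and set
# a graceful-decommission timeout so running tasks drain before nodes go away.
from google.cloud import dataproc_v1

region = "us-central1"  # placeholder
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

operation = client.update_cluster(
    request={
        "project_id": "my-project",     # placeholder
        "region": region,
        "cluster_name": "my-cluster",   # placeholder
        # Preemptible VMs are Dataproc "secondary workers".
        "cluster": {"config": {"secondary_worker_config": {"num_instances": 10}}},
        "update_mask": {"paths": ["config.secondary_worker_config.num_instances"]},
        # Allow up to 1 hour for in-flight work to finish on removed nodes.
        "graceful_decommission_timeout": {"seconds": 3600},
    }
)
operation.result()  # wait for the resize to complete
```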

Question#72

You work for a shipping company that uses handheld scanners to read shipping labels. Your company has strict data privacy standards that require scanners to only transmit tracking numbers when events are sent to Kafka topics. A recent software update caused the scanners to accidentally transmit recipients' personally identifiable information (PII) to analytics systems, which violates user privacy rules. You want to quickly build a scalable solution using cloud-native managed services to prevent exposure of PII to the analytics systems. What should you do?

  • A. Create an authorized view in BigQuery to restrict access to tables with sensitive data.
  • B. Install a third-party data validation tool on Compute Engine virtual machines to check the incoming data for sensitive information.
  • C. Use Cloud Logging to analyze the data passed through the total pipeline to identify transactions that may contain sensitive information.
  • D. Build a Cloud Function that reads the topics and makes a call to the Cloud Data Loss Prevention (Cloud DLP) API. Use the tagging and confidence levels to either pass or quarantine the data in a bucket for review.
Answer: D
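A hedged sketch of the Cloud Function in option D: it decodes each Pub/Sub message, asks the Cloud DLP API to inspect it, and quarantines anything with likely PII findings in a Cloud Storage bucket for review. The infoTypes, project, and bucket names are illustrative assumptions, not values from the question.

```python
import base64
import logging

from google.cloud import dlp_v2, storage

dlp = dlp_v2.DlpServiceClient()
gcs = storage.Client()

def check_message(event, context):
    """Pub/Sub-triggered Cloud Function: inspect a message for PII."""
    payload = base64.b64decode(event["data"]).decode("utf-8")

    response = dlp.inspect_content(
        request={
            "parent": "projects/my-project",  # placeholder
            "inspect_config": {
                # Illustrative infoTypes; tune to the PII you must block.
                "info_types": [{"name": "PERSON_NAME"}, {"name": "STREET_ADDRESS"}],
                "min_likelihood": dlp_v2.Likelihood.LIKELY,
            },
            "item": {"value": payload},
        }
    )

    if response.result.findings:
        # Likely PII: quarantine the raw message in a review bucket.
        gcs.bucket("quarantine-bucket").blob(context.event_id).upload_from_string(payload)
        logging.warning("Quarantined message %s", context.event_id)
    # else: clean message, forward to analytics (not shown)
```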

Question#73

You have developed three data processing jobs. One executes a Cloud Dataflow pipeline that transforms data uploaded to Cloud Storage and writes results to
BigQuery. The second ingests data from on-premises servers and uploads it to Cloud Storage. The third is a Cloud Dataflow pipeline that gets information from third-party data providers and uploads the information to Cloud Storage. You need to be able to schedule and monitor the execution of these three workflows and manually execute them when needed. What should you do?

  • A. Create a Directed Acyclic Graph (DAG) in Cloud Composer to schedule and monitor the jobs.
  • B. Use Stackdriver Monitoring and set up an alert with a Webhook notification to trigger the jobs.
  • C. Develop an App Engine application to schedule and request the status of the jobs using GCP API calls.
  • D. Set up cron jobs in a Compute Engine instance to schedule and monitor the pipelines using GCP API calls.
Answer: A
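Cloud Composer fits because a single DAG gives you scheduling, a monitoring UI, and on-demand manual runs in one managed service. A minimal Airflow sketch, with BashOperator stand-ins for the real Dataflow and upload launchers (all task commands are placeholders):

```python
from datetime import datetime

from airflow import models
from airflow.operators.bash import BashOperator

with models.DAG(
    dag_id="three_job_workflow",
    schedule_interval="@daily",        # also runnable manually from the UI/CLI
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    onprem_ingest = BashOperator(
        task_id="onprem_to_gcs",
        bash_command="python run_onprem_upload.py",          # placeholder
    )
    third_party_ingest = BashOperator(
        task_id="third_party_to_gcs",
        bash_command="python launch_third_party_pipeline.py",  # placeholder
    )
    dataflow_transform = BashOperator(
        task_id="dataflow_gcs_to_bq",
        bash_command="python launch_transform_pipeline.py",   # placeholder
    )

    # Ingest first, then transform into BigQuery.
    [onprem_ingest, third_party_ingest] >> dataflow_transform
```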

Question#74

You have Cloud Functions written in Node.js that pull messages from Cloud Pub/Sub and send the data to BigQuery. You observe that the message processing rate on the Pub/Sub topic is orders of magnitude higher than anticipated, but there is no error logged in Cloud Logging. What are the two most likely causes of this problem? (Choose two.)

  • A. Publisher throughput quota is too small.
  • B. Total outstanding messages exceed the 10-MB maximum.
  • C. Error handling in the subscriber code is not handling run-time errors properly.
  • D. The subscriber code cannot keep up with the messages.
  • E. The subscriber code does not acknowledge the messages that it pulls.
Answer: C, E
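Both causes produce the same symptom: unacknowledged messages are redelivered, inflating the apparent processing rate without any logged error. A minimal Python subscriber sketch illustrating the fix (the Node.js code in the question would follow the same pattern; process() and all names are placeholders):

```python
import logging

from google.cloud import pubsub_v1

def process(data):
    """Placeholder for the real work (e.g. writing the row to BigQuery)."""

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

def callback(message):
    try:
        process(message.data)
        message.ack()  # answer E: without this ack, Pub/Sub keeps redelivering
    except Exception:
        # answer C: don't swallow runtime errors silently; log them and nack
        # so the redelivery is at least visible in Cloud Logging.
        logging.exception("failed to process message")
        message.nack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
streaming_pull.result()  # keep the process alive while messages stream in
```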

Question#75

You are creating a new pipeline in Google Cloud to stream IoT data from Cloud Pub/Sub through Cloud Dataflow to BigQuery. While previewing the data, you notice that roughly 2% of the data appears to be corrupt. You need to modify the Cloud Dataflow pipeline to filter out this corrupt data. What should you do?

  • A. Add a SideInput that returns a Boolean if the element is corrupt.
  • B. Add a ParDo transform in Cloud Dataflow to discard corrupt elements.
  • C. Add a Partition transform in Cloud Dataflow to separate valid data from corrupt data.
  • D. Add a GroupByKey transform in Cloud Dataflow to group all of the valid data together and discard the rest.
Answer: B
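A ParDo is the natural element-wise hook for dropping bad records. A self-contained Beam sketch, assuming corrupt elements are those that fail JSON parsing (the corruption check itself is an assumption):

```python
import json

import apache_beam as beam

class DiscardCorrupt(beam.DoFn):
    def process(self, element):
        try:
            record = json.loads(element)  # corrupt payloads fail to parse
        except (ValueError, TypeError):
            return                         # emit nothing: element is discarded
        yield record

with beam.Pipeline() as pipeline:
    (pipeline
     | "Read" >> beam.Create(['{"id": 1}', "not-json"])  # stand-in for Pub/Sub
     | "DropCorrupt" >> beam.ParDo(DiscardCorrupt())
     | "Print" >> beam.Map(print))
```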

Question#76

You have historical data covering the last three years in BigQuery and a data pipeline that delivers new data to BigQuery daily. You have noticed that when the
Data Science team runs a query filtered on a date column and limited to 30-90 days of data, the query scans the entire table. You also noticed that your bill is increasing more quickly than you expected. You want to resolve the issue as cost-effectively as possible while maintaining the ability to conduct SQL queries.
What should you do?

  • A. Re-create the tables using DDL. Partition the tables by a column containing a TIMESTAMP or DATE type.
  • B. Recommend that the Data Science team export the table to a CSV file on Cloud Storage and use Cloud Datalab to explore the data by reading the files directly.
  • C. Modify your pipeline to maintain the last 30-90 days of data in one table and the longer history in a different table to minimize full table scans over the entire history.
  • D. Write an Apache Beam pipeline that creates a BigQuery table per day. Recommend that the Data Science team use wildcards on the table name suffixes to select the data they need.
Answer: A
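Partitioning on the date column lets BigQuery prune to just the 30-90 days a query filters on instead of scanning three years of history. A minimal sketch of the DDL, run through the BigQuery Python client; dataset, table, and column names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE TABLE my_dataset.events_partitioned
PARTITION BY DATE(event_timestamp)  -- date-filtered queries prune partitions
AS
SELECT * FROM my_dataset.events
"""
client.query(ddl).result()  # wait for the DDL job to finish
```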

Question#77

You operate a logistics company, and you want to improve event delivery reliability for vehicle-based sensors. You operate small data centers around the world to capture these events, but leased lines that provide connectivity from your event collection infrastructure to your event processing infrastructure are unreliable, with unpredictable latency. You want to address this issue in the most cost-effective way. What should you do?

  • A. Deploy small Kafka clusters in your data centers to buffer events.
  • B. Have the data acquisition devices publish data to Cloud Pub/Sub.
  • C. Establish a Cloud Interconnect between all remote data centers and Google.
  • D. Write a Cloud Dataflow pipeline that aggregates all data in session windows.
Answer: B
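Publishing directly to Cloud Pub/Sub moves buffering, retry, and durable delivery onto the managed service, so unreliable leased lines stop dropping events and no per-site Kafka clusters are needed. A minimal publisher sketch; project, topic, payload, and attribute names are placeholders:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "vehicle-events")

# The client library batches and retries publishes automatically, which is
# what makes flaky links survivable.
future = publisher.publish(
    topic_path,
    data=b'{"vehicle_id": "v-42", "event": "door_open"}',
    datacenter="fra-01",  # optional string attributes for routing/filtering
)
print(future.result())  # message ID once Pub/Sub has durably accepted it
```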

Question#78

You are a retailer that wants to integrate your online sales capabilities with different in-home assistants, such as Google Home. You need to interpret customer voice commands and issue an order to the backend systems. Which solution should you choose?

  • A. Speech-to-Text API
  • B. Cloud Natural Language API
  • C. Dialogflow Enterprise Edition
  • D. AutoML Natural Language
Answer: C
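Dialogflow combines speech handling with the intent and entity extraction needed to turn a voice command into a backend order, which plain Speech-to-Text or Natural Language alone does not give you. A minimal sketch sending an already-transcribed command to an agent with the Dialogflow Python client; the project, session ID, and intent names are placeholders:

```python
from google.cloud import dialogflow

sessions = dialogflow.SessionsClient()
session = sessions.session_path("my-project", "session-123")  # placeholders

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="order two large pizzas", language_code="en-US")
)
response = sessions.detect_intent(
    request={"session": session, "query_input": query_input}
)

# The matched intent is what the backend maps to an order action.
print(response.query_result.intent.display_name)  # e.g. "place.order"
print(response.query_result.fulfillment_text)     # agent's spoken reply
```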

Question#79

Your company has a hybrid cloud initiative. You have a complex data pipeline that moves data between cloud provider services and leverages services from each of the cloud providers. Which cloud-native service should you use to orchestrate the entire pipeline?

  • A. Cloud Dataflow
  • B. Cloud Composer
  • C. Cloud Dataprep
  • D. Cloud Dataproc
Answer: B

Question#80

You use a dataset in BigQuery for analysis. You want to provide third-party companies with access to the same dataset. You need to keep the costs of data sharing low and ensure that the data is current. Which solution should you choose?

  • A. Use Analytics Hub to control data access, and provide third-party companies with access to the dataset.
  • B. Use Cloud Scheduler to export the data on a regular basis to Cloud Storage, and provide third-party companies with access to the bucket.
  • C. Create a separate dataset in BigQuery that contains the relevant data to share, and provide third-party companies with access to the new dataset.
  • D. Create a Dataflow job that reads the data in frequent time intervals, and writes it to the relevant BigQuery dataset or Cloud Storage bucket for third-party companies to use.
Answer: A
