Professional Data Engineer on Google Cloud Platform
Questions 11-20 of 205
Question#11

You are training a spam classifier. You notice that you are overfitting the training data. Which three actions can you take to resolve this problem? (Choose three.)

  • A. Get more training examples
  • B. Reduce the number of training examples
  • C. Use a smaller set of features
  • D. Use a larger set of features
  • E. Increase the regularization parameters
  • F. Decrease the regularization parameters
Answer: A, D, F
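
For background on the regularization options, here is a minimal scikit-learn sketch (the dataset and parameter values are illustrative, not taken from the question) showing how the gap between training and validation accuracy changes with regularization strength, which is the usual symptom of overfitting:

# Minimal sketch: observe the train/validation gap at different
# regularization strengths (illustrative data only, not from the question).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=50, n_informative=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# In scikit-learn, a smaller C means stronger L2 regularization.
for C in (100.0, 1.0, 0.01):
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    print(f"C={C}: train={model.score(X_train, y_train):.3f} "
          f"val={model.score(X_val, y_val):.3f}")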

Question#12

You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery.
How should you securely run this workload?

  • A. Restrict the Google Cloud Storage bucket so only you can see the files
  • B. Grant the Project Owner role to a service account, and run the job with it
  • C. Use a service account with the ability to read the batch files and to write to BigQuery
  • D. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery
Answer: B
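
To illustrate the service-account approach, here is a minimal Python sketch that authenticates with a dedicated service account key and moves a batch file from Cloud Storage into BigQuery; the key file, bucket, and table names are placeholders, not values from the question:

# Minimal sketch: run the job as a dedicated service account (key file path,
# bucket, and table names are placeholders).
from google.oauth2 import service_account
from google.cloud import bigquery, storage

creds = service_account.Credentials.from_service_account_file("etl-sa-key.json")

gcs = storage.Client(credentials=creds, project="my-project")
blob = gcs.bucket("nightly-batch-files").blob("2024-01-01/batch.csv")
blob.download_to_filename("/tmp/batch.csv")

bq = bigquery.Client(credentials=creds, project="my-project")
job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.CSV, autodetect=True)
with open("/tmp/batch.csv", "rb") as f:
    bq.load_table_from_file(f, "my-project.analytics.results", job_config=job_config).result()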

Question#13

You are using Google BigQuery as your data warehouse. Your users report that the following simple query is running very slowly, no matter when they run the query:
SELECT country, state, city FROM [myproject:mydataset.mytable] GROUP BY country
You check the query plan for the query and see the following output in the Read section of Stage:1:
[query plan screenshot omitted]

What is the most likely cause of the delay for this query?

  • A. Users are running too many concurrent queries in the system
  • B. The [myproject:mydataset.mytable] table has too many partitions
  • C. Either the state or the city columns in the [myproject:mydataset.mytable] table have too many NULL values
  • D. Most rows in the [myproject:mydataset.mytable] table have the same value in the country column, causing data skew
Answer: A
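
For reference, the query plan can also be inspected programmatically. A minimal sketch with the google-cloud-bigquery Python client, reusing the question's placeholder names and adapting the query to standard SQL so it runs as written:

# Minimal sketch: run the query and print per-stage read/write counts from the
# execution plan (table names reuse the question's placeholders; the query is
# adapted to standard SQL).
from google.cloud import bigquery

client = bigquery.Client(project="myproject")
job = client.query(
    "SELECT country, state, city "
    "FROM `myproject.mydataset.mytable` "
    "GROUP BY country, state, city"
)
job.result()  # wait for completion so the query plan is populated

for stage in job.query_plan:
    print(stage.name,
          "records_read:", stage.records_read,
          "records_written:", stage.records_written)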

Question#14

Your globally distributed auction application allows users to bid on items. Occasionally, users place identical bids at nearly identical times, and different application servers process those bids. Each bid event contains the item, amount, user, and timestamp. You want to collate those bid events into a single location in real time to determine which user bid first. What should you do?

  • A. Create a file on a shared file server and have the application servers write all bid events to that file. Process the file with Apache Hadoop to identify which user bid first.
  • B. Have each application server write the bid events to Cloud Pub/Sub as they occur. Push the events from Cloud Pub/Sub to a custom endpoint that writes the bid event information into Cloud SQL.
  • C. Set up a MySQL database for each application server to write bid events into. Periodically query each of those distributed MySQL databases and update a master MySQL database with bid event information.
  • D. Have each application server write the bid events to Google Cloud Pub/Sub as they occur. Use a pull subscription to pull the bid events using Google Cloud Dataflow. Give the bid for each item to the user in the bid event that is processed first.
Answer: C
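
To illustrate the Cloud Pub/Sub ingestion path mentioned in options B and D, here is a minimal sketch of an application server publishing a bid event; the project, topic, and field names are illustrative:

# Minimal sketch: an application server publishing a bid event to Cloud Pub/Sub
# (project, topic, and field names are illustrative).
import json
import time
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "bid-events")

event = {
    "item_id": "item-123",
    "user_id": "user-456",
    "amount": 42.50,
    "timestamp": time.time(),  # event time captured on the application server
}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("published message id:", future.result())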

Question#15

Your organization has been collecting and analyzing data in Google BigQuery for 6 months. The majority of the data analyzed is placed in a time-partitioned table named events_partitioned. To reduce the cost of queries, your organization created a view called events, which queries only the last 14 days of data. The view is described in legacy SQL. Next month, existing applications will be connecting to BigQuery to read the events data via an ODBC connection. You need to ensure the applications can connect. Which two actions should you take? (Choose two.)

  • A. Create a new view over events using standard SQL
  • B. Create a new partitioned table using a standard SQL query
  • C. Create a new view over events_partitioned using standard SQL
  • D. Create a service account for the ODBC connection to use for authentication
  • E. Create a Google Cloud Identity and Access Management (Cloud IAM) role for the ODBC connection and shared "events"
Answer: A, E
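
To show what a standard SQL view over the partitioned table could look like, here is a minimal sketch assuming the table is partitioned by ingestion time; the project, dataset, and view names are assumptions, not values from the question:

# Minimal sketch: define a standard SQL view exposing the last 14 days of the
# partitioned table (names are assumptions; assumes ingestion-time partitioning).
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
client.query("""
    CREATE OR REPLACE VIEW `my-project.mydataset.events_sql` AS
    SELECT *
    FROM `my-project.mydataset.events_partitioned`
    WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)
""").result()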

Question#16

You have enabled the free integration between Firebase Analytics and Google BigQuery. Firebase now automatically creates a new table daily in BigQuery in the format app_events_YYYYMMDD. You want to query all of the tables for the past 30 days in legacy SQL. What should you do?

  • A. Use the TABLE_DATE_RANGE function
  • B. Use the WHERE _PARTITIONTIME pseudo column
  • C. Use WHERE date BETWEEN YYYY-MM-DD AND YYYY-MM-DD
  • D. Use SELECT IF.(date >= YYYY-MM-DD AND date <= YYYY-MM-DD)
Answer: A
Reference:
https://cloud.google.com/blog/products/gcp/using-bigquery-and-firebase-analytics-to-understand-your-mobile-app?hl=am
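
To illustrate option A, here is a minimal legacy SQL sketch using TABLE_DATE_RANGE over the daily export tables, run through the Python client; the project and dataset names are assumptions:

# Minimal sketch: legacy SQL with TABLE_DATE_RANGE over the daily Firebase
# export tables for the past 30 days (project/dataset names are assumptions).
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
job_config = bigquery.QueryJobConfig(use_legacy_sql=True)
query = """
    SELECT COUNT(*) AS events
    FROM TABLE_DATE_RANGE([my-project:firebase_analytics.app_events_],
                          DATE_ADD(CURRENT_TIMESTAMP(), -30, 'DAY'),
                          CURRENT_TIMESTAMP())
"""
for row in client.query(query, job_config=job_config).result():
    print(row.events)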

Question#17

Your company is currently setting up data pipelines for their campaign. For all the Google Cloud Pub/Sub streaming data, one of the important business requirements is to be able to periodically identify the inputs and their timings during their campaign. Engineers have decided to use windowing and transformation in Google Cloud Dataflow for this purpose. However, when testing this feature, they find that the Cloud Dataflow job fails for all streaming inserts. What is the most likely cause of this problem?

  • A. They have not assigned the timestamp, which causes the job to fail
  • B. They have not set the triggers to accommodate the data coming in late, which causes the job to fail
  • C. They have not applied a global windowing function, which causes the job to fail when the pipeline is created
  • D. They have not applied a non-global windowing function, which causes the job to fail when the pipeline is created
Answer: C
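
For context, this is how a windowing function is typically applied to a streaming source in a Dataflow (Apache Beam Python) pipeline before aggregation; the topic name and window size are illustrative, not from the question:

# Minimal sketch: apply a windowing function to a streaming Pub/Sub source
# before aggregating (topic name and window size are illustrative).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/campaign-input")
     | "Window" >> beam.WindowInto(window.FixedWindows(60))  # 60-second windows
     | "KeyByInput" >> beam.Map(lambda msg: (msg, 1))
     | "CountPerWindow" >> beam.CombinePerKey(sum)
     | "Print" >> beam.Map(print))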

Question#18

You architect a system to analyze seismic data. Your extract, transform, and load (ETL) process runs as a series of MapReduce jobs on an Apache Hadoop cluster. The ETL process takes days to process a data set because some steps are computationally expensive. Then you discover that a sensor calibration step has been omitted. How should you change your ETL process to carry out sensor calibration systematically in the future?

  • A. Modify the transform MapReduce jobs to apply sensor calibration before they do anything else.
  • B. Introduce a new MapReduce job to apply sensor calibration to raw data, and ensure all other MapReduce jobs are chained after this.
  • C. Add sensor calibration data to the output of the ETL process, and document that all users need to apply sensor calibration themselves.
  • D. Develop an algorithm through simulation to predict variance of data output from the last MapReduce job based on calibration factors, and apply the correction to all data.
Answer: A
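
For illustration, here is a minimal orchestration sketch in which sensor calibration is applied to the raw data before the rest of the ETL chain runs; the jar names and HDFS paths are hypothetical placeholders, not part of the question:

# Minimal sketch: chain MapReduce jobs so calibration always runs first and
# later stages read its output (jar names and HDFS paths are hypothetical).
import subprocess

RAW = "hdfs:///seismic/raw"
CALIBRATED = "hdfs:///seismic/calibrated"
TRANSFORMED = "hdfs:///seismic/transformed"

def run_job(jar, *args):
    # hadoop jar <jar> <args...>; raises if the job fails, stopping the chain
    subprocess.run(["hadoop", "jar", jar, *args], check=True)

run_job("calibrate.jar", RAW, CALIBRATED)          # calibration runs first
run_job("transform.jar", CALIBRATED, TRANSFORMED)  # downstream jobs read calibrated data
run_job("load.jar", TRANSFORMED)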

Question#19

An online retailer has built their current application on Google App Engine. A new initiative at the company mandates that they extend their application to allow their customers to transact directly via the application. They need to manage their shopping transactions and analyze combined data from multiple datasets using a business intelligence (BI) tool. They want to use only a single database for this purpose. Which Google Cloud database should they choose?

  • A. BigQuery
  • B. Cloud SQL
  • C. Cloud BigTable
  • D. Cloud Datastore
Answer: B
Reference:
https://cloud.google.com/sql/

Question#20

You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery table with the table name format LOGS_yyyymmdd. You have been using table wildcard functions to generate daily and monthly reports for all time ranges. Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue?

  • A. Convert all daily log tables into date-partitioned tables
  • B. Convert the sharded tables into a single partitioned table
  • C. Enable query caching so you can cache data from previous months
  • D. Create separate views to cover each month, and query from these views
Answer: A
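
To illustrate moving from sharded daily tables to a date-partitioned layout, here is a minimal standard SQL sketch run through the Python client; the project, dataset, and table names are assumptions, not values from the question:

# Minimal sketch: consolidate the daily LOGS_yyyymmdd shards into a single
# date-partitioned table (project/dataset/table names are assumptions).
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
client.query("""
    CREATE TABLE `my-project.gaming.logs_partitioned`
    PARTITION BY log_date AS
    SELECT PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS log_date, *
    FROM `my-project.gaming.LOGS_*`
""").result()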
