Professional Cloud Architect on Google Cloud Platform
Questions 111-120 of 270
Question#111

Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords.
What authentication strategy should they use?

  • A. Use G Suite Password Sync to replicate passwords into Google
  • B. Federate authentication via SAML 2.0 to the existing Identity Provider
  • C. Provision users in Google using the Google Cloud Directory Sync tool
  • D. Ask users to set their Google password to match their corporate password
Answer: C
Provision users to Google's directory
The global Directory is available to both Cloud Platform and G Suite resources and can be provisioned by a number of means. Provisioned users can take advantage of rich authentication features including single sign-on (SSO), OAuth, and two-factor verification.
You can provision users automatically using one of the following tools and services:
  • Google Cloud Directory Sync (GCDS)
  • Google Admin SDK
  • A third-party connector
GCDS is a connector that can provision users and groups on your behalf for both Cloud Platform and G Suite. Using GCDS, you can automate the addition, modification, and deletion of users, groups, and non-employee contacts. You can synchronize the data from your LDAP directory server to your Cloud Platform domain by using LDAP queries. This synchronization is one-way: the data in your LDAP directory server is never modified.
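For illustration, once a synchronization has been configured in the GCDS Configuration Manager, the sync can be run (and scheduled) from the command line. This is a minimal sketch only; the install path and configuration file name are assumptions:

# Apply the one-way synchronization from the LDAP directory to the Google domain
# (-a applies the changes; -c points at the configuration exported from Configuration Manager)
./sync-cmd -a -c /opt/gcds/config.xml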
Reference:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#authentication-and-identity

Question#112

Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in.
Which technology should they use for this?

  • A. Google Cloud Dataproc
  • B. Google Cloud Dataflow
  • C. Google Container Engine with Bigtable
  • D. Google Compute Engine with Google BigQuery
Answer: B
Cloud Dataflow is a fully managed service for transforming and enriching data in stream (real-time) and batch (historical) modes with equal reliability and expressiveness; no complex workarounds or compromises are needed.
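As a sketch of how a team with no existing code could start, Dataflow offers Google-provided templates that are launched straight from the gcloud CLI. The job names, bucket paths, and topic below are hypothetical:

# Run an hourly batch job from a Google-provided template
gcloud dataflow jobs run hourly-batch-job \
    --gcs-location=gs://dataflow-templates/latest/Word_Count \
    --region=us-central1 \
    --parameters=inputFile=gs://my-input-bucket/events.txt,output=gs://my-output-bucket/results

# The same service runs streaming pipelines, e.g. Pub/Sub into BigQuery
gcloud dataflow jobs run live-stream-job \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region=us-central1 \
    --parameters=inputTopic=projects/my-project/topics/events,outputTableSpec=my-project:analytics.events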
Reference:
https://cloud.google.com/dataflow/

Question#113

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users.
This behavior was not reported before the update.
What strategy should you take?

  • A. Work with your ISP to diagnose the problem
  • B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
  • C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
  • D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem
Answer: C
Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS). The API also allows ingestion of custom log data from any source. Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs, and all of that log data can be analyzed in real time.
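A minimal sketch of the rollback step (the service and version IDs are hypothetical): traffic is shifted back to the last known good App Engine version, and the faulty release is then diagnosed in a development/test/staging environment with Stackdriver Trace and Logging:

# List deployed versions and the current traffic split
gcloud app versions list --service=default

# Route all traffic back to the previous known good version
gcloud app services set-traffic default --splits=v-known-good=1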
Reference:
https://cloud.google.com/logging/

Question#114

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.
How can you remediate the problem with the least amount of downtime?

  • A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
  • B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
  • C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
  • D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
  • E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new, larger disk, unmount the old disk, mount the new disk, and restart the database service
Answer: A
On Linux instances, connect to your instance and manually resize your partitions and file systems to use the additional disk space that you added.
Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition; if your disk does not have a partition table, specify only the disk ID:
sudo resize2fs /dev/[DISK_ID][PARTITION_NUMBER]
where [DISK_ID] is the device name and [PARTITION_NUMBER] is the partition number for the device where you are resizing the file system.
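End to end, the remediation is two online commands and requires no VM restart. This sketch assumes a hypothetical disk name and a data disk exposed as /dev/sdb with no partition table:

# Grow the persistent disk while the VM keeps running
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a

# On the VM: grow the ext4 file system to use the added space
sudo resize2fs /dev/sdb

# Verify the new capacity
df -h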
Reference:
https://cloud.google.com/compute/docs/disks/add-persistent-disk

Question#115

Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used.
How should you design your architecture?

  • A. Create a tokenizer service and store only tokenized data
  • B. Create separate projects that only process credit card data
  • C. Create separate subnetworks and isolate the components that process credit card data
  • D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data
  • E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
Answer: A
Reference:
https://www.sans.org/reading-room/whitepapers/compliance/ways-reduce-pci-dss-audit-scope-tokenizing-cardholder-data-33194

Question#116

Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity, the overall cost, and database load. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?

  • A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
  • B. Use the Data Transfer appliance to perform an offline migration.
  • C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage.
  • D. Compress the data and upload it with gsutil -m to enable multi-threaded copy.
Answer: A
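For comparison, the gsutil approach in option D is a one-liner once the export is compressed; at 1 Gbps, a 10-TB transfer completes in roughly a day (10 TB is about 80,000 Gb). The bucket and file names here are hypothetical:

# Compress the database export, then upload with multi-threaded copy
tar -czf db-export.tar.gz /exports/db/
gsutil -m -o "GSUtil:parallel_composite_upload_threshold=150M" \
    cp db-export.tar.gz gs://my-migration-bucket/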

Question#117

Your company has an enterprise application running on Compute Engine that requires high availability and high performance. The application has been deployed on two instances in two zones in the same region in active-passive mode. The application writes data to a persistent disk. In the case of a single zone outage, that data should be immediately made available to the other instance in the other zone. You want to maximize performance while minimizing downtime and data loss.
What should you do?

  • A. 1. Attach a persistent SSD disk to the first instance. 2. Create a snapshot every hour. 3. In case of a zone outage, recreate a persistent SSD disk for the second instance from the latest snapshot.
  • B. 1. Create a Cloud Storage bucket. 2. Mount the bucket into the first instance with gcs-fuse. 3. In case of a zone outage, mount the Cloud Storage bucket to the second instance with gcs-fuse.
  • C. 1. Attach a regional SSD persistent disk to the first instance. 2. In case of a zone outage, force-attach the disk to the other instance.
  • D. 1. Attach a local SSD to the first instance. 2. Execute an rsync command every hour where the target is a persistent SSD disk attached to the second instance. 3. In case of a zone outage, use the second instance.
Answer: B
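A minimal sketch of the chosen approach (the bucket name and mount point are hypothetical). Because a Cloud Storage bucket is not tied to a single zone, the surviving instance can mount the same data immediately after a zone outage:

# On the active instance: mount the shared bucket with Cloud Storage FUSE
gcsfuse shared-app-data /mnt/app-data

# After a zone outage, run the identical mount on the passive instance
gcsfuse shared-app-data /mnt/app-data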

Question#118

You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate the encryption keys outside of Google Cloud. You need to implement a solution. What should you do?

  • A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
  • B. Generate a new key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and select the created key.
  • C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
  • D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
Answer: D
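A hedged sketch of the import flow (the key ring, key, and dataset names are hypothetical): the externally generated key material is wrapped and imported through a Cloud KMS import job, and the new BigQuery dataset then references that key:

# Create the target key without an initial Google-generated version
gcloud kms keys create bq-key --location=us --keyring=bq-keyring \
    --purpose=encryption --skip-initial-version-creation

# Create an import job and import the wrapped key material
gcloud kms import-jobs create my-import-job --location=us --keyring=bq-keyring \
    --import-method=rsa-oaep-3072-sha1-aes-256 --protection-level=software
gcloud kms keys versions import --import-job=my-import-job --location=us \
    --keyring=bq-keyring --key=bq-key --algorithm=google-symmetric-encryption \
    --target-key-file=./wrapped-key-material.bin

# Create the BigQuery dataset protected by the imported key
bq mk --dataset \
    --default_kms_key=projects/my-project/locations/us/keyRings/bq-keyring/cryptoKeys/bq-key \
    my_project:my_dataset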

Question#119

Your organization has stored sensitive data in a Cloud Storage bucket. For regulatory reasons, your company must be able to rotate the encryption key used to encrypt the data in the bucket. The data will be processed in Dataproc. You want to follow Google-recommended practices for security. What should you do?

  • A. Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of Cloud KMS.
  • B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.
  • C. Generate a GPG key pair. Encrypt the data using the GPG key. Upload the encrypted data to the bucket.
  • D. Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption keys feature.
Answer: D
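A minimal sketch of the customer-supplied encryption key (CSEK) flow; the file and bucket names are hypothetical. The base64-encoded AES-256 key accompanies each request and is never stored by Google, and rotation is a client-side rewrite:

# Generate a 256-bit key and base64-encode it
KEY=$(openssl rand -base64 32)

# Upload with the customer-supplied key
gsutil -o "GSUtil:encryption_key=$KEY" cp data.csv gs://sensitive-bucket/

# Rotate: re-encrypt existing objects under a new key
NEW_KEY=$(openssl rand -base64 32)
gsutil -o "GSUtil:encryption_key=$NEW_KEY" -o "GSUtil:decryption_key1=$KEY" \
    rewrite -k gs://sensitive-bucket/**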

Question#120

Your team needs to create a Google Kubernetes Engine (GKE) cluster to host a newly built application that requires access to third-party services on the internet.
Your company does not allow any Compute Engine instance to have a public IP address on Google Cloud. You need to create a deployment strategy that adheres to these guidelines. What should you do?

  • A. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet.
  • B. Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
  • C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
  • D. Create a Compute Engine instance, and install a NAT Proxy on the instance. Configure all workloads on GKE to pass through this proxy to access third-party services on the Internet.
Answer: B
Reference:
https://cloud.google.com/architecture/prep-kubernetes-engine-for-prod
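A hedged sketch of the selected configuration (the cluster, subnet, and CIDR values are hypothetical): the nodes receive only internal IP addresses, and Private Google Access lets them reach Google APIs over internal routes. Note that egress to arbitrary third-party internet services from private nodes generally also requires a NAT path such as Cloud NAT, as option A adds:

# Create a private cluster: nodes get no public IP addresses
gcloud container clusters create private-cluster \
    --zone=us-central1-a --enable-ip-alias --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28

# Enable Private Google Access on the cluster's subnet
gcloud compute networks subnets update default \
    --region=us-central1 --enable-private-ip-google-access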
