Google Associate Cloud Engineer - Practice Test 2
You are using Cloud Logging to collect application logs. To analyze these logs efficiently using SQL, following Google's recommended best practices, what action should you take?
Google's recommended practice for SQL analysis of Cloud Logging data is to upgrade the log bucket to use Log Analytics. This lets you run SQL queries against the logs directly from the Log Analytics page, and you can optionally create a linked BigQuery dataset to query the same log data from BigQuery as well. This approach avoids manual exports, sinks, and custom pipelines, providing an efficient, near-real-time solution.
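As a rough sketch of this setup with the gcloud CLI (the bucket, dataset link, and project names below are illustrative placeholders, not values from the question):

```shell
# Upgrade an existing log bucket to use Log Analytics.
# Note: this upgrade cannot be undone on a bucket.
gcloud logging buckets update my-log-bucket \
    --location=global \
    --enable-analytics \
    --project=my-project

# Optionally create a linked BigQuery dataset so the same logs
# can also be queried from BigQuery itself.
gcloud logging links create my-linked-dataset \
    --bucket=my-log-bucket \
    --location=global \
    --project=my-project
```

Once the bucket is upgraded, SQL queries run in the Log Analytics page without any export step.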
You are using the Google Cloud pricing calculator to estimate the cost of a Kubernetes cluster. Your application demands high I/O operations per second (IOPS) and will utilize disk snapshots. After inputting the number of nodes, average hours, and average days, what is the most appropriate next step to accurately reflect these requirements in the cost estimate?
The requirement for high IOPS points directly to adding local SSDs, which provide significantly higher performance than standard persistent disks. The explicit mention of disk snapshots means snapshot storage must also be included in the estimate. Neither of the distractors fits: cluster management is a flat per-cluster fee unaffected by these requirements, and GPUs accelerate compute workloads, not disk I/O.
You are tasked with deploying a third-party application on a single Google Compute Engine VM instance. This application's internal database demands the highest possible read and write disk access speed. Additionally, the instance must automatically recover in case of a failure. Which approach should you take?
Hyperdisk Extreme offers superior performance, including higher IOPS and throughput, compared to SSD Persistent Disks, making it ideal for demanding database workloads. A stateful managed instance group is necessary to ensure the instance recovers on failure while preserving its unique state, which is crucial for a single application instance with an internal database.
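A minimal sketch of this setup with gcloud, assuming placeholder names for the disk, instance template, MIG, and health check (the sizes and IOPS values are illustrative, not requirements from the question):

```shell
# Create a Hyperdisk Extreme volume with provisioned IOPS for the database.
gcloud compute disks create db-disk \
    --type=hyperdisk-extreme \
    --size=500GB \
    --provisioned-iops=50000 \
    --zone=us-central1-a

# Create a single-instance stateful MIG that preserves the data disk
# when autohealing recreates the VM.
gcloud compute instance-groups managed create app-mig \
    --template=app-template \
    --size=1 \
    --zone=us-central1-a \
    --stateful-disk=device-name=db-disk,auto-delete=never

# Attach a health check so a failed instance is automatically recreated.
gcloud compute instance-groups managed update app-mig \
    --zone=us-central1-a \
    --health-check=app-health-check \
    --initial-delay=300
```

The stateful disk policy is what distinguishes this from a plain MIG: on recreation, the instance keeps its name and reattaches the same disk, so the internal database survives the recovery.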
You are tasked with migrating various datasets from your current environment to Google Cloud, adhering to Google-recommended practices and avoiding custom code. The migration involves: * 200 TB of video files residing in on-premises SAN storage. * Data warehouse data currently stored on Amazon Redshift. * 20 GB of PNG image files located in an Amazon S3 bucket. You need to transfer the video files to a Cloud Storage bucket, migrate the data warehouse data to BigQuery, and move the PNG files to a separate Cloud Storage bucket. Which combination of Google Cloud services should you use for this migration?
For large on-premises data transfers like 200 TB from SAN, Transfer Appliance is the recommended hardware solution. BigQuery Data Transfer Service is specifically designed to automate migrations from sources like Amazon Redshift to BigQuery. Storage Transfer Service is ideal for moving data between cloud storage providers, such as from Amazon S3 to Google Cloud Storage.
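For the two cloud-to-cloud legs (Transfer Appliance is a physical device ordered through the console), the setup might look like the following sketch; all bucket names, dataset names, and transfer parameters are assumed placeholders:

```shell
# Storage Transfer Service: copy the PNG files from Amazon S3
# to a Cloud Storage bucket (AWS credentials supplied via file).
gcloud transfer jobs create s3://my-s3-bucket gs://my-png-bucket \
    --source-creds-file=aws-creds.json

# BigQuery Data Transfer Service: migrate the Redshift data warehouse.
# The --params keys shown are illustrative; the real config also needs
# Redshift credentials and a staging S3 bucket.
bq mk --transfer_config \
    --data_source=redshift \
    --target_dataset=my_dataset \
    --display_name="Redshift migration" \
    --params='{"jdbc_url":"jdbc:redshift://example:5439/db","s3_bucket":"staging-bucket"}'
```

Note that none of this requires custom code, which is exactly what the question asks for.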
You have a critical web application deployed on Google Cloud as a managed instance group, currently handling live traffic. You need to deploy a new version of this application gradually, ensuring that the available capacity for users does not decrease at any point during the deployment. Which action should you take to achieve this?
To ensure no decrease in available capacity during a rolling update, maxUnavailable must be set to 0, meaning no instances can be offline. To allow the update to progress by adding new instances, maxSurge must be set to at least 1, allowing one new instance to be created before an old one is terminated. This combination facilitates a gradual, zero-downtime deployment.
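This policy can be applied when starting the rolling update; as a sketch (the MIG, template, and zone names are placeholders):

```shell
# Roll out the new template while never reducing serving capacity:
# one surge instance is added before any old instance is removed.
gcloud compute instance-groups managed rolling-action start-update web-mig \
    --version=template=web-template-v2 \
    --max-surge=1 \
    --max-unavailable=0 \
    --zone=us-central1-a
```

With `--max-unavailable=0`, the group temporarily runs one instance above its target size during each replacement step, which is the cost of guaranteeing full capacity throughout the deployment.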