Google Associate Cloud Engineer - Practice Test 3
Your global application receives SSL-encrypted TCP traffic on port 443 and serves clients worldwide. To minimize latency for these clients, which Google Cloud load balancing option should you implement?
The SSL Proxy Load Balancer is designed for SSL-encrypted TCP traffic that is not HTTP(S), and it is a global load balancer, which is key to minimizing latency for clients worldwide: SSL is terminated at the edge of Google's network, close to users, improving performance. The HTTPS Load Balancer handles only HTTP(S) traffic, the Network Load Balancer is a regional pass-through load balancer, and the Internal TCP/UDP Load Balancer serves only traffic inside a VPC.
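A minimal sketch of the key commands, assuming a global backend service (`my-backend-service`) and an SSL certificate resource (`my-ssl-cert`) already exist; all resource names are illustrative:

```shell
# Create the target SSL proxy that terminates SSL at Google's edge,
# attaching the certificate and pointing at the backend service.
gcloud compute target-ssl-proxies create my-ssl-proxy \
    --backend-service=my-backend-service \
    --ssl-certificates=my-ssl-cert

# Create a global forwarding rule on port 443 so clients anywhere
# reach the nearest Google front end.
gcloud compute forwarding-rules create my-ssl-lb-rule \
    --global \
    --target-ssl-proxy=my-ssl-proxy \
    --ports=443
```

The `--global` flag on the forwarding rule is what distinguishes this from a regional setup such as a Network Load Balancer.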
You have a Google Cloud project with a default VPC containing two subnets: `subnet-a` and `subnet-b`. Your database instances are deployed in `subnet-a`, and your application servers are in `subnet-b`. You need to implement a firewall rule to permit only database-specific traffic originating from the application servers to reach the database instances. Which configuration should you implement?
To allow traffic from the application servers to the database servers, an ingress rule applied to the database servers is required. Identifying both the source (application) and target (database) instances by their service accounts is more robust than IP-based rules, since it follows the instances regardless of their addresses. Option 1 correctly uses service accounts for both source and target and specifies an ingress rule, which is the appropriate direction for permitting incoming connections to the database servers.
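Such a rule could be created as follows; the service account emails, project ID, and database port (MySQL's 3306 here) are illustrative assumptions:

```shell
# Ingress rule: only VMs running as app-sa may reach VMs running as
# db-sa, and only on the database port.
gcloud compute firewall-rules create allow-app-to-db \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:3306 \
    --source-service-accounts=app-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=db-sa@my-project.iam.gserviceaccount.com
```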
Your team manages several Linux virtual machines on Google Cloud. You need to implement a secure and cost-effective method for your team to SSH into these instances. Which approach should you take?
Using IAP (Identity-Aware Proxy) for SSH is the most secure and cost-effective method as it allows SSH access to instances without public IP addresses, leveraging Google's global network. The specified IP range (35.235.240.0/20) is Google's IAP IP range, which must be allowed in firewall rules for IAP-tunneled connections. Other options either expose instances to the public internet or introduce additional management overhead and cost.
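In practice this takes two steps, sketched below with illustrative instance and rule names; the source range is IAP's published range from the explanation above:

```shell
# Allow IAP's address range to reach SSH on instances in the network.
gcloud compute firewall-rules create allow-iap-ssh \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20

# SSH to a VM with no public IP by tunneling through IAP.
gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap
```

Users also need the IAP-Secured Tunnel User role on the instance or project for the tunnel to be authorized.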
A team of data scientists occasionally requires a Google Kubernetes Engine (GKE) cluster for long-running, non-restartable jobs that necessitate GPUs. To minimize costs while meeting these requirements, what is the most appropriate solution?
Node auto-provisioning is designed to automatically create and delete node pools based on pending Pods' resource requests, including GPUs. This ensures that GPU resources are provisioned only while the data scientists' jobs need them and removed afterward, minimizing cost for infrequent usage. Autoscaling a GPU node pool with a minimum size of 1 would also provide GPUs, but it would keep at least one GPU node running, and billing, even when no jobs are scheduled.
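Node auto-provisioning can be enabled on an existing cluster roughly as follows; the cluster name, zone, accelerator type, and resource limits are illustrative assumptions to adjust for the actual workload:

```shell
# Enable node auto-provisioning with resource limits, including GPUs,
# so GKE creates GPU node pools only when pending Pods request them.
gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --enable-autoprovisioning \
    --min-cpu=0 --max-cpu=64 \
    --min-memory=0 --max-memory=256 \
    --min-accelerator=type=nvidia-tesla-t4,count=0 \
    --max-accelerator=type=nvidia-tesla-t4,count=4
```

With `count=0` as the minimum, no GPU node exists until a job requests one, and the pool is deleted once the node goes unused.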
A VM instance is deployed within a Virtual Private Cloud (VPC) using single-stack subnets. To enable other services within the same VPC to reliably communicate with this VM, a fixed IP address is required. You need to achieve this while adhering to Google's recommended practices and minimizing costs. What is the most appropriate action?
For internal communication within the same VPC, a static internal IP address is the correct and cost-effective solution. Promoting the VM's existing ephemeral internal IP to a static internal IP keeps the address fixed without the charges associated with external IPs. External IP addresses are unnecessary, less secure, and more expensive for communication that stays entirely inside the VPC.
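The promotion is a single command that reserves the address the VM already holds; the address, region, and subnet below are illustrative:

```shell
# Reserve the VM's current ephemeral internal IP as a static internal
# address so it survives instance restarts and recreation.
gcloud compute addresses create db-vm-internal-ip \
    --region=us-central1 \
    --subnet=subnet-a \
    --addresses=10.128.0.5
```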