Managed vs Unmanaged vs Serverless on Google Cloud: How the ACE Exam Categorizes GCP Services

Ben Makansi
December 3, 2025

One of the most useful frameworks for making sense of GCP services is the spectrum from unmanaged to fully managed to serverless. Every service you encounter on the Associate Cloud Engineer exam falls somewhere on this spectrum, and knowing where a service falls tells you how much operational work you are responsible for versus how much Google handles.

Unmanaged: You Run the Infrastructure

Unmanaged services provide infrastructure that you operate yourself. Google gives you the compute, storage, or networking resource, but the ongoing management is your responsibility.

Google Compute Engine is the primary unmanaged service on GCP. You create a virtual machine, and from that point you are responsible for the operating system, software installation, security patches, monitoring, and scaling. If you want the VM to recover from a crash, you need to configure that. If you want it to scale automatically, you need to set up managed instance groups. If you want the OS patched, you need to handle that yourself or use a separate tool.

The trade-off is control. Compute Engine gives you the most flexibility of any GCP compute service. You choose the machine type, the OS, the disk configuration, the network settings. If your application has unusual dependencies or requires specific kernel parameters, Compute Engine can accommodate that. The cost of that flexibility is the operational overhead of running your own infrastructure.

Managed: Google Handles the Infrastructure Layer

Managed services are where Google takes on responsibility for the underlying infrastructure, software, and often the operating system, while you focus on configuring and using the service rather than operating the machines beneath it.

Google Kubernetes Engine is a managed service. Google manages the control plane, handles Kubernetes version upgrades (when you configure auto-upgrade), and ensures the cluster infrastructure is healthy. You are still responsible for your application deployments, your container configurations, your node pool sizing, and your Kubernetes resource definitions. But you are not managing the etcd cluster or the API server.

Cloud Bigtable is managed at a different level. Google operates the underlying distributed storage system. You configure the cluster size and design your row keys, but you do not manage the servers that run Bigtable. Similarly, Cloud SQL is a managed database service. Google handles the underlying VM, storage, and database engine operation. You configure the instance, manage your schemas and data, and handle application-level concerns.

Managed services still require meaningful configuration and operational attention. A poorly configured GKE cluster or a badly designed Bigtable schema will cause problems regardless of how well Google manages the underlying infrastructure.

Serverless and No-Ops: Google Handles Everything Automatically

Serverless services, sometimes called no-ops services, go further. Google automatically allocates and deallocates the underlying compute resources in response to demand. You do not configure instances, you do not manage scaling policies, and you often do not think about capacity at all. You define what you want to run and Google figures out where and how to run it.

Cloud Run is a serverless container platform. You provide a container image, and Cloud Run runs it. When traffic arrives, Cloud Run starts container instances. When traffic drops to zero, it scales down to zero. You are billed only for the time your containers are actually processing requests. There are no VMs to manage, no autoscaling policies to tune beyond the basic concurrency settings.
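A simplified way to see why this billing model rewards bursty traffic: with concurrency greater than one, overlapping requests on the same instance share a single billable window rather than billing per request. The sketch below models billable time as the union of request intervals; it is an illustration of the idea, not Google's actual metering logic.

```python
def billable_seconds(requests):
    """Estimate billable instance time as the union of request intervals.

    requests: list of (start, end) times in seconds for requests served
    by one container instance. With concurrency > 1, overlapping
    requests share the same billable window, so several concurrent
    one-second requests cost roughly one billable second, not several.
    """
    total = 0.0
    current_start = current_end = None
    for start, end in sorted(requests):
        if current_end is None or start > current_end:
            # Gap between requests: close out the previous busy window.
            if current_end is not None:
                total += current_end - current_start
            current_start, current_end = start, end
        else:
            # Overlapping request extends the current busy window.
            current_end = max(current_end, end)
    if current_end is not None:
        total += current_end - current_start
    return total

# Three overlapping requests bill as one 3-second window,
# plus a separate 1-second request after an idle gap.
print(billable_seconds([(0, 2), (1, 3), (2, 3), (10, 11)]))  # 4.0
```

When traffic drops to zero, the interval list is empty and the billable time is zero, which is exactly the scale-to-zero behavior described above.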

Cloud Functions takes this further. You provide a function, a piece of code that runs in response to an event, and Google handles everything else. Cloud Pub/Sub is serverless in the sense that you do not manage the messaging infrastructure at all. Cloud Dataflow, once a job is submitted, provisions and manages its own workers automatically.

App Engine Standard is also serverless. You deploy code and Google runs it. App Engine Flexible is more like a managed service because it uses Compute Engine VMs underneath and gives you more control, but it still automates much of the infrastructure management.

Why the ACE Exam Tests This

The Associate Cloud Engineer exam includes scenario questions that hinge on this spectrum. A question might describe a team that wants to run containers without managing servers, where Cloud Run is the answer. Another question might describe a workload that needs full OS control, where Compute Engine is the answer. A third might describe a team that needs Kubernetes but does not want to manage the control plane, where GKE is the answer.
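The three scenarios above can be sketched as a tiny decision helper. This is a study aid mirroring this article's examples, not an official Google decision tree, and the requirement flags are illustrative names:

```python
def recommend_compute(needs_full_os_control, needs_kubernetes, runs_containers):
    """Map recurring ACE scenario signals to a GCP compute choice.

    A rough heuristic, checked in priority order:
    full OS control -> Compute Engine,
    Kubernetes without control-plane management -> GKE,
    containers without any server management -> Cloud Run,
    otherwise event-driven code -> Cloud Functions.
    """
    if needs_full_os_control:
        return "Compute Engine"
    if needs_kubernetes:
        return "GKE"
    if runs_containers:
        return "Cloud Run"
    return "Cloud Functions"  # event-driven code with no container to manage

print(recommend_compute(False, False, True))  # Cloud Run
```

Real exam questions layer on cost and traffic-pattern details, but this ordering captures the core elimination logic: the strongest control requirement wins.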

Recognizing where a service falls on the managed-to-serverless spectrum also helps you reason about cost. Serverless services are often the most cost-efficient for variable or unpredictable traffic because you pay only for actual usage. Unmanaged services like Compute Engine are often the most cost-efficient for steady, high-volume workloads where you can fully utilize reserved capacity.

A quick reference: Compute Engine is unmanaged. GKE, Cloud SQL, Bigtable, and Dataproc are managed. Cloud Run, Cloud Functions, Pub/Sub, App Engine Standard, and Dataflow are serverless. The line is not always perfectly sharp, but this categorization holds for most exam scenarios.
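The quick reference above can be written down as a lookup table, handy for flash-card-style review. The category labels follow this article's three buckets:

```python
# Management category for each service in this article's quick reference.
SERVICE_CATEGORY = {
    "Compute Engine": "unmanaged",
    "GKE": "managed",
    "Cloud SQL": "managed",
    "Bigtable": "managed",
    "Dataproc": "managed",
    "Cloud Run": "serverless",
    "Cloud Functions": "serverless",
    "Pub/Sub": "serverless",
    "App Engine Standard": "serverless",
    "Dataflow": "serverless",
}

def services_in(category):
    """Return the services in one bucket, e.g. everything serverless."""
    return sorted(s for s, c in SERVICE_CATEGORY.items() if c == category)

print(services_in("managed"))  # ['Bigtable', 'Cloud SQL', 'Dataproc', 'GKE']
```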

For more on how the managed versus serverless distinction plays out in specific exam scenarios, including the edge cases, my Associate Cloud Engineer course covers the full spectrum with examples drawn from the exam blueprint.

Cost Implications of the Spectrum

The managed-to-serverless spectrum also reflects a cost trade-off that the ACE exam occasionally surfaces. Serverless services charge based on actual consumption. Cloud Run charges per CPU-second of request processing. Cloud Functions charges per invocation and execution time. When traffic is zero, the cost is zero. This makes serverless very cost-efficient for variable or bursty workloads.

Compute Engine charges for provisioned capacity regardless of utilization. A VM that sits idle at two percent CPU overnight costs the same as a VM running at full load. For steady, predictable workloads where you can fully utilize the provisioned capacity, this is often cheaper per unit of work done. Committed use discounts, available for Compute Engine, push the cost advantage further for long-running workloads.

The exam tests this cost dimension in scenarios where you need to recommend the most cost-effective architecture for a given usage pattern. Spiky traffic favors serverless. Steady high-throughput traffic often favors Compute Engine with committed use discounts or managed instance groups with efficient utilization.
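The break-even intuition can be made concrete with a back-of-the-envelope comparison. The rates below are hypothetical placeholders chosen for illustration, not real GCP pricing; only the shape of the comparison matters:

```python
def monthly_cost_serverless(busy_seconds, rate_per_cpu_second):
    """Consumption billing: pay only for seconds actually processing."""
    return busy_seconds * rate_per_cpu_second

def monthly_cost_vm(hourly_rate, hours_in_month=730):
    """Provisioned billing: the VM costs the same whether busy or idle."""
    return hourly_rate * hours_in_month

# Hypothetical rates for illustration only -- not real GCP pricing.
RATE_PER_CPU_SECOND = 0.000024
VM_HOURLY_RATE = 0.04

# A bursty workload busy ~2 hours/day vs a steady one busy ~20 hours/day.
bursty = monthly_cost_serverless(2 * 3600 * 30, RATE_PER_CPU_SECOND)
steady = monthly_cost_serverless(20 * 3600 * 30, RATE_PER_CPU_SECOND)
vm = monthly_cost_vm(VM_HOURLY_RATE)

print(f"bursty on serverless: ${bursty:.2f}")   # well under the VM
print(f"steady on serverless: ${steady:.2f}")   # well over the VM
print(f"always-on VM:         ${vm:.2f}")
```

With these made-up numbers the bursty workload is far cheaper on serverless while the steady workload is cheaper on the always-on VM, which is the pattern the exam scenarios reward you for recognizing.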
