November 25, 2025
12 min read

Junior GCP Cloud Engineer Interview Questions

interview
career-advice
job-search
entry-level
Milad Bonakdar

Author

Prepare for junior GCP cloud engineer interviews with practical questions on IAM, Compute Engine, Cloud Storage, VPC, Pub/Sub, Cloud Run functions, and gcloud troubleshooting.


Introduction

For a junior GCP cloud engineer interview, be ready to explain how Google Cloud resources fit together, not just define product names. Most entry-level interviews check whether you can choose the right compute option, secure access with IAM, reason about VPC networking, use Cloud Storage safely, and troubleshoot basic operations with the gcloud CLI.

Use this guide to practice concise answers for Compute Engine, Cloud Storage, VPC, IAM, Pub/Sub, Cloud Run functions, and common command-line workflows. When you answer, connect each service to a practical scenario: hosting a web app, storing backups, granting a service account access, or investigating why traffic cannot reach a VM.


GCP Compute Engine

1. What is Google Compute Engine and what are its main use cases?

Answer: Compute Engine provides scalable virtual machines running in Google's data centers.

Key Features:

  • Custom or predefined machine types
  • Persistent disks and local SSDs
  • Spot VMs for fault-tolerant cost savings
  • Live migration for maintenance
  • Managed instance groups and load balancing

Use Cases:

  • Web hosting
  • Application servers
  • Batch processing
  • High-performance computing

Use Compute Engine when you need OS-level control, custom networking, or software that does not fit a managed runtime. If the workload is a stateless container or event-driven function, an interviewer may expect you to compare it with Cloud Run, GKE, or Cloud Run functions instead of defaulting to a VM.

# Create VM instance
gcloud compute instances create my-instance \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=debian-11 \
  --image-project=debian-cloud

# List instances
gcloud compute instances list

# SSH into instance
gcloud compute ssh my-instance --zone=us-central1-a

# Stop instance
gcloud compute instances stop my-instance --zone=us-central1-a
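
For the comparison mentioned above, a minimal Cloud Run deployment of the same workload packaged as a stateless container might look like this (sketch only; the service name and image path are placeholders):

# Deploy a container image to Cloud Run
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/my-project/my-repo/my-app:latest \
  --region=us-central1 \
  --allow-unauthenticated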

Rarity: Very Common
Difficulty: Easy


2. Explain the difference between Persistent Disks and Local SSDs.

Answer:

Feature | Persistent Disk | Local SSD
Durability | Data persists independently of the VM | Data lost when the VM stops
Performance | Good | Excellent (low latency)
Size | Up to 64 TB | Up to 9 TB
Use Case | Boot disks, data storage | Temporary cache, scratch space
Cost | Lower | Higher
Snapshots | Supported | Not supported

Example:

# Create VM with persistent disk
gcloud compute instances create my-vm \
  --boot-disk-size=50GB \
  --boot-disk-type=pd-ssd

# Create VM with local SSD
gcloud compute instances create my-ssd-vm \
  --local-ssd=interface=NVME
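
Since the table notes that only persistent disks support snapshots, a quick sketch of snapshotting one (disk and snapshot names are placeholders):

# Snapshot a persistent disk (not possible for a local SSD)
gcloud compute disks snapshot my-disk \
  --zone=us-central1-a \
  --snapshot-names=my-disk-backup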

Rarity: Common
Difficulty: Easy-Medium


GCP Cloud Storage

3. What are the different storage classes in Cloud Storage?

Answer: Cloud Storage classes are chosen based on access pattern and retention needs. The colder the class, the lower the storage price, but the higher the retrieval costs and the longer the minimum storage duration.

Class | Best fit | Minimum storage duration | Interview note
Standard | Frequently accessed data, active websites, recent backups | None | Choose this when low-latency access matters
Nearline | Data read about monthly | 30 days | Good for occasional backups or long-tail content
Coldline | Data read about quarterly | 90 days | Better for disaster recovery or rarely used archives
Archive | Data read less than yearly | 365 days | Lowest storage cost, but retrieval and early deletion costs can surprise teams

# Create bucket
gcloud storage buckets create gs://my-bucket \
  --location=us-central1 \
  --default-storage-class=STANDARD

# Upload file
gcloud storage cp myfile.txt gs://my-bucket/

# List objects
gcloud storage ls gs://my-bucket/

# Download file
gcloud storage cp gs://my-bucket/myfile.txt ./

# Change storage class
gcloud storage objects update gs://my-bucket/myfile.txt \
  --storage-class=NEARLINE
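
Storage classes pair naturally with lifecycle rules; a hedged sketch that moves objects to Nearline after 30 days and deletes them after a year (rule values are illustrative):

# Write a lifecycle policy and apply it to the bucket
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF

gcloud storage buckets update gs://my-bucket --lifecycle-file=lifecycle.json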

Rarity: Very Common
Difficulty: Easy-Medium


GCP VPC (Virtual Private Cloud)

4. What is a VPC and what are its key components?

Answer: A VPC is a global, software-defined virtual network that provides private connectivity between GCP resources such as VM instances, GKE clusters, and internal load balancers.

Key Components:

  1. Subnets: Regional IP ranges
  2. Firewall Rules: Control traffic
  3. Routes: Define traffic paths
  4. VPC Peering: Connect VPCs
  5. Cloud VPN: Connect to on-premises
# Create VPC
gcloud compute networks create my-vpc \
  --subnet-mode=custom

# Create subnet
gcloud compute networks subnets create my-subnet \
  --network=my-vpc \
  --region=us-central1 \
  --range=10.0.1.0/24

# Create firewall rule (allow SSH)
gcloud compute firewall-rules create allow-ssh \
  --network=my-vpc \
  --allow=tcp:22 \
  --source-ranges=203.0.113.0/24 \
  --target-tags=admin-ssh
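
The component list above also mentions VPC Peering; a minimal sketch of peering two networks (the peer project and network names are placeholders, and a matching peering must be created on the other side):

# Peer my-vpc with a network in another project
gcloud compute networks peerings create my-peering \
  --network=my-vpc \
  --peer-project=other-project \
  --peer-network=other-vpc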

Rarity: Very Common
Difficulty: Medium


5. How do firewall rules work in GCP?

Answer: Firewall rules control which incoming (ingress) and outgoing (egress) traffic is allowed to or from instances in a VPC network.

Characteristics:

  • Stateful (return traffic automatically allowed)
  • Applied to network or specific instances
  • Priority-based (0-65535, lower = higher priority)
  • Default: Allow egress, deny ingress
  • Best practice: limit source ranges and target only the instances that need the rule

Rule Components:

  • Direction (ingress/egress)
  • Priority
  • Action (allow/deny)
  • Source/destination
  • Protocols and ports
# Allow HTTP traffic
gcloud compute firewall-rules create allow-http \
  --network=my-vpc \
  --allow=tcp:80 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=web-server

# Allow internal communication
gcloud compute firewall-rules create allow-internal \
  --network=my-vpc \
  --allow=tcp:0-65535,udp:0-65535,icmp \
  --source-ranges=10.0.0.0/8

# Deny specific traffic
gcloud compute firewall-rules create deny-telnet \
  --network=my-vpc \
  --action=DENY \
  --rules=tcp:23 \
  --priority=1000

Rarity: Very Common
Difficulty: Medium


GCP IAM

6. Explain IAM roles and permissions in GCP.

Answer: IAM controls who can do what on which resources.

Key Concepts:

  • Member: User, service account, or group
  • Role: Collection of permissions
  • Policy: Binds members to roles

Role Types:

  1. Basic (formerly primitive): Owner, Editor, Viewer (broad)
  2. Predefined: Service-specific (e.g., Compute Admin)
  3. Custom: User-defined permissions
# Grant role to user
gcloud projects add-iam-policy-binding my-project \
  --member=user:user@example.com \
  --role=roles/compute.instanceAdmin.v1

# Grant role to service account
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer

# List IAM policy
gcloud projects get-iam-policy my-project

# Remove role
gcloud projects remove-iam-policy-binding my-project \
  --member=user:user@example.com \
  --role=roles/compute.instanceAdmin.v1
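
The role types above include custom roles; a minimal sketch of creating one (the role ID and permission list are illustrative):

# Create a custom role containing only the permissions an app actually needs
gcloud iam roles create instanceLister \
  --project=my-project \
  --title="Instance Lister" \
  --permissions=compute.instances.get,compute.instances.list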

Best Practices:

  • Use predefined roles before creating custom roles
  • Follow least privilege and avoid broad Owner/Editor access
  • Use service accounts for applications
  • Review permissions regularly and remove unused access

Rarity: Very Common
Difficulty: Medium


GCP Core Concepts

7. What are GCP regions and zones?

Answer:

Region:

  • Geographic location (e.g., us-central1, europe-west1)
  • Contains multiple zones
  • Independent failure domains
  • Choose based on latency, compliance, cost

Zone:

  • Isolated location within a region
  • Single failure domain
  • Deploy across zones for high availability

Example:

# List regions
gcloud compute regions list

# List zones
gcloud compute zones list

# Create instance in specific zone
gcloud compute instances create my-vm \
  --zone=us-central1-a
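
To illustrate the "deploy across zones for high availability" point above, a hedged sketch of a regional managed instance group (template name and size are placeholders):

# Create an instance template, then a regional MIG that spreads VMs across zones
gcloud compute instance-templates create my-template \
  --machine-type=e2-medium

gcloud compute instance-groups managed create my-mig \
  --region=us-central1 \
  --template=my-template \
  --size=3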

Rarity: Very Common
Difficulty: Easy


8. What is a service account and when do you use it?

Answer: A service account is a special account identity that applications and VMs use to authenticate to Google Cloud APIs.

Characteristics:

  • Not for humans
  • Used by applications
  • Can have IAM roles
  • Usually attached to resources or used through short-lived credentials

Use Cases:

  • VM instances accessing Cloud Storage
  • Applications calling GCP APIs
  • CI/CD pipelines
  • Cross-project access

For interviews, emphasize that long-lived service account keys are a risk. Prefer attached service accounts on Google Cloud resources, Workload Identity Federation for external workloads, and single-purpose service accounts with only the permissions they need.

# Create service account
gcloud iam service-accounts create my-sa \
  --display-name="My Service Account"

# Grant role to service account
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer

# Attach to VM
gcloud compute instances create my-vm \
  --service-account=my-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
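
To avoid downloaded keys, as noted above, you can mint short-lived credentials via impersonation; a sketch assuming your user has roles/iam.serviceAccountTokenCreator on the service account:

# Get a short-lived access token as the service account (no key file involved)
gcloud auth print-access-token \
  --impersonate-service-account=my-sa@my-project.iam.gserviceaccount.com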

Rarity: Common
Difficulty: Easy-Medium


Serverless & Messaging

9. What is Cloud Pub/Sub and when do you use it?

Answer: Cloud Pub/Sub is a fully managed messaging service for asynchronous communication.

Key Concepts:

  • Topic: Named resource to which messages are sent
  • Subscription: Named resource representing the stream of messages from a topic, to be delivered to subscribers
  • Publisher: Sends messages to topics
  • Subscriber: Receives messages from subscriptions

Architecture:

Publisher → Topic → Subscription → Subscriber

Basic Operations:

# Create topic
gcloud pubsub topics create my-topic

# Create subscription
gcloud pubsub subscriptions create my-subscription \
  --topic=my-topic \
  --ack-deadline=60

# Publish message
gcloud pubsub topics publish my-topic \
  --message="Hello, World!"

# Pull messages
gcloud pubsub subscriptions pull my-subscription \
  --auto-ack \
  --limit=10

Publisher Example (Python):

from google.cloud import pubsub_v1
import json

# Create publisher client
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('my-project', 'my-topic')

# Publish message
def publish_message(data):
    message_json = json.dumps(data)
    message_bytes = message_json.encode('utf-8')
    
    # Publish with attributes
    future = publisher.publish(
        topic_path,
        message_bytes,
        event_type='order_created',
        user_id='123'
    )
    
    print(f'Published message ID: {future.result()}')

# Batch publishing for efficiency
def publish_batch(messages):
    futures = []
    for message in messages:
        message_bytes = json.dumps(message).encode('utf-8')
        future = publisher.publish(topic_path, message_bytes)
        futures.append(future)
    
    # Wait for all messages to be published
    for future in futures:
        future.result()

Subscriber Example (Python):

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path('my-project', 'my-subscription')

def process_message(data):
    """Placeholder for application-specific processing logic."""
    pass

def callback(message):
    print(f'Received message: {message.data.decode("utf-8")}')
    print(f'Attributes: {message.attributes}')
    
    # Process message
    try:
        process_message(message.data)
        message.ack()  # Acknowledge successful processing
    except Exception as e:
        print(f'Error processing message: {e}')
        message.nack()  # Negative acknowledge (retry)

# Subscribe
streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)

print(f'Listening for messages on {subscription_path}...')

try:
    streaming_pull_future.result()
except KeyboardInterrupt:
    streaming_pull_future.cancel()

Subscription Types:

1. Pull Subscription:

# Subscriber pulls messages on demand
gcloud pubsub subscriptions create pull-sub \
  --topic=my-topic

2. Push Subscription:

# Pub/Sub pushes messages to HTTPS endpoint
gcloud pubsub subscriptions create push-sub \
  --topic=my-topic \
  --push-endpoint=https://myapp.example.com/webhook

Use Cases:

  • Event-driven architectures
  • Microservices communication
  • Stream processing pipelines
  • IoT data ingestion
  • Asynchronous task processing

Best Practices:

  • Use message attributes for filtering
  • Implement idempotent message processing
  • Set appropriate acknowledgment deadlines
  • Use dead-letter topics for failed messages (see the sketch after this list)
  • Monitor subscription backlog
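
For the dead-letter bullet above, a hedged sketch (topic name and attempt count are illustrative; the Pub/Sub service agent also needs publisher rights on the dead-letter topic and subscriber rights on the source subscription):

# Create a dead-letter topic and attach it to an existing subscription
gcloud pubsub topics create my-dead-letter-topic

gcloud pubsub subscriptions update my-subscription \
  --dead-letter-topic=my-dead-letter-topic \
  --max-delivery-attempts=5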

Rarity: Common
Difficulty: Medium


10. What are Cloud Run functions and how do you deploy one?

Answer: Cloud Run functions are the current Google Cloud functions experience for serverless, event-driven code. Many interviewers still say "Cloud Functions," but the practical idea is the same: deploy small pieces of code that run in response to HTTP requests or events without managing servers.

Triggers:

  • HTTP requests
  • Cloud Pub/Sub messages
  • Cloud Storage events
  • Firestore events
  • Firebase events

HTTP Function Example:

# main.py
import functions_framework
from flask import jsonify

@functions_framework.http
def hello_http(request):
    """HTTP Cloud Function"""
    request_json = request.get_json(silent=True)
    
    if request_json and 'name' in request_json:
        name = request_json['name']
    else:
        name = 'World'
    
    return jsonify({
        'message': f'Hello, {name}!',
        'status': 'success'
    })

Pub/Sub Function Example:

import base64
import json
import functions_framework

@functions_framework.cloud_event
def process_pubsub(cloud_event):
    """Triggered by Pub/Sub message"""
    # Decode message
    message_data = base64.b64decode(cloud_event.data["message"]["data"]).decode()
    message_json = json.loads(message_data)
    
    print(f'Processing message: {message_json}')
    
    # Process the message
    result = process_data(message_json)
    
    return result

Storage Function Example:

import functions_framework

@functions_framework.cloud_event
def process_file(cloud_event):
    """Triggered by Cloud Storage object creation"""
    data = cloud_event.data
    
    bucket = data["bucket"]
    name = data["name"]
    
    print(f'File {name} uploaded to {bucket}')
    
    # Process the file
    process_uploaded_file(bucket, name)

Deployment:

# Deploy HTTP function
gcloud functions deploy hello_http \
  --gen2 \
  --runtime=python312 \
  --trigger-http \
  --allow-unauthenticated \
  --entry-point=hello_http \
  --region=us-central1

# Deploy Pub/Sub function
gcloud functions deploy process_pubsub \
  --gen2 \
  --runtime=python312 \
  --trigger-topic=my-topic \
  --entry-point=process_pubsub \
  --region=us-central1

# Deploy Storage function (gen2 uses Eventarc event filters)
gcloud functions deploy process_file \
  --gen2 \
  --runtime=python312 \
  --trigger-event-filters="type=google.cloud.storage.object.v1.finalized" \
  --trigger-event-filters="bucket=my-bucket" \
  --entry-point=process_file \
  --region=us-central1

# Deploy with environment variables
gcloud functions deploy my_function \
  --gen2 \
  --runtime=python312 \
  --trigger-http \
  --set-env-vars DATABASE_URL=...,API_KEY=...

# Deploy with specific memory and timeout
gcloud functions deploy my_function \
  --gen2 \
  --runtime=python312 \
  --trigger-http \
  --memory=512MB \
  --timeout=300s

Requirements File:

# requirements.txt
functions-framework==3.*
google-cloud-storage==2.*
google-cloud-pubsub==2.*
requests==2.*

Testing Locally:

# Install Functions Framework
pip install functions-framework

# Run locally
functions-framework --target=hello_http --port=8080

# Test with curl
curl -X POST http://localhost:8080 \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice"}'

Monitoring:

# View logs
gcloud functions logs read hello_http \
  --region=us-central1 \
  --limit=50

# View function details
gcloud functions describe hello_http \
  --region=us-central1

Best Practices:

  • Keep functions small and focused
  • Use environment variables for configuration
  • Implement proper error handling
  • Set appropriate timeout values
  • Use Cloud Logging for debugging
  • Minimize cold start time
  • Protect HTTP functions with IAM unless they truly need to be public (see the sketch after this list)
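
For the last bullet, a sketch of keeping an HTTP function private and calling it with your own identity; FUNCTION_URL stands in for the URL printed by the deploy command:

# Deploy without --allow-unauthenticated so only IAM-authorized callers can invoke it
gcloud functions deploy hello_http \
  --gen2 \
  --runtime=python312 \
  --trigger-http \
  --region=us-central1

# Invoke with an identity token (FUNCTION_URL is a placeholder)
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" FUNCTION_URL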

Rarity: Very Common
Difficulty: Easy-Medium


CLI & Tools

11. Explain common gcloud CLI commands and configuration.

Answer: The gcloud CLI is the primary tool for managing GCP resources.

Initial Setup:

# Install gcloud SDK (macOS)
curl https://sdk.cloud.google.com | bash
exec -l $SHELL

# Initialize and authenticate
gcloud init

# Login
gcloud auth login

# Set default project
gcloud config set project my-project-id

# Set default region/zone
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Configuration Management:

# List configurations
gcloud config configurations list

# Create new configuration
gcloud config configurations create dev-config

# Activate configuration
gcloud config configurations activate dev-config

# Set properties
gcloud config set account user@example.com
gcloud config set project dev-project

# View current configuration
gcloud config list

# Unset property
gcloud config unset compute/zone

Common Commands by Service:

Compute Engine:

# List instances
gcloud compute instances list

# Create instance
gcloud compute instances create my-vm \
  --machine-type=e2-medium \
  --zone=us-central1-a

# SSH into instance
gcloud compute ssh my-vm --zone=us-central1-a

# Stop/start instance
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances start my-vm --zone=us-central1-a

# Delete instance
gcloud compute instances delete my-vm --zone=us-central1-a

Cloud Storage:

# List buckets
gcloud storage buckets list

# Create bucket
gcloud storage buckets create gs://my-bucket --location=us-central1

# Upload file
gcloud storage cp myfile.txt gs://my-bucket/

# Download file
gcloud storage cp gs://my-bucket/myfile.txt ./

# Sync directory
gcloud storage rsync ./local-dir gs://my-bucket/remote-dir --recursive

# Set bucket lifecycle
gcloud storage buckets update gs://my-bucket --lifecycle-file=lifecycle.json

IAM:

# List IAM policies
gcloud projects get-iam-policy my-project

# Add IAM binding
gcloud projects add-iam-policy-binding my-project \
  --member=user:user@example.com \
  --role=roles/viewer

# Create service account
gcloud iam service-accounts create my-sa \
  --display-name="My Service Account"

# Create and download key
gcloud iam service-accounts keys create key.json \
  --iam-account=my-sa@my-project.iam.gserviceaccount.com

Kubernetes Engine:

# List clusters
gcloud container clusters list

# Get credentials
gcloud container clusters get-credentials my-cluster \
  --zone=us-central1-a

# Create cluster
gcloud container clusters create my-cluster \
  --num-nodes=3 \
  --zone=us-central1-a

Useful Flags:

# Format output
gcloud compute instances list --format=json
gcloud compute instances list --format=yaml
gcloud compute instances list --format="table(name,zone,status)"

# Filter results
gcloud compute instances list --filter="zone:us-central1-a"
gcloud compute instances list --filter="status=RUNNING"

# Limit results
gcloud compute instances list --limit=10

# Sort results
gcloud compute instances list --sort-by=creationTimestamp

Helpful Commands:

# Get help
gcloud help
gcloud compute instances create --help

# View project info
gcloud projects describe my-project

# List available regions/zones
gcloud compute regions list
gcloud compute zones list

# View quotas
gcloud compute project-info describe \
  --project=my-project

# Enable API
gcloud services enable compute.googleapis.com

# List enabled APIs
gcloud services list --enabled

Best Practices:

  • Use configurations for different environments
  • Set default project and region
  • Use --format for scripting
  • Use --filter to narrow results
  • Enable command completion (see the sketch after this list)
  • Keep the gcloud CLI updated
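
For the last two bullets, hedged sketches for shell completion and updates (the completion script path depends on how you installed the CLI):

# Enable bash completion (path varies; this assumes the interactive installer)
source ~/google-cloud-sdk/completion.bash.inc

# Update installed components (not available for package-manager installs)
gcloud components update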

Rarity: Very Common
Difficulty: Easy-Medium


Conclusion

Preparing for a junior GCP cloud engineer interview is mostly about showing practical judgment. A strong answer explains the service, names the trade-off, and gives a simple operational step you would take in the console or CLI.

Prioritize these areas:

  1. Compute: when to use Compute Engine, GKE, Cloud Run, or Cloud Run functions
  2. Storage: bucket classes, lifecycle rules, retention, and safe access
  3. Networking: VPCs, subnets, routes, firewall rules, and load balancing basics
  4. IAM: predefined roles, service accounts, least privilege, and avoiding long-lived keys
  5. Operations: logs, metrics, quotas, gcloud configuration, and simple troubleshooting

Before the interview, create a small project, deploy one VM or function, upload a file to Cloud Storage, grant a service account a narrow role, and practice explaining each decision out loud.
