Junior Cloud Engineer AWS Interview Questions: Complete Guide

By Milad Bonakdar
Master essential AWS fundamentals with comprehensive interview questions covering EC2, S3, VPC, IAM, and core cloud concepts for junior cloud engineer roles.
Introduction
AWS (Amazon Web Services) is the leading cloud platform, offering over 200 services for compute, storage, networking, and more. As a junior cloud engineer, you'll need foundational knowledge of core AWS services and cloud concepts to build and manage cloud infrastructure.
This guide covers essential interview questions for junior AWS cloud engineers, focusing on EC2, S3, VPC, IAM, and fundamental cloud concepts.
AWS EC2 (Elastic Compute Cloud)
1. What is AWS EC2 and what are its main benefits?
Answer: EC2 (Elastic Compute Cloud) provides resizable virtual servers in the cloud.
Key Benefits:
- Elasticity: Scale up/down based on demand
- Pay-as-you-go: Only pay for what you use
- Variety: Multiple instance types for different workloads
- Global: Deploy in multiple regions worldwide
- Integration: Works seamlessly with other AWS services
Common Use Cases:
- Web hosting
- Application servers
- Development/test environments
- Batch processing
- High-performance computing
Rarity: Very Common
Difficulty: Easy
2. Explain the difference between stopping and terminating an EC2 instance.
Answer:
Stopping an Instance:
- Instance is shut down but not deleted
- EBS root volume persists
- You're charged for EBS storage
- Can restart later with same configuration
- Elastic IP remains associated
- Instance ID stays the same
Terminating an Instance:
- Instance is permanently deleted
- EBS root volume deleted (unless configured otherwise)
- No charges after termination
- Cannot restart
- Elastic IP is disassociated
- Instance ID cannot be reused
# AWS CLI examples
# Stop an instance
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
# Start a stopped instance
aws ec2 start-instances --instance-ids i-1234567890abcdef0
# Terminate an instance
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

Rarity: Very Common
Difficulty: Easy
AWS S3 (Simple Storage Service)
3. What is Amazon S3 and what are the different storage classes?
Answer: S3 is object storage for storing and retrieving any amount of data from anywhere.
Storage Classes:
| Class | Use Case | Availability | Cost |
|---|---|---|---|
| S3 Standard | Frequently accessed data | 99.99% | Highest |
| S3 Intelligent-Tiering | Unknown/changing access patterns | 99.9% | Auto-optimized |
| S3 Standard-IA | Infrequently accessed | 99.9% | Lower |
| S3 One Zone-IA | Infrequent, non-critical | 99.5% | Lowest IA |
| S3 Glacier Instant | Archive, instant retrieval | 99.9% | Very low |
| S3 Glacier Flexible | Archive, minutes-hours retrieval | 99.99% | Very low |
| S3 Glacier Deep Archive | Long-term archive, 12hr retrieval | 99.99% | Lowest |
# Create S3 bucket
aws s3 mb s3://my-bucket-name
# Upload file
aws s3 cp myfile.txt s3://my-bucket-name/
# List objects
aws s3 ls s3://my-bucket-name/
# Download file
aws s3 cp s3://my-bucket-name/myfile.txt ./

Rarity: Very Common
Difficulty: Easy-Medium
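The storage-class trade-offs in the table above can be framed as a simple decision rule. The sketch below is illustrative only: the thresholds are my assumptions, not AWS guidance, and the function names are made up for this example.

```python
# Illustrative decision helper (thresholds are assumptions, not AWS
# guidance): map an object's expected access pattern to a storage class.
def choose_storage_class(accesses_per_month, can_wait_hours_for_retrieval):
    if accesses_per_month >= 1:
        return "STANDARD"       # frequently accessed data
    if can_wait_hours_for_retrieval:
        return "GLACIER"        # archive; retrieval takes minutes to hours
    return "STANDARD_IA"        # rarely accessed, but must be instant

print(choose_storage_class(30, False))  # STANDARD
print(choose_storage_class(0, True))    # GLACIER
print(choose_storage_class(0, False))   # STANDARD_IA
```

Once chosen, the class can be set at upload time with `aws s3 cp myfile.txt s3://my-bucket-name/ --storage-class STANDARD_IA`.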
AWS VPC (Virtual Private Cloud)
4. What is AWS VPC and what are its key components?
Answer: VPC is a logically isolated virtual network where you launch AWS resources.
Key Components:
- Subnets: Segments of the VPC IP range
  - Public: Has a route to an Internet Gateway
  - Private: No direct internet access
- Internet Gateway: Enables internet access for the VPC
- NAT Gateway: Allows private subnet resources internet access (outbound only)
- Route Tables: Control traffic routing
- Security Groups: Instance-level firewall (stateful)
- Network ACLs: Subnet-level firewall (stateless)
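The subnet math behind a typical VPC layout can be sketched with Python's standard `ipaddress` module. The layout below (one public and one private subnet per AZ, carved from a /16) is a common convention, not a requirement:

```python
import ipaddress

# Carve a /16 VPC CIDR into /24 subnets: a common layout with one
# public and one private subnet per Availability Zone.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

public_a, public_b = subnets[0], subnets[1]    # route to the Internet Gateway
private_a, private_b = subnets[2], subnets[3]  # route out via the NAT Gateway

print(public_a)                # 10.0.0.0/24
print(private_a)               # 10.0.2.0/24
print(public_a.num_addresses)  # 256 (AWS reserves 5 addresses per subnet)
```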
Rarity: Very Common
Difficulty: Medium
5. What's the difference between Security Groups and Network ACLs?
Answer:
| Feature | Security Group | Network ACL |
|---|---|---|
| Level | Instance | Subnet |
| State | Stateful | Stateless |
| Rules | Allow only | Allow and Deny |
| Return Traffic | Automatic | Must be explicitly allowed |
| Application | Selective (per instance) | All instances in subnet |
| Rule Evaluation | All rules | Rules in order |
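The "rules in order" behavior of Network ACLs is worth internalizing: rules are evaluated in ascending rule-number order, the first match wins, and anything unmatched hits an implicit deny. This toy sketch (not an AWS API, just a model of the evaluation logic) makes the point:

```python
# Toy model (not an AWS API) of stateless Network ACL evaluation:
# numbered rules checked in order, first match wins, implicit deny at the end.
def evaluate_nacl(rules, port):
    """rules: list of (rule_number, (port_low, port_high), action)."""
    for number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action
    return "deny"  # implicit deny when no rule matches

nacl = [
    (100, (80, 80), "allow"),    # allow HTTP
    (200, (22, 22), "deny"),     # deny SSH
    (300, (0, 65535), "allow"),  # allow everything else
]

print(evaluate_nacl(nacl, 80))   # allow (rule 100 matches first)
print(evaluate_nacl(nacl, 22))   # deny  (rule 200 wins over rule 300)
print(evaluate_nacl(nacl, 443))  # allow (falls through to rule 300)
```

Security groups, by contrast, evaluate all rules and allow traffic if any rule matches; there are no deny rules to order.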
Example:
# Create security group
aws ec2 create-security-group \
--group-name web-sg \
--description "Web server security group" \
--vpc-id vpc-1234567890abcdef0
# Add inbound rule (allow HTTP)
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
# Add inbound rule (allow SSH from specific IP)
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 22 \
--cidr 203.0.113.0/24

Rarity: Very Common
Difficulty: Medium
AWS IAM (Identity and Access Management)
6. Explain IAM users, groups, and roles.
Answer: IAM controls access to AWS resources.
IAM Users:
- Individual identity with credentials
- Long-term credentials (password, access keys)
- Use for people or applications
IAM Groups:
- Collection of users
- Attach policies to groups
- Users inherit group permissions
IAM Roles:
- Temporary credentials
- Assumed by users, applications, or services
- No long-term credentials
- Use for EC2 instances, Lambda functions, cross-account access
// Example IAM Policy (S3 read-only)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}

Best Practices:
- Use roles for EC2 instances (not access keys)
- Follow least privilege principle
- Enable MFA for privileged users
- Rotate credentials regularly
- Use groups for permission management
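Policy documents are just JSON, so generating least-privilege policies programmatically is a common pattern. A minimal sketch, assuming a hypothetical helper name (`s3_read_only_policy` is not an AWS SDK call):

```python
import json

# Illustrative helper (not an AWS SDK function): build a least-privilege
# S3 read-only policy document for a given bucket.
def s3_read_only_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            # ListBucket needs the bucket ARN; GetObject needs the object ARN
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }

policy = s3_read_only_policy("my-bucket")
print(json.dumps(policy, indent=2))
```

The resulting document could then be attached with `aws iam put-role-policy` or passed to infrastructure-as-code tooling.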
Rarity: Very Common
Difficulty: Medium
AWS Core Concepts
7. What are AWS Regions and Availability Zones?
Answer:
AWS Region:
- Geographic location (e.g., us-east-1, eu-west-1)
- Contains multiple Availability Zones
- Isolated from other regions
- Choose based on: latency, compliance, cost
Availability Zone (AZ):
- One or more data centers within a region
- Isolated from failures in other AZs
- Connected with low-latency networking
- Deploy across multiple AZs for high availability
High Availability Example:
# Launch instances in multiple AZs
aws ec2 run-instances \
--image-id ami-0abcdef1234567890 \
--instance-type t2.micro \
--subnet-id subnet-1a \
--count 1
aws ec2 run-instances \
--image-id ami-0abcdef1234567890 \
--instance-type t2.micro \
--subnet-id subnet-1b \
--count 1

Rarity: Very Common
Difficulty: Easy
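The payoff of deploying across multiple AZs can be shown with back-of-the-envelope availability math. The 99.9% per-AZ figure and the independence of failures are illustrative assumptions, not AWS SLA numbers:

```python
# Why multi-AZ matters: with independent copies in two AZs, the app is
# down only when BOTH AZs fail at once. The 99.9% per-AZ availability
# figure is an illustrative assumption, not an AWS SLA.
az_availability = 0.999
p_az_down = 1 - az_availability
multi_az_availability = 1 - p_az_down ** 2

print(f"single AZ: {az_availability:.3f}")        # 0.999 (~8.8 hours down/year)
print(f"two AZs:   {multi_az_availability:.6f}")  # 0.999999
```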
8. What is an AMI (Amazon Machine Image)?
Answer: AMI is a template for creating EC2 instances.
Contains:
- Operating system
- Application server
- Applications
- Configuration settings
Types:
- AWS-provided: Amazon Linux, Ubuntu, Windows
- Marketplace: Third-party AMIs
- Custom: Your own AMIs
Creating Custom AMI:
# Create AMI from running instance
aws ec2 create-image \
--instance-id i-1234567890abcdef0 \
--name "My-Web-Server-AMI" \
--description "Web server with Apache configured"
# Launch instance from AMI
aws ec2 run-instances \
--image-id ami-0abcdef1234567890 \
--instance-type t2.micro \
--key-name my-key-pair

Use Cases:
- Standardized deployments
- Backup and recovery
- Auto Scaling
- Multi-region deployment
Rarity: Common
Difficulty: Easy-Medium
AWS Storage
9. What is EBS and what are the different volume types?
Answer: EBS (Elastic Block Store) provides persistent block storage for EC2 instances.
Volume Types:
| Type | Use Case | Performance | Cost |
|---|---|---|---|
| gp3 (General Purpose SSD) | Most workloads | 3,000-16,000 IOPS | Lowest SSD |
| gp2 (General Purpose SSD) | Legacy, general use | Up to 16,000 IOPS | Low |
| io2/io1 (Provisioned IOPS) | Databases, critical apps | Up to 64,000 IOPS | Highest |
| st1 (Throughput Optimized HDD) | Big data, data warehouses | High throughput | Low |
| sc1 (Cold HDD) | Infrequent access | Lowest cost | Lowest |
Creating and Attaching EBS:
# Create EBS volume
aws ec2 create-volume \
--availability-zone us-east-1a \
--size 100 \
--volume-type gp3 \
--iops 3000 \
--throughput 125
# Attach to instance
aws ec2 attach-volume \
--volume-id vol-1234567890abcdef0 \
--instance-id i-1234567890abcdef0 \
--device /dev/sdf
# Format and mount (on the instance)
# Note: on Nitro-based instances the device may appear as /dev/nvme1n1
sudo mkfs -t ext4 /dev/sdf
sudo mkdir /data
sudo mount /dev/sdf /data
# Make the mount persist across reboots (add to /etc/fstab)
echo "/dev/sdf /data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab

EBS Snapshots:
# Create snapshot
aws ec2 create-snapshot \
--volume-id vol-1234567890abcdef0 \
--description "Backup before upgrade"
# Restore from snapshot
aws ec2 create-volume \
--snapshot-id snap-1234567890abcdef0 \
--availability-zone us-east-1a \
--volume-type gp3
# Copy snapshot to another region
aws ec2 copy-snapshot \
--source-region us-east-1 \
--source-snapshot-id snap-1234567890abcdef0 \
--destination-region us-west-2

Key Features:
- Persistent: Data survives instance termination
- Snapshots: Point-in-time backups to S3
- Encryption: At-rest and in-transit
- Resizable: Increase size without downtime
- Multi-attach: io2 volumes can attach to multiple instances
Best Practices:
- Use gp3 for most workloads (better price/performance)
- Enable encryption by default
- Regular snapshots for backups
- Delete unused volumes to save costs
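The gp3 defaults used in the `create-volume` example (3,000 IOPS, 125 MiB/s) sit at the bottom of gp3's documented ranges. A small sketch validating a configuration against those published limits (the helper name is made up for this example):

```python
# Illustrative validator using gp3's documented limits:
# size 1 GiB-16 TiB, 3,000-16,000 IOPS, 125-1,000 MiB/s throughput.
def valid_gp3(size_gib, iops, throughput_mibs):
    return (1 <= size_gib <= 16384
            and 3000 <= iops <= 16000
            and 125 <= throughput_mibs <= 1000)

print(valid_gp3(100, 3000, 125))   # True: the defaults from the example above
print(valid_gp3(100, 20000, 125))  # False: above gp3's 16,000 IOPS ceiling
```

Workloads needing more than 16,000 IOPS are the signal to step up to io2/io1 provisioned-IOPS volumes.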
Rarity: Very Common
Difficulty: Easy-Medium
10. Explain S3 bucket policies and how they differ from IAM policies.
Answer: Both control access to S3, but they work differently:
IAM Policies:
- Attached to users, groups, or roles
- Control what identities can do
- Managed centrally in IAM
S3 Bucket Policies:
- Attached to S3 buckets
- Control access to specific buckets
- Can grant cross-account access
- Can restrict by IP address
Example IAM Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}

Example S3 Bucket Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-public-bucket/*"
},
{
"Sid": "RestrictByIP",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": "203.0.113.0/24"
}
}
}
]
}

Apply Bucket Policy:
# Create policy file (policy.json)
# Then apply it
aws s3api put-bucket-policy \
--bucket my-bucket \
--policy file://policy.json
# View current policy
aws s3api get-bucket-policy \
--bucket my-bucket
# Delete policy
aws s3api delete-bucket-policy \
--bucket my-bucket

Common Use Cases:
1. Public Website Hosting:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-website/*"
}
]
}

2. Cross-Account Access:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::shared-bucket",
"arn:aws:s3:::shared-bucket/*"
]
}
]
}

3. Enforce Encryption:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my-bucket/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "AES256"
}
}
}
]
}

When to Use:
- IAM Policy: Control what your users/applications can do
- Bucket Policy: Control who can access your bucket (including external accounts)
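The `NotIpAddress` condition in the RestrictByIP policy is just CIDR membership testing, which you can reproduce locally with Python's standard `ipaddress` module. A sketch of what that Deny statement evaluates (the function is illustrative, not how S3 implements it):

```python
import ipaddress

# Illustrative model of the RestrictByIP condition: a request is denied
# when its source IP is NOT inside the allowed CIDR block.
def request_allowed(source_ip, allowed_cidr="203.0.113.0/24"):
    network = ipaddress.ip_network(allowed_cidr)
    return ipaddress.ip_address(source_ip) in network

print(request_allowed("203.0.113.50"))  # True: inside the allowed range
print(request_allowed("198.51.100.7"))  # False: the Deny statement applies
```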
Rarity: Very Common
Difficulty: Medium
Monitoring & Management
11. What is CloudWatch and how do you use it for monitoring?
Answer: CloudWatch is AWS's monitoring and observability service.
Key Components:
1. Metrics:
- Numerical data points over time
- EC2: CPU, Network, Disk
- RDS: Connections, IOPS
- Custom metrics: Application-specific
# View EC2 CPU metrics
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
--start-time 2024-11-25T00:00:00Z \
--end-time 2024-11-25T23:59:59Z \
--period 3600 \
--statistics Average,Maximum
# Publish custom metric
aws cloudwatch put-metric-data \
--namespace MyApp \
--metric-name PageLoadTime \
--value 0.5 \
--unit Seconds

2. Alarms:
# Create alarm for high CPU
aws cloudwatch put-metric-alarm \
--alarm-name high-cpu-alarm \
--alarm-description "Alert when CPU exceeds 80%" \
--metric-name CPUUtilization \
--namespace AWS/EC2 \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic

3. Logs:
# Create log group
aws logs create-log-group \
--log-group-name /aws/application/myapp
# Create log stream
aws logs create-log-stream \
--log-group-name /aws/application/myapp \
--log-stream-name instance-1
# Put log events
aws logs put-log-events \
--log-group-name /aws/application/myapp \
--log-stream-name instance-1 \
--log-events \
timestamp=1234567890000,message="Application started" \
timestamp=1234567891000,message="Processing request"
# Query logs
aws logs filter-log-events \
--log-group-name /aws/application/myapp \
--filter-pattern "ERROR" \
--start-time 1234567890000

4. Dashboards:
# Create dashboard with boto3
import boto3
import json
cloudwatch = boto3.client('cloudwatch')
dashboard_body = {
"widgets": [
{
"type": "metric",
"properties": {
"metrics": [
["AWS/EC2", "CPUUtilization", {"stat": "Average"}]
],
"period": 300,
"stat": "Average",
"region": "us-east-1",
"title": "EC2 CPU Utilization"
}
},
{
"type": "metric",
"properties": {
"metrics": [
["AWS/RDS", "DatabaseConnections"]
],
"period": 300,
"stat": "Sum",
"region": "us-east-1",
"title": "RDS Connections"
}
}
]
}
cloudwatch.put_dashboard(
DashboardName='MyAppDashboard',
DashboardBody=json.dumps(dashboard_body)
)

Common Monitoring Scenarios:
Monitor EC2 Instance:
# CPU, Network, Disk metrics are automatic
# For memory, need CloudWatch agent
# Install CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
sudo rpm -U ./amazon-cloudwatch-agent.rpm
# Configure agent (creates config file)
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
# Start agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
-a fetch-config \
-m ec2 \
-s \
-c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json

Monitor Application Logs:
# Python application with CloudWatch logging
import boto3
import logging
from datetime import datetime
class CloudWatchHandler(logging.Handler):
def __init__(self, log_group, log_stream):
super().__init__()
self.client = boto3.client('logs')
self.log_group = log_group
self.log_stream = log_stream
def emit(self, record):
log_entry = self.format(record)
self.client.put_log_events(
logGroupName=self.log_group,
logStreamName=self.log_stream,
logEvents=[{
'timestamp': int(datetime.now().timestamp() * 1000),
'message': log_entry
}]
)
# Usage
logger = logging.getLogger()
logger.setLevel(logging.INFO)  # root logger defaults to WARNING
logger.addHandler(CloudWatchHandler('/aws/myapp', 'instance-1'))
logger.info("Application started")

Best Practices:
- Set up alarms for critical metrics
- Use log groups to organize logs
- Create dashboards for quick overview
- Set retention policies to control costs
- Use metric filters for log analysis
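The `--evaluation-periods 2` flag in the alarm example above means the alarm fires only after the threshold is breached for two consecutive periods, which filters out transient spikes. A toy model of that evaluation logic (not CloudWatch's actual implementation):

```python
# Toy model of CloudWatch-style alarm evaluation: the alarm fires only
# when the statistic breaches the threshold for N consecutive periods.
def alarm_state(datapoints, threshold=80, evaluation_periods=2):
    if len(datapoints) < evaluation_periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-evaluation_periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

print(alarm_state([50, 85, 90]))  # ALARM: last two periods above 80%
print(alarm_state([85, 90, 70]))  # OK: latest period back under threshold
print(alarm_state([95]))          # INSUFFICIENT_DATA: not enough datapoints
```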
Rarity: Very Common
Difficulty: Medium
Conclusion
Preparing for a junior AWS cloud engineer interview requires understanding core services and cloud concepts. Focus on:
- EC2: Instance types, lifecycle, security
- S3: Storage classes, bucket policies, versioning
- VPC: Networking, subnets, security groups
- IAM: Users, roles, policies, least privilege
- Core Concepts: Regions, AZs, AMIs
Practice using the AWS Console and CLI to gain hands-on experience. Good luck!




