Getting Started with Cloud Computing: Comparing AWS, Azure, and Google Cloud, and Deploying a Simple App


Cloud computing lets you offload infrastructure management and scale on-demand, but the number of services and choices can feel overwhelming. This tutorial compares Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) from an intermediate developer’s perspective and walks you through deploying a simple containerized web app using each provider’s modern, managed platform. By the end, you’ll be able to pick a provider confidently and ship a basic service in minutes.

High-level comparison of AWS, Azure, and Google Cloud services

Cloud service models in a minute

  • IaaS (Infrastructure as a Service): You manage virtual machines (VMs), disks, networks. Max control, more ops overhead. Examples: AWS EC2, Azure Virtual Machines, Google Compute Engine.
  • PaaS (Platform as a Service): You push code or containers; the platform manages OS, scaling, patches. Faster to start; less low-level control. Examples: AWS App Runner/Elastic Beanstalk, Azure App Service/Container Apps, Google Cloud Run/App Engine.
  • Serverless/Functions: Event-driven code; scales to zero; pay per use. Examples: AWS Lambda, Azure Functions, Google Cloud Functions.

For most teams starting out, a PaaS/serverless container (Cloud Run, App Runner, Azure Container Apps) gives the best ratio of speed to control.

AWS vs Azure vs Google Cloud: what matters

Compute and containers

  • AWS
    • VMs: EC2 for fine-grained control.
    • Containers: ECS (simpler) or EKS (Kubernetes). App Runner for serverless containers; Elastic Beanstalk for managed apps.
  • Azure
    • VMs: Azure Virtual Machines.
    • Containers: Azure Kubernetes Service (AKS), Azure Container Apps (serverless containers), Azure App Service (code or containers).
  • Google Cloud
    • VMs: Compute Engine.
    • Containers: Google Kubernetes Engine (GKE), Cloud Run (serverless containers), App Engine (PaaS).

Tip: Prefer serverless containers (Cloud Run, Azure Container Apps, AWS App Runner) for stateless web APIs and frontends.

Storage and databases

  • Object storage: AWS S3, Azure Blob Storage, Google Cloud Storage.
  • Managed databases: AWS RDS/Aurora, Azure Database (MySQL/Postgres), Cloud SQL/AlloyDB. All offer automatic backups, high availability, and scaling options.

Networking and regions

  • All three offer global footprints, load balancers, CDN (CloudFront, Azure Front Door, Cloud CDN), VPC/VNet isolation, and private networking.
  • Be mindful of regional availability. Some services are regional only. Pick regions close to your users.

IAM and security

  • AWS IAM: fine-grained, policy documents in JSON.
  • Microsoft Entra ID (formerly Azure AD) + Azure RBAC: role assignments at subscription/resource levels.
  • Google Cloud IAM: roles bound to principals at project/folder/org levels.
  • Best practice: adopt least-privilege roles, use managed identities/service accounts for deployments, rotate secrets or use a managed secrets store (AWS Secrets Manager, Azure Key Vault, Secret Manager in GCP).
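
For a quick taste, each provider's CLI can write a secret into its managed store. A minimal sketch (the secret names and values are placeholders, the Key Vault is assumed to already exist, and your identity needs the relevant permissions):

# AWS Secrets Manager
aws secretsmanager create-secret --name hello-cloud/db-password --secret-string 's3cr3t'

# Azure Key Vault (assumes a vault named kv-hello-cloud already exists)
az keyvault secret set --vault-name kv-hello-cloud --name db-password --value 's3cr3t'

# Google Secret Manager (value read from stdin)
echo -n 's3cr3t' | gcloud secrets create db-password --replication-policy=automatic --data-file=-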

Pricing and free tiers

  • All have usage-based pricing and free tiers. Serverless containers are cost-efficient for spiky or low-traffic workloads (Cloud Run can scale to zero).
  • Watch for hidden costs: data egress, NAT gateways, managed load balancers, and log ingestion/retention.

Tooling and IaC

  • CLIs: aws, az, gcloud. All are mature and scriptable.
  • Infrastructure as Code: Terraform works across clouds; native options include AWS CDK/CloudFormation, Azure Bicep/ARM, and Google Cloud Deployment Manager/Config Connector.
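
If you go the Terraform route, the day-to-day workflow is identical across providers. A typical session, assuming a main.tf that declares your resources, looks like this:

terraform init      # download the AWS/Azure/Google provider plugins
terraform plan      # preview changes against real infrastructure
terraform apply     # create or update the resources
terraform destroy   # tear everything down when finished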

Choosing a provider: a quick decision checklist

  • Language/runtime: All support common runtimes. Cloud Run/Container Apps/App Runner handle any container.
  • Operational comfort: Are you already on Microsoft 365/Azure, do you have AWS experience, or do you prefer GCP’s developer tooling?
  • Region and compliance: Data locality, certifications, and service availability.
  • Pricing predictability: Serverless for variable traffic, reserved instances for steady load.
  • Ecosystem: If you need managed ML (Vertex AI, SageMaker, Azure ML) or analytics (BigQuery, Redshift, Synapse), pick accordingly.

If you don’t have constraints, Cloud Run (GCP), Azure Container Apps, and AWS App Runner are excellent starting points for web APIs.

Prerequisites

  • A recent Docker installation.
  • Git and a terminal.
  • Accounts on AWS, Azure, and Google Cloud.
  • CLIs installed:
    • AWS: brew install awscli or see AWS docs.
    • Azure: brew install azure-cli or see Azure docs.
    • GCP: brew install --cask google-cloud-sdk or see GCP docs.
  • Log in:
    • AWS: aws configure (or aws configure sso if your org uses SSO); set default region (e.g., us-east-1).
    • Azure: az login; az account set --subscription "<SUBSCRIPTION_ID>".
    • GCP: gcloud init; gcloud auth login; gcloud config set project <PROJECT_ID>; gcloud config set run/region us-central1.
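
Before deploying anything, it helps to confirm each CLI is authenticated and pointed at the intended account, subscription, and project. These read-only checks are safe to run:

aws sts get-caller-identity     # shows the AWS account and IAM principal in use
az account show --output table  # shows the active Azure subscription
gcloud config list              # shows the active GCP account, project, and run/region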

Build a simple containerized app

We’ll create a minimal Node.js HTTP server, package it in Docker, and run it locally.

App files

package.json

{
  "name": "hello-cloud",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": { "start": "node server.js" },
  "dependencies": { "express": "^4.18.0" }
}

server.js

const express = require('express');
const app = express();
const port = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send(`Hello, Cloud! Deployed at ${new Date().toISOString()}\n`);
});

app.listen(port, () => console.log(`Listening on ${port}`));

Dockerfile

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD ["npm", "start"]

Build and test locally

docker build -t hello-cloud:v1 .
docker run -p 8080:8080 hello-cloud:v1
# In another terminal:
curl http://localhost:8080

If you see “Hello, Cloud!”, the app is ready to deploy.

Deploy options across AWS, Azure, and GCP

We’ll deploy the same container to each provider’s serverless container platform. Each path publishes the image to the provider’s registry and creates a public HTTPS URL.

Deploying a containerized app to App Runner, Container Apps, and Cloud Run

Option A: AWS App Runner (simple, managed)

App Runner runs your container with HTTPS, autoscaling, and no direct VM/Kubernetes management. The easiest path is via the console (it auto-creates IAM roles). CLI is possible but more involved.

Steps (Console-first for simplicity):

  1. Push the image to Amazon ECR
  • Create a repository in ECR named hello-cloud.
  • Authenticate and push:
AWS_REGION=us-east-1
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
aws ecr create-repository --repository-name hello-cloud --region $AWS_REGION || true
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
docker tag hello-cloud:v1 $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/hello-cloud:v1
docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/hello-cloud:v1
  2. Create an App Runner service
  • In AWS Console > App Runner > Create service.
  • Source: Container registry > Amazon ECR.
  • Select your image and tag v1.
  • Runtime: Port 8080. Auto deploy: off (for manual control initially).
  • Create and wait for “Running” status.
  3. Test
  • Open the Default domain URL (e.g., https://<random>.<region>.awsapprunner.com/).
  • You should see “Hello, Cloud!”

Notes and tips:

  • App Runner creates a service role; keep least privilege.
  • To update: push a new tag and deploy a new revision in App Runner, or enable auto-deploy from ECR.
  • Costs: You’re billed for provisioned CPU/memory and requests; pause or delete the service when you’re done to avoid ongoing charges.

CLI alternative (advanced):

  • You can script aws apprunner create-service with an ECR image identifier and a service role; consult AWS docs for the latest JSON service configuration schema.
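
As a rough sketch only (the JSON fields follow the App Runner CreateService API, but verify against the current docs; the access role ARN is a placeholder for a role App Runner can assume to pull from ECR):

# ACCOUNT_ID and AWS_REGION are the variables set in the ECR push step above.
aws apprunner create-service \
  --service-name hello-cloud \
  --source-configuration '{
    "ImageRepository": {
      "ImageIdentifier": "'"$ACCOUNT_ID"'.dkr.ecr.'"$AWS_REGION"'.amazonaws.com/hello-cloud:v1",
      "ImageRepositoryType": "ECR",
      "ImageConfiguration": { "Port": "8080" }
    },
    "AuthenticationConfiguration": {
      "AccessRoleArn": "arn:aws:iam::'"$ACCOUNT_ID"':role/service-role/AppRunnerECRAccessRole"
    },
    "AutoDeploymentsEnabled": false
  }'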

Option B: Azure Container Apps (serverless containers, managed revisions)

Azure Container Apps runs containers on a serverless control plane with automatic scaling.

  1. Create an Azure Container Registry (ACR) and push the image
RESOURCE_GROUP=rg-hello-cloud
LOCATION=eastus
ACR_NAME=myhellocloudacr$RANDOM

az group create --name $RESOURCE_GROUP --location $LOCATION
az acr create --name $ACR_NAME --resource-group $RESOURCE_GROUP --sku Basic
az acr login --name $ACR_NAME

ACR_LOGIN_SERVER=$(az acr show -n $ACR_NAME --query loginServer -o tsv)
docker tag hello-cloud:v1 $ACR_LOGIN_SERVER/hello-cloud:v1
docker push $ACR_LOGIN_SERVER/hello-cloud:v1
  2. Enable the Container Apps extension and create an environment
az extension add --name containerapp --upgrade
az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights

LOG_ANALYTICS_WORKSPACE=law-hello-cloud$RANDOM
LOG_RG=$RESOURCE_GROUP

az monitor log-analytics workspace create -g $LOG_RG -n $LOG_ANALYTICS_WORKSPACE -l $LOCATION
LOG_ID=$(az monitor log-analytics workspace show -g $LOG_RG -n $LOG_ANALYTICS_WORKSPACE --query customerId -o tsv)
LOG_KEY=$(az monitor log-analytics workspace get-shared-keys -g $LOG_RG -n $LOG_ANALYTICS_WORKSPACE --query primarySharedKey -o tsv)

ENV_NAME=cae-hello-cloud
az containerapp env create \
  -g $RESOURCE_GROUP -n $ENV_NAME -l $LOCATION \
  --logs-workspace-id $LOG_ID --logs-workspace-key $LOG_KEY
  3. Deploy the container app
APP_NAME=hello-cloud
az containerapp create \
  -g $RESOURCE_GROUP -n $APP_NAME \
  --environment $ENV_NAME \
  --image $ACR_LOGIN_SERVER/hello-cloud:v1 \
  --target-port 8080 \
  --ingress external \
  --registry-server $ACR_LOGIN_SERVER \
  --query properties.configuration.ingress.fqdn -o tsv

Open the returned FQDN in your browser to verify the response.

Notes:

  • Revisions let you do blue/green and traffic splitting.
  • Scale rules support HTTP and event-driven scaling.
  • Keep an eye on Log Analytics costs.

Option C: Google Cloud Run (serverless, scales to zero)

Cloud Run deploys a container as a fully managed HTTPS service and scales down to zero when idle.

Option 1: Build from source (no local Docker push needed):

gcloud run deploy hello-cloud \
  --source . \
  --region us-central1 \
  --allow-unauthenticated \
  --port 8080

Option 2: Push an image to Artifact Registry, then deploy:

REGION=us-central1
REPO=hello-repo
PROJECT_ID=$(gcloud config get-value project)
gcloud artifacts repositories create $REPO --repository-format=docker --location=$REGION || true
gcloud auth configure-docker $REGION-docker.pkg.dev

docker tag hello-cloud:v1 $REGION-docker.pkg.dev/$PROJECT_ID/$REPO/hello-cloud:v1
docker push $REGION-docker.pkg.dev/$PROJECT_ID/$REPO/hello-cloud:v1

gcloud run deploy hello-cloud \
  --image $REGION-docker.pkg.dev/$PROJECT_ID/$REPO/hello-cloud:v1 \
  --region $REGION \
  --allow-unauthenticated \
  --port 8080

The command outputs a service URL (e.g., https://hello-cloud-xyz-uc.a.run.app). Open it in your browser.

Notes:

  • Cloud Run supports min/max instances, request concurrency, CPU allocation while idle, and VPC access (see the tuning sketch below).
  • Request timeouts default to 5 minutes; adjust as needed.
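
For example, to keep one warm instance, cap autoscaling, allow 80 concurrent requests per instance, and set a 300-second request timeout on the service deployed above:

# Values are illustrative; adjust them to your traffic profile.
gcloud run services update hello-cloud \
  --region us-central1 \
  --min-instances 1 \
  --max-instances 10 \
  --concurrency 80 \
  --timeout 300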

Updating and rolling out changes

  • Bump the version: Make a code change, rebuild/push a new image tag (v2), and update the service.
  • Zero-downtime deployments:
    • App Runner: Deployment creates a new revision; traffic shifts automatically.
    • Container Apps: Use revisions and traffic-split flags to canary.
    • Cloud Run: Deploy creates a new revision; you can pin or split traffic.

Example (Cloud Run canary):

gcloud run services update-traffic hello-cloud \
  --to-revisions REVISION-NEW=10,REVISION-OLD=90
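
Example (Azure Container Apps canary), a sketch assuming a v2 image has already been pushed to ACR; the revision names are placeholders, and traffic splitting requires the app’s revision mode to be set to multiple:

# Switch to multiple-revision mode once, then roll out v2 and split traffic 90/10.
az containerapp revision set-mode -g $RESOURCE_GROUP -n $APP_NAME --mode multiple
az containerapp update -g $RESOURCE_GROUP -n $APP_NAME --image $ACR_LOGIN_SERVER/hello-cloud:v2
az containerapp ingress traffic set -g $RESOURCE_GROUP -n $APP_NAME \
  --revision-weight REVISION-OLD=90 REVISION-NEW=10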

Observability and troubleshooting

  • Logs:
    • AWS: CloudWatch Logs (App Runner streams automatically).
    • Azure: Log Analytics + Container Apps insights.
    • GCP: Cloud Logging (gcloud logging read from the CLI).
  • Metrics:
    • Look for request count, latency, error rate, CPU/memory usage, instance count.
  • Debugging tips:
    • Ensure PORT matches service configuration.
    • Validate health checks (HTTP 200 on /).
    • Check the container image architecture (linux/amd64 vs arm64); see the build command after this list.
    • Confirm registry permissions and image pull secrets.
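
If you develop on arm64 hardware (e.g., Apple Silicon), note that these serverless container platforms generally expect linux/amd64 images. One way to force the target architecture, assuming Docker Buildx is available:

# Build explicitly for linux/amd64, then confirm the local image's architecture.
docker buildx build --platform linux/amd64 -t hello-cloud:v1 .
docker image inspect hello-cloud:v1 --format '{{.Architecture}}'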

Best practices

  • Configuration and secrets
    • Store non-secret config in environment variables.
    • Use managed secret stores (Secrets Manager, Key Vault, Secret Manager).
    • Avoid baking secrets into images.
  • Security
    • Restrict public ingress where possible; use private networking and identity-aware proxies when needed.
    • Use least-privilege roles for deployment automation.
  • Cost control
    • Prefer serverless for bursty/low-traffic services.
    • Set max instances and autoscaling limits.
    • Clean up unused images, old revisions, and log retention (see the teardown sketch after this list).
  • Reliability
    • Implement health endpoints and reasonable timeouts.
    • Use multiple zones/regions for critical workloads.
    • Add a CDN or edge caching for static content.
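
When you’re done experimenting, tearing down the tutorial resources stops any ongoing charges. A sketch using the names from this tutorial (the App Runner service ARN is looked up first; double-check before deleting anything you still need):

# AWS: remove the App Runner service and the ECR repository.
SERVICE_ARN=$(aws apprunner list-services \
  --query "ServiceSummaryList[?ServiceName=='hello-cloud'].ServiceArn" --output text)
aws apprunner delete-service --service-arn "$SERVICE_ARN"
aws ecr delete-repository --repository-name hello-cloud --force

# Azure: deleting the resource group removes the container app, environment, ACR, and logs.
az group delete --name $RESOURCE_GROUP --yes

# GCP: remove the Cloud Run service and the Artifact Registry repository.
gcloud run services delete hello-cloud --region $REGION --quiet
gcloud artifacts repositories delete $REPO --location=$REGION --quiet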

Common pitfalls

  • Port mismatch: Your app listens on a different port than the platform’s expected port.
  • Missing auth to pull images: Grant the service access to your registry (ACR/ECR/Artifact Registry).
  • Region confusion: Resources in different regions can cause latency and cross-region charges.
  • Egress costs: Calling external APIs or cross-region services can add up; keep data close.
  • Cold starts: Serverless can scale to zero; set min instances for latency-sensitive endpoints (Cloud Run min-instances, Container Apps minReplicas, App Runner provisioned settings).
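
For instance, keeping one replica warm on Azure Container Apps looks like this (the Cloud Run equivalent, --min-instances, appears in the tuning sketch earlier; both trade a little idle cost for lower latency):

# Keep at least one replica running so requests never hit a cold start.
az containerapp update -g $RESOURCE_GROUP -n $APP_NAME --min-replicas 1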

Where to go next

  • Add HTTPS custom domains and managed certificates.
  • Attach a database:
    • AWS: RDS/Aurora with security groups and private networking.
    • Azure: Azure Database for PostgreSQL/MySQL with VNet integration.
    • GCP: Cloud SQL with Cloud Run connector.
  • CI/CD:
    • GitHub Actions to build/push and deploy via CLIs.
    • Native: AWS CodePipeline, Azure DevOps/GitHub Actions, Google Cloud Build/Deploy.
  • Infrastructure as Code: Capture your setup with Terraform so you can reproduce environments.

By starting with serverless containers on App Runner, Azure Container Apps, or Cloud Run, you minimize operational overhead while keeping a clean path to scale. Choose the provider that best aligns with your team’s tools and constraints, then automate everything from builds to rollouts.
