AWS Cloud Practitioner

Selecting the Right AWS Instance Type

When selecting the appropriate AWS EC2 instance type for your workloads, there are several factors to consider to ensure optimal performance and cost efficiency. Each instance type is designed to meet specific needs in terms of compute power, memory, storage, and networking.

1. General Purpose Instances

Best for balanced workloads.

General purpose instances offer a balanced mix of compute, memory, and networking resources, making them versatile for a variety of workloads. These instances are ideal for applications that do not require specialized configurations for any one resource type. 

Use cases:

  • Application servers: Running web servers, app servers, or microservices.
  • Gaming servers: Hosting online games that require moderate compute, memory, and networking power.
  • Backend servers for enterprise applications: These include middleware and backend processing that don’t have very high computational demands.
  • Small and medium-sized databases: Running relational or NoSQL databases that don’t require massive memory or storage, but benefit from balanced resources.

Example scenario: Suppose you are deploying a content management system (CMS) or a lightweight web application. These applications typically have moderate requirements for compute, memory, and networking, making them a perfect fit for general purpose instances.
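In practice, the family choice comes down to the instance type string you pass when launching. Below is a minimal boto3 sketch; the AMI ID is a placeholder, and t3.medium is simply one example of a general purpose type.

```python
import boto3

ec2 = boto3.client("ec2")

# The InstanceType string encodes the family choice: "t3.medium" is a
# general purpose type; a compute optimized workload might pick
# "c5.large", and a memory optimized one "r5.large".
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```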


2. Compute Optimized Instances

Best for compute-intensive applications.

Compute optimized instances are designed for workloads that require high-performance processors. 

Use cases:

  • Web servers and application servers: For high-performance websites or application backends that demand superior CPU performance.
  • Gaming servers: Game servers that require low latency and high throughput for fast computations.
  • Batch processing workloads: Compute-bound applications that process large amounts of data or transactions in parallel.
  • High-performance computing (HPC): For simulations, financial modeling, or scientific computations that require heavy CPU usage.

Example scenario: If you are building a real-time recommendation engine or a data analytics pipeline, both of which require extensive CPU power for processing data at scale, compute optimized instances would provide the best performance and value.


3. Memory Optimized Instances

Best for memory-intensive applications.

Memory optimized instances are tailored for workloads that need large amounts of memory (RAM) for faster data processing. Memory plays a crucial role in application performance, particularly for workloads involving large datasets that need to be stored in memory to be processed effectively.

Use cases:

  • High-performance databases: Memory-intensive relational databases like SAP HANA, Microsoft SQL Server, or in-memory NoSQL databases like Redis or Memcached.
  • Real-time data processing: Big data workloads that require large amounts of memory to handle high-speed data ingestion and analysis.
  • Data-intensive applications: Running large-scale in-memory analytics or data warehousing systems, where the ability to hold datasets entirely in memory can greatly improve performance.

Example scenario: If you’re running a real-time analytics engine or working with big data frameworks like Apache Spark or Hadoop, a memory optimized instance ensures fast access to the data and reduces latency by allowing larger portions of data to be stored and processed in memory.


4. Accelerated Computing Instances

Best for specialized computational tasks requiring hardware acceleration.

Accelerated computing instances use hardware accelerators (e.g., GPUs, FPGAs) to offload and speed up specific tasks, delivering faster performance for certain types of computational workloads. These instances are ideal for applications that can leverage parallel processing or specialized computing resources.

Use cases:

  • Graphics processing: Applications like video rendering, 3D modeling, and graphics-intensive games that require substantial GPU resources.
  • Machine learning and AI workloads: Training deep learning models or running AI inference tasks with the help of GPUs or other accelerators.
  • Scientific computing: Tasks that involve complex simulations and require the power of GPUs or FPGAs for parallel computation.

Example scenario: If you’re working on machine learning or deep learning models using frameworks like TensorFlow or PyTorch, leveraging accelerated computing instances equipped with NVIDIA GPUs can significantly speed up the training process.


5. Storage Optimized Instances

Best for data-intensive workloads requiring high I/O performance.

Storage optimized instances are designed for applications that need high, sequential read/write access to very large datasets on local storage. These instances are optimized to handle workloads that require high Input/Output Operations Per Second (IOPS), low-latency access, and large amounts of storage.

Use cases:

  • Distributed file systems: Running Hadoop, HDFS, or GlusterFS for scalable storage solutions.
  • Data warehousing: Analytics workloads that must load large amounts of data quickly and process it efficiently.
  • High-frequency online transaction processing (OLTP): Financial services and e-commerce applications where fast and reliable access to transactional data is crucial.
  • Big data analytics: Applications that involve complex data queries and data lakes that need fast I/O operations to perform at scale.

Example scenario: If your application handles high-frequency trading, financial transaction processing, or other high-volume data management applications that require fast access to large datasets, a storage optimized instance can provide the necessary performance to meet your needs.


EC2 (Elastic Compute Cloud) Pricing: A Quick Overview

1. On-Demand Instances

  • Best for: Short-term, unpredictable workloads that cannot be interrupted.
  • Pricing Model: Pay-as-you-go; no upfront cost or long-term commitment.
  • Use Cases:
    • Developing and testing applications.
    • Running applications with unpredictable usage patterns.
  • Limitations:
    • Not ideal for long-term workloads (1 year or more) as Reserved Instances provide better savings.

2. Amazon EC2 Savings Plans

  • Best for: Users who can commit to consistent compute usage for a 1- or 3-year term.
  • Pricing Model: Save up to 72% over On-Demand prices by committing to a consistent usage level.
  • How It Works:
    • Any usage up to your commitment is charged at the discounted rate.
    • Usage beyond the commitment is billed at standard On-Demand rates.
  • Cost Management Tools: AWS Cost Explorer provides insights on potential savings and recommends the best Savings Plan based on historical usage.
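The same recommendations are available programmatically through the Cost Explorer API. A minimal boto3 sketch, assuming at least 30 days of usage history; the response handling below summarizes the structure rather than covering every field:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Ask for a Compute Savings Plan recommendation based on the last
# 30 days of usage, for a 1-year, no-upfront commitment.
rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)
summary = rec.get("SavingsPlansPurchaseRecommendation", {}).get(
    "SavingsPlansPurchaseRecommendationSummary", {})
print(summary.get("EstimatedMonthlySavingsAmount"))
```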

3. Reserved Instances (RIs)

  • Best for: Long-term, predictable workloads that require consistent compute capacity.
  • Pricing Model: Pay upfront for a 1- or 3-year term to receive a discount on On-Demand rates.
  • Types:
    • Standard Reserved: Best for steady, long-term workloads.
    • Convertible Reserved: Offers flexibility to change instance attributes.
    • Scheduled Reserved: Reserved capacity for specific time windows.
  • Cost Savings: Greater savings with a 3-year commitment.
  • Post-Term: After the Reserved Instance term ends, you continue using EC2 instances but are billed at On-Demand rates unless you purchase a new Reserved Instance.

4. Spot Instances

  • Best for: Workloads with flexible start/end times or that can tolerate interruptions.
  • Pricing Model: Pay up to 90% less than On-Demand prices by utilizing unused EC2 capacity.
  • How It Works:
    • Spot Instances are ideal for background tasks like data processing or batch jobs.
    • Your instance may be interrupted if capacity is needed elsewhere, making it unsuitable for critical applications.
  • Limitations: Spot Instances may be unavailable or interrupted due to demand fluctuations.
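One common way to request Spot capacity is the InstanceMarketOptions parameter of run_instances. A minimal boto3 sketch (the AMI ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Mark the market type as "spot". If AWS reclaims the capacity, the
# instance is interrupted (here: terminated), so the workload must
# tolerate being stopped at short notice.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```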

5. Dedicated Hosts

  • Best for: Users requiring physical isolation of EC2 instances or those with specific licensing needs.
  • Pricing Model: You pay for an entire physical server dedicated to your use, making this more expensive than other EC2 options.
  • How It Works:
    • Use your own software licenses or purchase additional ones for compliance.
    • Available as On-Demand or Reserved Hosts.
  • Cost: Most expensive EC2 option due to dedicated hardware and isolation.

Reserved Instances vs. Savings Plans

1. Commitment and Flexibility

  • Reserved Instances (RIs):

    • Commitment: You commit to a specific instance type, region, platform, and tenancy for the duration (1 or 3 years).
    • Flexibility: Limited. Standard RIs allow only minor changes (such as instance size within the same family); changing attributes like instance family, OS, or tenancy requires Convertible RIs.
    • Types:
      • Standard Reserved Instances: Fixed instance type, region, platform, and tenancy.
      • Convertible Reserved Instances: Allows some flexibility to change instance attributes (e.g., instance family, OS, tenancy) during the term.
  • Savings Plans:

    • Commitment: You commit to a consistent amount of compute usage (measured in $/hr) for 1 or 3 years.
    • Flexibility: Much higher. Savings Plans apply across various EC2 instance types, regions, operating systems, and even services like AWS Lambda and AWS Fargate.
    • Types:
      • Compute Savings Plans: Broadest flexibility, covering EC2 usage across any instance family, region, OS, or tenancy.
      • EC2 Instance Savings Plans: Tied to a specific instance family and region, but still provides some flexibility.

2. Cost Savings

  • Reserved Instances (RIs):

    • Discount: Significant discounts (up to 75%) compared to On-Demand pricing, especially with longer terms and upfront payments.
    • Savings: The savings depend on the specific instance attributes you commit to (e.g., instance type, region, platform).
  • Savings Plans:

    • Discount: Up to 72% savings compared to On-Demand prices, with flexible usage options.
    • Savings: You get savings regardless of instance type, region, or OS (for Compute Savings Plans), which provides more room for optimization.

3. Use Case

  • Reserved Instances (RIs):
    • Best for: Long-term, predictable workloads that require a specific instance type and configuration. Ideal for applications with steady, consistent demand where you know the instance types and regions you’ll use.
  • Savings Plans:
    • Best for: Organizations that have a more dynamic or diverse usage pattern. Great for teams that need flexibility in instance types, regions, or even cross-service usage (like AWS Lambda, Fargate, etc.).

4. Usage Flexibility

  • Reserved Instances (RIs):

    • Usage: Tied strictly to the committed instance attributes (e.g., instance type, region, OS, and tenancy).
    • Instance Modifications: Only Convertible RIs offer the flexibility to change attributes. Standard RIs are more rigid.
  • Savings Plans:

    • Usage: Applies to any instance type, family, and region (for Compute Savings Plans) or specific to EC2 instance families and regions (for EC2 Instance Savings Plans).
    • Instance Modifications: High flexibility in scaling workloads across different instance types and regions without losing your discount.

5. Billing Structure

  • Reserved Instances (RIs) & Savings Plans:
    • Billing: You can choose between three payment options:
      • All Upfront: Best savings, full payment at the start.
      • Partial Upfront: Pay a portion upfront, with the remainder billed monthly.
      • No Upfront: Pay monthly, but with a slightly higher price.

6. Term Length

  • Reserved Instances (RIs):
    • Term: Fixed 1- or 3-year commitment.
  • Savings Plans:
    • Term: Also a 1- or 3-year commitment; the flexibility lies in what the commitment covers, not in its length.

7. Post-Term Usage

  • Reserved Instances (RIs) & Savings Plans:
    • Post-Term: After the term ends, you will be charged On-Demand prices unless you renew or purchase new Reserved Instances.
 

EC2 Auto Scaling

Amazon EC2 Auto Scaling automatically adjusts the number of EC2 instances running based on your application’s demand. This means:

  • Increase capacity when traffic spikes.
  • Reduce capacity when traffic drops.

This approach ensures optimal application performance and cost efficiency, as you pay only for the instances in use.

Key Concepts of EC2 Auto Scaling

  1. Scaling Out vs. Scaling In:
    • Scaling Out: Adding more EC2 instances when demand increases (e.g., website traffic spikes).
    • Scaling In: Removing EC2 instances when demand decreases, saving costs during off-peak times.

How Does EC2 Auto Scaling Work?

When configuring an Auto Scaling group, you define three main settings:

  • Minimum Capacity: The minimum number of EC2 instances that should always be running. For example, if you set the minimum to 1, the Auto Scaling group will always ensure at least one EC2 instance is running.

  • Desired Capacity: The ideal number of EC2 instances you want to run. If your application requires two instances for optimal performance, you can set this number to 2, even though the minimum is set to 1.

  • Maximum Capacity: The maximum number of instances you want the Auto Scaling group to scale out to. For instance, you may want to scale out to a maximum of 4 EC2 instances to avoid over-provisioning and excessive costs.

If the Desired Capacity is not specified, it defaults to the Minimum Capacity.
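These three settings map directly onto the MinSize, DesiredCapacity, and MaxSize parameters when you create a group. A minimal boto3 sketch, assuming a launch template and subnets already exist (names and IDs are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Minimum 1, desired 2, maximum 4 -- matching the example above.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",            # hypothetical group name
    LaunchTemplate={
        "LaunchTemplateName": "web-template",  # placeholder template
        "Version": "$Latest",
    },
    MinSize=1,
    DesiredCapacity=2,
    MaxSize=4,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",  # placeholder subnet IDs
)
```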

Types of Scaling in EC2 Auto Scaling

  1. Dynamic Scaling:

    • How It Works: Adjusts the number of EC2 instances based on real-time demand.
    • Use Case: If you notice a surge in traffic, dynamic scaling adds more instances to handle the increased load. Conversely, when demand drops, it reduces the instances to save costs (a policy sketch follows this list).
  2. Predictive Scaling:

    • How It Works: Predicts future demand based on historical data and schedules the necessary number of EC2 instances in advance.
    • Use Case: If you know your application experiences regular demand spikes (e.g., during specific hours of the day), predictive scaling automatically adjusts capacity in anticipation, avoiding delays and ensuring the right number of instances are ready when needed.
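Dynamic scaling is commonly configured as a target tracking policy: you choose a metric and a target value, and the group scales out or in to hold that target. A minimal boto3 sketch against the hypothetical group created above:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU across the group near 50%: scale out when it
# rises above the target, scale in when it falls below.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group from above
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```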

Elastic Load Balancing (ELB)

  1. Traffic Distribution:
    • Elastic Load Balancing acts as a single point of contact for all incoming web traffic. It automatically distributes this traffic across your EC2 instances, ensuring that no single instance bears the full load.
  2. Integration with EC2 Auto Scaling:
    • ELB works alongside Amazon EC2 Auto Scaling to dynamically adjust resources in response to traffic demands. As EC2 instances are added or removed based on the scaling policy, the load balancer ensures traffic is evenly distributed across all available instances.
    • ELB ensures that traffic is always routed to healthy EC2 instances. If an instance becomes unhealthy or fails, the load balancer stops sending traffic to it, automatically rerouting to healthy instances.
  3. Types of Load Balancers:
    • Application Load Balancer (ALB): Ideal for HTTP and HTTPS traffic, supporting complex routing based on URL paths, hostnames, and HTTP headers.
    • Network Load Balancer (NLB): Best for high-performance applications requiring ultra-low latency and handling millions of requests per second.
    • Classic Load Balancer (CLB): An older version that can balance both HTTP/HTTPS and TCP traffic.
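To tie this back to Auto Scaling: a load balancer typically forwards traffic to a target group, and the Auto Scaling group registers its instances into that target group. A minimal boto3 sketch for an Application Load Balancer; all names and IDs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# An internet-facing ALB spanning two subnets (placeholder IDs).
lb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-aaa", "subnet-bbb"],
    Type="application",
    Scheme="internet-facing",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# A target group that health-checks registered instances over HTTP.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Forward incoming HTTP traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Let the Auto Scaling group register its instances automatically.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",  # hypothetical group from earlier
    TargetGroupARNs=[tg_arn],
)
```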

Monolithic vs. Microservices Architecture

Monolithic Applications

A monolithic application is one where all the components are tightly coupled together. These components, such as databases, servers, user interfaces, and business logic, work as a single unit. While this approach can simplify development, it introduces certain challenges:

  • Tightly Coupled Components: All components are interdependent, meaning changes to one component can affect the others.
  • Single Point of Failure: If one component fails, the entire application may fail, impacting availability and reliability.
  • Limited Scalability: Scaling the application means scaling the entire system, which can be inefficient and resource-intensive.

Microservices Architecture

In contrast to monolithic applications, microservices break down the application into smaller, independent services. Each service is responsible for a specific business function and communicates with other services via lightweight protocols. Microservices offer several advantages:

  • Loose Coupling: Services are independent of one another, reducing the risk that a failure in one component will affect the others.
  • Resilience: If one service fails, the others can continue functioning, improving application availability and reliability.
  • Scalability: Individual services can be scaled independently, optimizing resource use and cost efficiency.
  • Flexibility: Different components can be developed, deployed, and updated independently, allowing for faster development cycles.

Microservices on AWS

  • Simple Notification Service (SNS): A fully managed messaging service that enables communication between decoupled microservices through push notifications or event-driven messages.

  • Simple Queue Service (SQS): A highly scalable and reliable message queue service that allows microservices to send, receive, and process messages asynchronously. It ensures that services can communicate without direct dependencies on each other.

Comparison of SNS and SQS

SNS and SQS are two widely used messaging services in AWS, but they serve different purposes and are designed for different use cases. Both are essential for building scalable, event-driven architectures, especially in microservices, but they differ in how they handle communication between components.

SNS (Simple Notification Service)

Purpose:

  • Pub/Sub Messaging: SNS is designed for publish-subscribe (pub/sub) messaging, where messages are sent to multiple subscribers or endpoints simultaneously.
  • It enables applications to send messages to multiple recipients, such as SQS queues, HTTP endpoints, Lambda functions, email addresses, or mobile devices (SMS).
 

Key Features:

  1. Fan-out: One message published to an SNS topic is sent to multiple subscribers, allowing for a one-to-many communication pattern.
  2. Push-based: SNS delivers messages to subscribed endpoints immediately (push model).
  3. Multiple Protocols Supported: Can send messages to SQS queues, Lambda functions, HTTP/S endpoints, emails, and even mobile devices via SMS.
  4. Real-time Messaging: Ideal for use cases requiring real-time notifications or alerts.
 

Use Cases:

  • Real-Time Notifications: For example, sending push notifications to users’ mobile devices or alerts to system administrators.
  • Fan-out Message Delivery: When a single message needs to trigger multiple actions (e.g., alerting multiple microservices or systems simultaneously).
  • Event-Driven Architecture: Integrating different systems or microservices that need to be informed immediately of new events.
 

Example:

  • A system triggers an SNS message when an order is placed. This message is sent to multiple subscribers: an SQS queue for order processing, a Lambda function for inventory updates, and a mobile app for customer notification.
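The fan-out in that example reduces to one topic with several subscriptions. A minimal boto3 sketch; the subscriber ARNs are placeholders, and the queue and Lambda permissions that SNS needs for delivery are omitted:

```python
import boto3

sns = boto3.client("sns")

# One topic for "order placed" events (hypothetical name).
topic_arn = sns.create_topic(Name="order-placed")["TopicArn"]

# Each subscriber receives a copy of every published message.
sns.subscribe(
    TopicArn=topic_arn, Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:orders",  # placeholder
)
sns.subscribe(
    TopicArn=topic_arn, Protocol="lambda",
    Endpoint="arn:aws:lambda:us-east-1:123456789012:function:update-inventory",  # placeholder
)

# Publish once; SNS pushes the message to all subscribers.
sns.publish(TopicArn=topic_arn, Message='{"order_id": "12345"}')
```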
 

SQS (Simple Queue Service)

Purpose:

  • Message Queuing: SQS is designed for message queuing, providing a reliable, scalable, and decoupled way to communicate between different components of a distributed system.
  • It uses a point-to-point communication model where messages are stored in queues and can be retrieved by consumers (workers, services, etc.) for processing.
 

Key Features:

  1. Decoupling: SQS decouples the producer and consumer, allowing asynchronous communication. This means that the producer doesn’t need to wait for the consumer to process the message.
  2. Polling-based: Consumers poll the queue to retrieve messages. This is a pull-based model, meaning consumers decide when to fetch messages. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
  3. Message Retention: Messages can remain in a queue for up to 14 days, allowing for delayed processing or retry mechanisms.
  4. At-Least-Once Delivery: Ensures that messages are delivered at least once to a consumer (can be more in rare cases due to retries).

Use Cases:

  • Task Queuing: Use SQS for background processing tasks like image or video processing, email sending, or batch jobs.
  • Decoupling Microservices: In microservices architectures, SQS helps decouple services so they can process messages at different rates without direct dependencies.
  • Asynchronous Processing: For systems that need to handle high-volume, asynchronous workflows (e.g., job queues, processing pipelines).
 

Example:

  • An e-commerce platform uses SQS to handle order processing. The order service sends messages to an SQS queue. A worker service pulls messages from the queue to process the orders, independent of when the orders are placed.
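The poll-process-delete cycle described above looks like the following minimal boto3 sketch (the queue name is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # hypothetical queue

# Producer: enqueue a message and return immediately.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "12345"}')

# Consumer: poll for work. WaitTimeSeconds enables long polling,
# which reduces empty responses.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    # Deleting acknowledges the message; otherwise it becomes visible
    # again after the visibility timeout (at-least-once delivery).
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```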
 

SNS vs. SQS

| Feature | SNS (Simple Notification Service) | SQS (Simple Queue Service) |
| --- | --- | --- |
| Communication Model | Publish/Subscribe (one-to-many) | Point-to-Point (one-to-one) |
| Message Delivery | Push-based (immediate delivery) | Pull-based (consumer retrieves) |
| Use Case | Real-time notifications, fan-out delivery | Decoupling components, background processing |
| Delivery Guarantees | At-least-once delivery | At-least-once delivery (with retries) |
| Message Retention | No message retention (transient) | Up to 14 days |
| Integration | Supports multiple protocols (Lambda, SQS, HTTP, SMS, email, etc.) | Primarily used for message queuing between services |
| Throughput | High throughput for broadcast messaging | High throughput with message queuing |
| Cost Model | Charged per published message and notification sent | Charged per message sent, received, and retained |

 

Serverless Computing: Focus on Code, Not Infrastructure

  • No Server Management: AWS handles the servers, allowing you to focus on the application code.
  • Automatic Scaling: Serverless platforms scale automatically based on demand. 
  • Cost Efficiency: You only pay for the compute time your code consumes. 
  • Event-Driven: Serverless applications are triggered by events like file uploads, HTTP requests, or changes in data.

AWS Lambda: The Core of Serverless Computing

How AWS Lambda Works:

  1. Code Deployment: You upload your code (e.g., in Python, Node.js, or Java) to Lambda.
  2. Event Trigger: Lambda functions are linked to event sources such as AWS S3, DynamoDB, API Gateway, or any HTTP endpoint.
  3. Execution: When an event occurs, Lambda triggers the function, runs your code, and automatically scales to handle the event volume. For example, when users upload images to an S3 bucket, Lambda can automatically trigger a function that resizes each image and saves it to another bucket, with minimal configuration and no server management (see the sketch after this list).
  4. Billing: AWS charges based on the compute time consumed.
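For the image example in step 3, a handler might look like the sketch below. The destination bucket is hypothetical, and the actual resize step (e.g., with the Pillow library) is reduced to a comment:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # S3 invokes the function with one or more object-created records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # ... resize image_bytes here, e.g. with the Pillow library ...
        s3.put_object(
            Bucket="resized-images-bucket",  # hypothetical destination
            Key=key,
            Body=image_bytes,
        )
    return {"status": "done"}
```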

Benefits of AWS Lambda:

  • Zero Administration: No need to manage or provision servers.
  • Scalability: Automatically scales depending on the number of events or triggers.
  • Flexible Triggers: Supports a variety of event sources from other AWS services or external sources (like HTTP requests).

Containers: Packaging Code with Dependencies

Containers provide a standardized environment to package and deploy applications. Containers ensure consistency between development, testing, and production environments, simplifying deployment and reducing “it works on my machine” problems. Unlike virtual machines, containers are lightweight and share the underlying host OS, making them faster to start and less resource-intensive.

Benefits of Containers:

  • Consistency: Containers encapsulate code, libraries, and dependencies, providing consistent execution environments across different stages of development and production.
  • Portability: Containers can run across any environment, from developers’ laptops to production environments in AWS.
  • Efficiency: Containers share the host operating system’s kernel, so they are more efficient than full VMs, requiring less memory and startup time.

Elastic Container Service (ECS): Managing Containers at Scale

As applications grow in complexity, managing multiple containers becomes more challenging. Amazon ECS is a fully managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications in AWS.

Key Features of Amazon ECS:

  • Scalability: ECS automatically scales the infrastructure to meet the demands of your application. You can scale both horizontally (more instances) and vertically (increasing resources per instance).
  • Docker Support: ECS supports Docker containers, a popular containerization platform. You can deploy Docker containers directly or use AWS Fargate to manage container instances without needing to manage EC2 instances.
  • Integration with AWS Services: ECS integrates seamlessly with other AWS services such as Amazon RDS, Amazon S3, and AWS CloudWatch, enabling full-stack management.
  • Task Definitions: ECS allows you to define the configuration for your containers, including the Docker image, CPU and memory requirements, and networking settings. A task is a running instance of a containerized application.
  • Service Discovery & Load Balancing: ECS integrates with Elastic Load Balancing (ELB) and AWS Cloud Map for service discovery and load balancing, ensuring that requests are routed to the correct container instances.

 

How ECS Works:

  1. Task Definitions: Create a task definition that specifies how containers should run (e.g., which Docker image to use, memory and CPU requirements).
  2. Clusters: Organize tasks into clusters—groups of EC2 instances or Fargate-managed infrastructure.
  3. Scaling: ECS can scale your containers based on demand. You can specify auto-scaling policies for the ECS service or use AWS Fargate for serverless container management.
  4. Deployment: ECS handles container orchestration, allowing you to update, scale, and manage containerized applications with minimal overhead.
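Step 1 can be expressed directly through the ECS API. A minimal Fargate task definition sketch; the family name, image URI, and role ARN are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# A Fargate task with a single container; CPU and memory are declared
# at the task level, as Fargate requires.
ecs.register_task_definition(
    family="web-service",  # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",     # 0.25 vCPU
    memory="512",  # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",  # placeholder
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)
```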

 

Example Use Case: A company might use ECS to run hundreds of containers that power various microservices in their application. ECS allows them to scale individual services based on demand, monitor their health, and update services without downtime, all while managing the underlying infrastructure.


Containers vs. Serverless (Lambda)

| Feature | AWS Lambda (Serverless) | Amazon ECS (Containers) |
| --- | --- | --- |
| Infrastructure Management | None; AWS handles everything. | You manage container configurations and orchestration (or use Fargate for serverless). |
| Scaling | Auto-scaling based on triggers. | Manual or auto-scaling based on container load and cluster resources. |
| Billing | Charged for compute time (execution duration). | Charged for resources (CPU, memory) used by containers. |
| Event-Driven | Ideal for event-driven architectures. | Better suited for long-running, stateful applications. |
| Use Case | Best for lightweight, event-driven functions (e.g., real-time processing). | Suitable for containerized microservices, batch jobs, and multi-container apps. |

 

Conclusion: Serverless vs. Containers on AWS

  • Serverless (AWS Lambda) is perfect for event-driven, stateless applications with minimal infrastructure management. It’s an excellent choice for tasks like image processing, data streaming, and microservices where each function executes based on triggers.

  • Containers (Amazon ECS) provide more control over the environment and are ideal for complex applications, microservices architectures, or when you need to manage multiple services or applications that require specific environments, dependencies, or configuration.

In AWS, you can combine both to leverage the strengths of each approach: use Lambda for simple, event-driven tasks and ECS for more complex, long-running workloads.
