- 1. Configuring Elastic Load Balancer (ELB)
- a. Characteristics
- i. Region-wide load balancer
- 1. Deploy to multiple availability zones
- ii. Fully managed load balancer
- iii. Can be used internally or externally
- iv. Layer-7 functionality
- 1. SSL termination and processing
- a. Important because if SSL isn’t terminated at the ELB, it has to happen on the instances
- b. Offloading SSL to the ELB frees instance CPU for application work
- v. Cookie-based sticky sessions
- 1. AWS recommends storing session state in a database or cache rather than relying on ELB sticky sessions (a small sketch of enabling a cookie policy follows this list)
- vi. Integrates with auto scaling
- vii. ELB EC2 health checks / Amazon CloudWatch
- 1. Ability to have advanced metric based load balancing
- a. CPU / memory / etc.
- viii. Integrates with Route 53 (cloud-based DNS)
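For reference, a minimal boto3 sketch of enabling cookie-based stickiness on a Classic ELB; the load balancer name, listener port, and expiration period are placeholder assumptions:

```python
# Minimal sketch: ELB-generated cookie stickiness on a Classic ELB (boto3).
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create a load-balancer-generated cookie policy that expires after 5 minutes.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="my-web-elb",      # assumed ELB name
    PolicyName="sticky-5min",
    CookieExpirationPeriod=300,
)

# Attach the policy to the listener on port 80.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-web-elb",
    LoadBalancerPort=80,
    PolicyNames=["sticky-5min"],
)
```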
- b. Redundancy
- i. You can set up a redundant ELB for failover, but it may not help much because ELB is a software service; a bug or service issue would likely affect both.
- c. Setup
- i. Listener Configuration
- 1. Which protocol/port to listen on and which protocol/port to forward to on the instances (see the boto3 sketch after this section)
- ii. A checkbox in the setup wizard makes the load balancer internal rather than internet-facing
- iii. Add subnets in every Availability Zone you want to distribute load across
- iv. SSL Certificates
- 1. Upload the SSL certificate once; it can be reused for later listeners and load balancers
- v. Health Check
- 1. Protocol: HTTP/HTTPS/etc
- 2. Ping Port: 80
- 3. Ping Path: /index.html
- a. This could be a blank page or a dynamic page that verifies the application is actually working (maybe even a smoke test)
- 4. You can also control the frequency of the check and the thresholds that determine whether an instance is unhealthy
- vi. Cross-Zone Load Balancing is enabled with a single checkbox
- vii. Connection Draining
- 1. Gracefully drains existing connections off instances being taken down for updates
- viii. Create a CNAME record on your domain pointing at the ELB’s DNS name to give it a friendlier hostname
- ix. Enable alarms through CloudWatch
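Putting the setup steps above together, a minimal boto3 sketch against the Classic ELB API; every name, ARN, subnet ID, security group, and hosted zone ID below is a placeholder assumption:

```python
# Minimal sketch of the ELB setup steps above (boto3, Classic ELB API).
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Listener configuration: listen on 80/443 and forward to the instances on 80.
elb.create_load_balancer(
    LoadBalancerName="my-web-elb",
    Listeners=[
        {"Protocol": "HTTP", "LoadBalancerPort": 80,
         "InstanceProtocol": "HTTP", "InstancePort": 80},
        # SSL is terminated at the ELB; traffic to the instances stays plain HTTP.
        {"Protocol": "HTTPS", "LoadBalancerPort": 443,
         "InstanceProtocol": "HTTP", "InstancePort": 80,
         "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert"},
    ],
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # one per AZ to balance across
    SecurityGroups=["sg-0123456789abcdef0"],
    # Scheme="internal",   # uncomment to make the ELB internal instead of internet-facing
)

# Health check: ping /index.html on port 80, with frequency and thresholds.
elb.configure_health_check(
    LoadBalancerName="my-web-elb",
    HealthCheck={
        "Target": "HTTP:80/index.html",
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 3,
    },
)

# Cross-zone load balancing and connection draining are ELB attributes.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-web-elb",
    LoadBalancerAttributes={
        "CrossZoneLoadBalancing": {"Enabled": True},
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)

# Friendly DNS name: a CNAME in Route 53 pointing at the ELB's generated DNS name.
dns_name = elb.describe_load_balancers(
    LoadBalancerNames=["my-web-elb"]
)["LoadBalancerDescriptions"][0]["DNSName"]

boto3.client("route53").change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                   # assumed hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": dns_name}],
        },
    }]},
)
```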
- 2. Auto Scaling
- a. Features
- i. Elasticity – grow and shrink your environment based on performance metrics
- ii. Bootstrapping / dynamic configuration
- 1. AMI to set up the base OS
- 2. Then use Chef/Puppet to augment provisioning of the new instance with the current version/configuration of your software
- iii. CloudWatch or manual schedule
- 1. e.g., if an instance hits 90% CPU utilization, launch 1–2 more instances
- 2. Always have a minimum / maximum
- 3. Manual schedule
- a. Based on time of year / time of month (see the scheduled-action sketch after this list)
- iv. Notifications
- 1. e.g., orders flow through an SQS queue; if the queue depth crosses a threshold, that metric can also trigger auto scaling
- v. It’s free! (you only pay for the underlying instances and CloudWatch alarms)
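As an example of the manual/scheduled side, a small boto3 sketch of a recurring scheduled action; the group name, schedule, and sizes are assumptions:

```python
# Minimal sketch: a recurring scheduled scaling action (boto3).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale the group up every weekday morning ahead of expected load (UTC cron).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-web-asg",          # assumed group name
    ScheduledActionName="weekday-morning-scale-up",
    Recurrence="0 8 * * 1-5",                   # 08:00 UTC, Monday-Friday
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)
```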
- b. How it works
- i. Auto Scaling Groups
- 1. Apply policies across multiple instances
- ii. Launch config
- 1. Gold image → AMI base image and augment with dynamic configuration
- 2. Used as the mechanism to provision each new instance
- iii. Scaling Plans
- 1. How to provision new instances and terminate them
- c. Setup
- i. First Step: Create Launch Configuration (a sketch of these options follows these steps)
- 1. Need to select an AMI or your own custom AMI
- 2. Select an instance type (size)
- 3. Name, purchasing option, IAM role, enable monitoring w/ CloudWatch
- 4. Specify whether you want public IP addresses or not
- 5. Add volumes / storage
- 6. Select a VPC and security group to place the new instance in
- 7. Key pairs
- a. If you know you’ll never access the instances via RDP, you don’t need a key pair!
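A minimal boto3 sketch of these launch configuration choices; the AMI, key pair, security group, and role names are assumptions:

```python
# Minimal sketch: creating a launch configuration (boto3).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-web-lc-v1",
    ImageId="ami-0123456789abcdef0",            # base or custom "gold" AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",                       # omit if you never need direct access
    SecurityGroups=["sg-0123456789abcdef0"],
    IamInstanceProfile="my-instance-role",
    InstanceMonitoring={"Enabled": True},       # detailed CloudWatch monitoring
    AssociatePublicIpAddress=False,             # set True if instances need public IPs
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda",
         "Ebs": {"VolumeSize": 20, "VolumeType": "gp2"}},
    ],
)
```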
- ii. Next: Create Auto Scaling Group
- 1. Name, size (# of instances), target VPC and subnet
- 2. Integrates with ELB, but you can also use third-party load balancers (NetScaler, etc.); a sketch of creating the group follows
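A minimal boto3 sketch of creating the group, reusing the hypothetical launch configuration and ELB names from the sketches above; the subnet IDs are also assumptions:

```python
# Minimal sketch: creating the Auto Scaling group and attaching it to the ELB (boto3).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-web-asg",
    LaunchConfigurationName="my-web-lc-v1",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # target VPC subnets
    LoadBalancerNames=["my-web-elb"],           # register new instances with the ELB
    HealthCheckType="ELB",                      # use the ELB health check, not just EC2 status
    HealthCheckGracePeriod=300,
)
```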
- iii. Next: Setup Scaling Plans
- 1. Min + Max
- 2. Increase Group Size policy
- a. Trigger / alarm that monitors the instances
- b. Actions
- i. Can have steps like this:
- 1. 70–80% CPU utilization: add 1 instance
- 2. 80–90% CPU utilization: add 2 instances
- 3. >90% CPU utilization: add 3 instances
- ii. You can also use % of group instead of constant numbers
- iii. You can ‘add’ or ‘set’ instance count
- iv. Warm-up time: lets new instances boot and start handling load so the environment can settle before scaling up further
- c. Step size (how many instances to add)
- 3. Decrease Group Size Policy
- a. Works just like the Increase Group Size policy but steps the group size down as resource usage declines (see the step-scaling sketch below)
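A minimal boto3 sketch of the step-scaling plan described above, wired to CloudWatch CPU alarms; the group name, thresholds, and step sizes are assumptions:

```python
# Minimal sketch: CPU-based step scaling policies plus the CloudWatch alarms
# that trigger them (boto3).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Increase Group Size: steps are relative to the alarm threshold of 70% CPU.
scale_up = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",
    PolicyName="cpu-step-scale-up",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",          # 'add' instances; ExactCapacity would 'set'
    EstimatedInstanceWarmup=300,                # warm-up time before counting new instances
    StepAdjustments=[
        # 70-80% CPU: add 1, 80-90%: add 2, >90%: add 3
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 10, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 10, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 2},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)

cloudwatch.put_metric_alarm(
    AlarmName="my-web-asg-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-web-asg"}],
    Period=300,
    EvaluationPeriods=1,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_up["PolicyARN"]],       # alarm triggers the scale-up policy
)

# Decrease Group Size: mirrors the policy above, stepping down on low CPU.
scale_down = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",
    PolicyName="cpu-step-scale-down",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[
        {"MetricIntervalUpperBound": 0, "ScalingAdjustment": -1},
    ],
)

cloudwatch.put_metric_alarm(
    AlarmName="my-web-asg-cpu-low",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-web-asg"}],
    Period=300,
    EvaluationPeriods=3,
    Threshold=30.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[scale_down["PolicyARN"]],
)
```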