Metadata is mostly used to organize and present the Parameters more clearly when deploying this template through the AWS CloudFormation console.
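A minimal sketch of what that Metadata section typically looks like; the group and parameter names here are illustrative, not the actual values from this template:

```yaml
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: "Network Configuration"   # illustrative group name
        Parameters:                          # parameter names are assumptions
          - VpcId
          - SubnetIds
    ParameterLabels:
      VpcId:
        default: "Which VPC should this environment deploy into?"
```

The Interface key only affects how the console renders the parameter form; it has no effect on CLI or API deployments.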
These are the input parameters for this template. All of these parameters must be supplied for this template to be deployed.
These are all of the actual AWS resources created for this application.
This defines a Layer in the OpsWorks stack. It configures the Chef cookbooks to pull in from S3, as well as a list of specific Chef recipes to run.
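A hedged sketch of such a Layer resource. The recipe names ecs-utilities::efs_mount, ecs-docker::docker_login_ecr, and ecs-docker::docker_pull_deploy come from the notes below; the logical names and the install_docker recipe are assumptions for illustration. Note the Setup ordering: the EFS mount recipe runs before anything Docker-related.

```yaml
AppLayer:
  Type: AWS::OpsWorks::Layer
  Properties:
    StackId: !Ref OpsWorksStack          # assumed logical name
    Type: custom
    Name: app-layer                      # illustrative
    Shortname: app
    EnableAutoHealing: true
    AutoAssignElasticIps: false
    AutoAssignPublicIps: false
    CustomRecipes:
      Setup:
        - ecs-utilities::efs_mount       # EFS must be mounted first
        - ecs-docker::install_docker     # hypothetical recipe name
      Deploy:
        - ecs-docker::docker_login_ecr
        - ecs-docker::docker_pull_deploy
```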
You have to mount the EFS volumes BEFORE installing and running Docker; otherwise Docker doesn't pick up the mounts.
This defines the application for the OpsWorks stack. There's one Application for each Docker container to be deployed. For KFS, it's just the one container.
The Application mostly provides a secure place to store secret texts for use with the Docker container passed in via environment variables.
This env var sets up an EFS volume to be mounted at '/efs/shared'. The chef recipe ecs-utilities::efs_mount does this.
Various Docker-related env vars used by the ecs-docker::docker_login_ecr and ecs-docker::docker_pull_deploy Chef recipes.
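A sketch of how these env vars hang off the OpsWorks App resource; the variable names and logical references are illustrative assumptions, not the template's actual keys. Secrets get `Secure: true`, which is the "secure place" mentioned above:

```yaml
KfsApp:
  Type: AWS::OpsWorks::App
  Properties:
    StackId: !Ref OpsWorksStack        # assumed logical name
    Name: kfs
    Type: other
    Environment:
      - Key: EFS_MOUNT_POINT           # illustrative names; consumed by
        Value: /efs/shared             #   ecs-utilities::efs_mount
      - Key: DOCKER_IMAGE              # consumed by ecs-docker recipes
        Value: !Ref DockerImage
      - Key: REGISTRY_PASSWORD
        Value: !Ref RegistryPassword
        Secure: true                   # stored encrypted by OpsWorks
```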
This part is a bit of a mess, because we have to splice the environment name into the list of paths to mount into the Docker container. There's a single "Non-Prod" and "Prod" EFS volume, and the specific environments are folders within them, e.g. "dev", "tst", etc. So for the specific environment being deployed, only the appropriate folder should be mapped in.
The other folders (conf, security, and logs) are not shared across EFS; they're built out on the host by the kfs::install_configs Chef recipe.
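The splicing described above can be done with Fn::Sub (or Fn::Join). A hypothetical sketch, assuming an EnvironmentName parameter and a DOCKER_VOLUMES variable name, neither of which is confirmed by this template:

```yaml
# Only the folder for this environment is mapped in from EFS;
# conf/security/logs come from the host instead.
- Key: DOCKER_VOLUMES    # hypothetical variable name
  Value: !Sub "/efs/shared/${EnvironmentName}:/opt/kfs/shared,/opt/kfs/conf:/opt/kfs/conf"
```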
Now that we need multiple docker containers, we need multiple OpsWorks Apps.
The way the ecs-docker Chef recipes deal with multiple Apps requires the Docker Login environment variables be on their own App, rather than just part of the single App.
In order to properly respond to initial HTTP traffic on port 80, we need some way to detect and redirect that traffic to HTTPS on 443. This App launches a very stripped down nginx container which only does that.
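A sketch of what that redirect App might look like, staying with the one-App-per-container pattern; every name below is an illustrative assumption:

```yaml
RedirectApp:
  Type: AWS::OpsWorks::App
  Properties:
    StackId: !Ref OpsWorksStack       # assumed logical name
    Name: http-redirect
    Type: other
    Environment:
      - Key: DOCKER_IMAGE             # illustrative variable names
        Value: nginx-redirect         # stripped-down nginx that 301s 80 -> 443
      - Key: DOCKER_PORTS
        Value: "80:80"
```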
UAFAWS-301: Add these two variables, hydrated by Jenkins, so KFS_docker_pull_deploy can calculate the CloudWatch LOG_Group_Name.
Rhubarb is used to run batch jobs, and is deployed as a separate container alongside the application container on the same host.
This defines the OpsWorks stack itself. It's mostly network configs and defaults. The main thing in here is the source for the Chef cookbooks to be used by any layers.
This creates and turns on an OpsWorks EC2 Instance to actually run the application. It is assigned to a specific layer, and that's where it gets the Chef recipes to run.
Currently only a single App Instance is created. Subsequent instances could be created in OpsWorks. IMPORTANT! Any resources created outside of CloudFormation cannot be removed via CloudFormation stack deletion. To be safe, delete all Instances out of OpsWorks before deleting the CloudFormation Stack.
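A minimal sketch of the Instance resource; instance type, subnet, and logical names are assumptions:

```yaml
AppInstance:
  Type: AWS::OpsWorks::Instance
  Properties:
    StackId: !Ref OpsWorksStack       # assumed logical names
    LayerIds:
      - !Ref AppLayer                 # layer assignment supplies the recipes
    InstanceType: t3.medium           # illustrative size
    RootDeviceType: ebs
    SubnetId: !Ref AppSubnet
```

Because OpsWorks can also create instances outside CloudFormation, only instances declared here are cleaned up on stack deletion, hence the warning above.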
Only bring up this instance if we are running production. There's no need for two instances for SUP.
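Conditional creation like this is done with a CloudFormation Condition. A hedged sketch, assuming an Environment parameter and a "prd" value, which may differ from this template's actual names:

```yaml
Conditions:
  IsProduction: !Equals [!Ref Environment, "prd"]  # illustrative check

Resources:
  SecondAppInstance:
    Type: AWS::OpsWorks::Instance
    Condition: IsProduction           # resource only created in production
    Properties:
      StackId: !Ref OpsWorksStack
      LayerIds:
        - !Ref AppLayer
      InstanceType: t3.medium         # illustrative
```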
Defines the Load Balancer for KFS. See http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb.html. UAFAWS-198: change the subnets (in Jenkins) and Scheme (in the ELB) to the private network. Two options are available for the ELB Scheme: "internet-facing" (public) vs. "internal" (private).
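A sketch of the classic ELB resource showing where Scheme and the listeners live; ports, certificate, and logical names are illustrative assumptions:

```yaml
LoadBalancer:
  Type: AWS::ElasticLoadBalancing::LoadBalancer
  Properties:
    Scheme: internal                  # or "internet-facing" for a public ELB
    Subnets: !Ref ElbSubnets          # assumed parameter, set from Jenkins
    SecurityGroups:
      - !Ref ELBSecurityGroup
    Listeners:
      - LoadBalancerPort: "443"       # HTTPS terminated at the ELB
        InstancePort: "8080"          # illustrative app container port
        Protocol: HTTPS
        SSLCertificateId: !Ref CertificateArn
      - LoadBalancerPort: "80"        # forwarded to the redirect container
        InstancePort: "80"
        Protocol: HTTP
```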
This is the role that is given to the OpsWorks service, which allows OpsWorks to manage AWS resources. This is a standard policy provided by AWS; see the AWS documentation for the OpsWorks service role.
This is the IAM role that will be applied to the OpsWorks EC2 Instances. Any AWS specific permissions that the node might need should be defined here.
Access to the S3 Bucket which holds application specific files that need to be loaded on each application node. (ojdbc.jar, encrypted keystores, etc)
Access to CloudWatch for the log group and log stream from each application environment.
Allow the Rhubarb container on this host to update ELB Listeners
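The statements above can be sketched as a single instance role; bucket names, log group reference, and the exact ELB actions are assumptions for illustration:

```yaml
InstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal: {Service: ec2.amazonaws.com}
          Action: sts:AssumeRole
    Policies:
      - PolicyName: app-node-access          # illustrative names/ARNs
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow                  # S3 bucket with node files
              Action: [s3:GetObject, s3:ListBucket]
              Resource:
                - arn:aws:s3:::app-files-bucket
                - arn:aws:s3:::app-files-bucket/*
            - Effect: Allow                  # CloudWatch Logs for this environment
              Action: [logs:CreateLogStream, logs:PutLogEvents, logs:DescribeLogStreams]
              Resource: !GetAtt AppLogGroup.Arn
            - Effect: Allow                  # Rhubarb updates ELB listeners
              Action:
                - elasticloadbalancing:DescribeLoadBalancers
                - elasticloadbalancing:CreateLoadBalancerListeners
                - elasticloadbalancing:DeleteLoadBalancerListeners
              Resource: "*"
```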
Create a CloudWatch Log Group for each application environment. This allows us to set the retention timeframe. UAFAWS-302: create a dependency between the CloudWatch Log Group and the EC2 instance running the CloudWatch Logs agent; otherwise, during CloudFormation stack deletion, the agent can recreate the log group after the first delete pass.
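A sketch of the log group with an explicit dependency on the instance so deletion ordering is correct; the name pattern and retention value are illustrative assumptions:

```yaml
AppLogGroup:
  Type: AWS::Logs::LogGroup
  DependsOn: AppInstance                        # delete the agent's host first
  Properties:
    LogGroupName: !Sub "/kfs/${EnvironmentName}"  # illustrative naming scheme
    RetentionInDays: 90                           # illustrative retention
```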
This is just a little construct to connect a set of roles together into a profile. The profile is referenced in the OpsWorks stack itself.
Security group for the OpsWorks application instances themselves. It needs to permit incoming traffic from the ELB, and any other authorized incoming sources.
This is the Security Group that wraps the Load Balancer. This controls what network traffic is allowed into the ELB. Just web traffic is allowed from anywhere.
Defines the Security Group for the RDS Database. This restricts DB access to only the devices in the InstanceSecurityGroup, so our App nodes.
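Restricting DB access to the app nodes is done by referencing the instance security group as the ingress source rather than a CIDR range. A sketch, with an assumed Oracle port:

```yaml
DBSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow DB access from app nodes only
    VpcId: !Ref VpcId                 # assumed parameter
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 1521                # illustrative Oracle listener port
        ToPort: 1521
        SourceSecurityGroupId: !Ref InstanceSecurityGroup
```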
Create a DNS entry in Route53 for this environment. This creates a CNAME pointing at the DNS name of the Load Balancer.
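A sketch of that record; the zone and hostname pattern are placeholder assumptions:

```yaml
DnsRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.edu.      # illustrative zone (note trailing dot)
    Name: !Sub "kfs-${EnvironmentName}.example.edu."
    Type: CNAME
    TTL: "300"
    ResourceRecords:
      - !GetAtt LoadBalancer.DNSName  # points at the ELB's DNS name
```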
Output values that can be viewed from the AWS CloudFormation console.
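A minimal example of the shape of an Outputs section; the specific output names here are assumptions:

```yaml
Outputs:
  LoadBalancerDNS:
    Description: DNS name of the KFS load balancer
    Value: !GetAtt LoadBalancer.DNSName
  EnvironmentUrl:
    Description: Friendly URL for this environment
    Value: !Sub "https://kfs-${EnvironmentName}.example.edu"
```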
KFS Environment CloudFormation Deployment
This CloudFormation template will build out a whole KFS environment, including an OpsWorks stack, Load Balancer, Application nodes and Database.