service;question;answers /ec2/autoscaling/faqs/;What is Amazon EC2 Auto Scaling?;Amazon EC2 Auto Scaling is a fully managed service designed to launch or terminate Amazon EC2 instances automatically to help ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon EC2 Auto Scaling helps you maintain application availability through fleet management for EC2 instances, which detects and replaces unhealthy instances, and by scaling your Amazon EC2 capacity up or down automatically according to conditions you define. You can use Amazon EC2 Auto Scaling to automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs. /ec2/autoscaling/faqs/;When should I use Amazon EC2 Auto Scaling vs. AWS Auto Scaling?;You should use AWS Auto Scaling to manage scaling for multiple resources across multiple services. AWS Auto Scaling lets you define dynamic scaling policies for multiple EC2 Auto Scaling groups or other resources using predefined scaling strategies. Using AWS Auto Scaling to configure scaling policies for all of the scalable resources in your application is faster than managing scaling policies for each resource via its individual service console. It’s also easier, as AWS Auto Scaling includes predefined scaling strategies that simplify the setup of scaling policies. /ec2/autoscaling/faqs/;How is Predictive Scaling Policy different from Predictive Scaling of AWS Auto Scaling plan?;Predictive Scaling Policy brings the similar prediction algorithm offered through AWS Auto Scaling plan as a native scaling policy in EC2 Auto Scaling. You can use predictive scaling directly through AWS Command Line Interface (CLI), EC2 Auto Scaling Management Console, and AWS SDKs similar to how you use other scaling policies, such as Simple Scaling or Target Tracking etc. You don’t have to create an AWS Auto Scaling plan just for using predictive scaling. /ec2/autoscaling/faqs/;What are the benefits of using Amazon EC2 Auto Scaling?;"Amazon EC2 Auto Scaling helps to maintain your Amazon EC2 instance availability. Whether you are running one Amazon EC2 instance or thousands, you can use Amazon EC2 Auto Scaling to detect impaired Amazon EC2 instances, and replace the instances without intervention. This ensures that your application has the compute capacity that you expect. You can use Amazon EC2 Auto Scaling to automatically scale your Amazon EC2 fleet by following the demand curve for your applications, reducing the need to manually provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the ASG when the average utilization of your Amazon EC2 fleet is high; and similarly, you can set a condition to remove instances in increments when CPU utilization is low. You can also use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing (ELB) to distribute traffic to your instances within the ASG. If you have predictable load changes, you can use Predictive Scaling policy to proactively increase capacity ahead of upcoming demand. Amazon EC2 Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization." /ec2/autoscaling/faqs/;What is fleet management and how is it different from dynamic scaling?;If your application runs on Amazon EC2 instances, then you have what’s referred to as a ‘fleet’. 
Fleet management refers to the functionality that automatically replaces unhealthy instances and maintains your fleet at the desired capacity. Amazon EC2 Auto Scaling fleet management ensures that your application is able to receive traffic and that the instances themselves are working properly. When Auto Scaling detects a failed health check, it can replace the instance automatically. /ec2/autoscaling/faqs/;What is target tracking?;Target tracking is a type of scaling policy that you can use to set up dynamic scaling for your application in just a few simple steps. With target tracking, you select a load metric for your application, such as CPU utilization or request count, set the target value, and Amazon EC2 Auto Scaling adjusts the number of EC2 instances in your ASG as needed to maintain that target. It acts like a home thermostat, automatically adjusting the system to keep the environment at your desired temperature. For example, you can configure target tracking to keep CPU utilization for your fleet of web servers at 50%. From there, Amazon EC2 Auto Scaling launches or terminates EC2 instances as required to keep the average CPU utilization at 50%. /ec2/autoscaling/faqs/;What is an EC2 Auto Scaling group (ASG)?;An Amazon EC2 Auto Scaling group (ASG) contains a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of fleet management and dynamic scaling. For example, if a single application operates across multiple instances, you might want to increase the number of instances in that group to improve the performance of the application, or decrease the number of instances to reduce costs when demand is low. Amazon EC2 Auto Scaling will automatically adjust the number of instances in the group to maintain a fixed number of instances even if an instance becomes unhealthy, or based on criteria that you specify. You can find more information about ASGs in the Amazon EC2 Auto Scaling User Guide. /ec2/autoscaling/faqs/;What happens to my Amazon EC2 instances if I delete my ASG?;If you have an EC2 Auto Scaling group (ASG) with running instances and you choose to delete the ASG, the instances will be terminated and the ASG will be deleted. /ec2/autoscaling/faqs/;How do I know when EC2 Auto Scaling is launching or terminating the EC2 instances in an EC2 Auto Scaling group?;When you use Amazon EC2 Auto Scaling to scale your applications automatically, it is useful to know when EC2 Auto Scaling is launching or terminating the EC2 instances in your EC2 Auto Scaling group. Amazon SNS coordinates and manages the delivery or sending of notifications to subscribing clients or endpoints. You can configure EC2 Auto Scaling to send an SNS notification whenever your EC2 Auto Scaling group scales. Amazon SNS can deliver notifications as HTTP or HTTPS POST, email (SMTP, either plain-text or in JSON format), or as a message posted to an Amazon SQS queue. For example, if you configure your EC2 Auto Scaling group to use the autoscaling:EC2_INSTANCE_TERMINATE notification type, and your EC2 Auto Scaling group terminates an instance, it sends an email notification. This email contains the details of the terminated instance, such as the instance ID and the reason that the instance was terminated. /ec2/autoscaling/faqs/;What is a launch configuration?;A launch configuration is a template that an EC2 Auto Scaling group uses to launch EC2 instances.
When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an EC2 instance before, you specified the same information in order to launch the instance. When you create an EC2 Auto Scaling group, you must specify a launch configuration. You can specify your launch configuration with multiple EC2 Auto Scaling groups. However, you can only specify one launch configuration for an EC2 Auto Scaling group at a time, and you can't modify a launch configuration after you've created it. Therefore, if you want to change the launch configuration for your EC2 Auto Scaling group, you must create a launch configuration and then update your EC2 Auto Scaling group with the new launch configuration. When you change the launch configuration for your EC2 Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected. You can see the launch configurations section of the EC2 Auto Scaling User Guide for more details. /ec2/autoscaling/faqs/;How many instances can an EC2 Auto Scaling group have?;You can have as many instances in your EC2 Auto Scaling group as your EC2 quota allows. /ec2/autoscaling/faqs/;What happens if a scaling activity causes me to reach my Amazon EC2 limit of instances?;Amazon EC2 Auto Scaling cannot scale past the Amazon EC2 limit of instances that you can run. If you need more Amazon EC2 instances, complete the Amazon EC2 instance request form. /ec2/autoscaling/faqs/;Can EC2 Auto Scaling groups span multiple AWS regions?;EC2 Auto Scaling groups are regional constructs. They can span Availability Zones, but not AWS regions. /ec2/autoscaling/faqs/;How can I implement changes across multiple instances in an EC2 Auto Scaling group?;You can use AWS CodeDeploy or CloudFormation to orchestrate code changes to multiple instances in your EC2 Auto Scaling group. /ec2/autoscaling/faqs/;If I have data installed in an EC2 Auto Scaling group, and a new instance is dynamically created later, is the data copied over to the new instances?;Data is not automatically copied from existing instances to new instances. You can use lifecycle hooks to copy the data, or an Amazon RDS database including replicas. /ec2/autoscaling/faqs/;When I create an EC2 Auto Scaling group from an existing instance, does it create a new AMI (Amazon Machine Image)?;When you create an Auto Scaling group from an existing instance, it does not create a new AMI. For more information see Creating an Auto Scaling Group Using an EC2 Instance. /ec2/autoscaling/faqs/;How does Amazon EC2 Auto Scaling balance capacity?;Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate system availability. Amazon EC2 Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your EC2 Auto Scaling group settings. Amazon EC2 Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet. What’s more, Amazon EC2 Auto Scaling only launches into Availability Zones in which there is available capacity for the requested instance type. /ec2/autoscaling/faqs/;What are lifecycle hooks?;Lifecycle hooks let you take action before an instance goes into service or before it gets terminated. 
This can be especially useful if you are not baking your software environment into an Amazon Machine Image (AMI). For example, launch hooks can perform software configuration on an instance to ensure that it’s fully prepared to handle traffic before Amazon EC2 Auto Scaling proceeds to connect it to your load balancer. One way to do this is by connecting the launch hook to an AWS Lambda function that invokes RunCommand on the instance. Terminate hooks can be useful for collecting important data from an instance before it goes away. For example, you could use a terminate hook to preserve your fleet’s log files by copying them to an Amazon S3 bucket when instances go out of service. /ec2/autoscaling/faqs/;Can I customize a health check?;Yes, there is an API called SetInstanceHealth that allows you to change an instance's state to UNHEALTHY, which will then result in a termination and replacement. /ec2/autoscaling/faqs/;Can I suspend health checks (for example, to evaluate unhealthy instances)?;Yes, you can temporarily suspend Amazon EC2 Auto Scaling health checks by using the SuspendProcesses API. You can use the ResumeProcesses API to resume automatic health checks. /ec2/autoscaling/faqs/;Which health check type should I select?;If you are using Elastic Load Balancing (ELB) with your group, you should select an ELB health check. If you’re not using ELB with your group, you should select the EC2 health check. /ec2/autoscaling/faqs/;Is there any way to use Amazon EC2 Auto Scaling to only add a volume without adding an instance?;A volume is attached to a new instance when it is added. Amazon EC2 Auto Scaling doesn't automatically add a volume when the existing one is approaching capacity. You can use the EC2 API to add a volume to an existing instance. /ec2/autoscaling/faqs/;What does the term “stateful instances” refer to?;When we refer to a stateful instance, we mean an instance that has data on it, which exists only on that instance. In general, terminating a stateful instance means that the data (or state information) on the instance is lost. You may want to consider using lifecycle hooks to copy the data off of a stateful instance before it’s terminated, or enable instance protection to prevent Amazon EC2 Auto Scaling from terminating it. /ec2/autoscaling/faqs/;How does Amazon EC2 Auto Scaling replace an impaired instance?;When an impaired instance fails a health check, Amazon EC2 Auto Scaling automatically terminates it and replaces it with a new one. If you’re using an Elastic Load Balancing load balancer, Amazon EC2 Auto Scaling gracefully detaches the impaired instance from the load balancer before provisioning a new one and attaching it to the load balancer. This is all done automatically, so you don’t need to respond manually when an instance needs replacing. /ec2/autoscaling/faqs/;How do I control which instances Amazon EC2 Auto Scaling terminates when scaling in, and how do I protect data on an instance?;With each Amazon EC2 Auto Scaling group, you control when Amazon EC2 Auto Scaling adds instances (referred to as scaling out) or remove instances (referred to as scaling in) from your group. You can scale the size of your group manually by attaching and detaching instances, or you can automate the process through the use of a scaling policy. When you have Amazon EC2 Auto Scaling automatically scale in, you must decide which instances Amazon EC2 Auto Scaling should terminate first. You can configure this through the use of a termination policy. 
You can also use instance protection to prevent Amazon EC2 Auto Scaling from selecting specific instances for termination when scaling in. If you have data on an instance, and you need that data to be persistent even if your instance is scaled in, then you can use a service like S3, RDS, or DynamoDB to make sure that it is stored off the instance. /ec2/autoscaling/faqs/;How long is the turn-around time for Amazon EC2 Auto Scaling to spin up a new instance in the InService state after detecting an unhealthy server?;The turnaround time is within minutes. The majority of replacements happen in less than 5 minutes, and on average significantly less than 5 minutes. It depends on a variety of factors, including how long it takes to boot up the AMI of your instance. /ec2/autoscaling/faqs/;If Elastic Load Balancing (ELB) determines that an instance is unhealthy and moves it offline, will the previous requests sent to the failed instance be queued and rerouted to other instances within the group?;When ELB notices that the instance is unhealthy, it will stop routing requests to it. However, prior to discovering that the instance is unhealthy, some requests to that instance will fail. /ec2/autoscaling/faqs/;If you don't use Elastic Load Balancing (ELB), how would users be directed to the other servers in a group if there was a failure?;You can integrate with Route 53 (which Amazon EC2 Auto Scaling does not currently support out of the box, but many customers use). You can also use your own reverse proxy or, for internal microservices, use service discovery solutions. /ec2/autoscaling/faqs/;How do I control access to Amazon EC2 Auto Scaling resources?;Amazon EC2 Auto Scaling integrates with AWS Identity and Access Management (IAM), a service that enables you to securely control access to AWS services and resources for your users. /ec2/autoscaling/faqs/;Can you define a default admin password on Windows instances with Amazon EC2 Auto Scaling?;You can use the Key Name parameter to CreateLaunchConfiguration to associate a key pair with your instance. You can then use the GetPasswordData API in EC2. This is also possible through the AWS Management Console. /ec2/autoscaling/faqs/;Are CloudWatch agents automatically installed on EC2 instances when you create an Amazon EC2 Auto Scaling group?;If your AMI contains a CloudWatch agent, it's automatically installed on EC2 instances when you create an EC2 Auto Scaling group. With the stock Amazon Linux AMI, you need to install it yourself (recommended: via yum). /ec2/autoscaling/faqs/;Can I create a single ASG to scale instances across different purchase options?;Yes. You can provision and automatically scale EC2 capacity across different EC2 instance types, Availability Zones, and On-Demand, Reserved Instance (RI), and Spot purchase options in a single Auto Scaling Group. You have the option to define the desired split between On-Demand and Spot capacity, select which instance types work for your application, and specify a preference for how EC2 Auto Scaling should distribute the ASG capacity within each purchasing model. /ec2/autoscaling/faqs/;What are the costs for using Amazon EC2 Auto Scaling?;Amazon EC2 Auto Scaling fleet management for EC2 instances carries no additional fees. The dynamic scaling capabilities of Amazon EC2 Auto Scaling are enabled by Amazon CloudWatch and also carry no additional fees. Amazon EC2 and Amazon CloudWatch service fees apply and are billed separately.
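The target tracking behavior described earlier in this section can also be set up programmatically. Below is a minimal, illustrative sketch using boto3; the group name "my-asg" and the 50% CPU target are placeholder assumptions, not values taken from this FAQ.

```python
# Minimal sketch: a target tracking policy that keeps average CPU at 50%.
# Assumes an existing Auto Scaling group named "my-asg" and configured AWS credentials.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",            # placeholder group name
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                  # EC2 Auto Scaling launches or terminates instances to hold this average
    },
)
```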
/ecr/faqs/;What is Amazon Elastic Container Registry (Amazon ECR)?;Amazon ECR is a fully managed container registry that makes it easy for developers to share and deploy container images and artifacts. Amazon ECR is integrated with Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and AWS Lambda, simplifying your development to production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to deploy containers for your applications reliably. Integration with AWS Identity and Access Management (IAM) provides resource-level control of each repository that lets you share images across your organization or with anyone in the world. /ecr/faqs/;Why should I use Amazon ECR?;Amazon ECR eliminates the need to operate and scale the infrastructure required to power your container registry. Amazon ECR uses Amazon Simple Storage Service (S3) for storage to make your container images highly available and accessible, allowing you to deploy new containers for your applications reliably. Amazon ECR transfers your container images over HTTPS and automatically encrypts your images at rest. You can configure policies to manage permissions for each repository and restrict access to IAM users, roles, or other AWS accounts. Amazon ECR integrates with Amazon ECS, Amazon EKS, AWS Fargate, AWS Lambda, and the Docker CLI, allowing you to simplify your development and production workflows. You can easily push your container images to Amazon ECR using the Docker CLI from your development machine, and Amazon container orchestrators or compute can pull them directly for production deployments. /ecr/faqs/;What is the pricing for Amazon ECR?;With Amazon ECR, there are no upfront fees or commitments. You pay only for the amount of data you store in your public or private repositories, and data transferred to the internet. Please see our Pricing page for more details. /ecr/faqs/;Is Amazon ECR a global service?;Amazon ECR is a Regional service and is designed to give you flexibility in how images are deployed. You have the ability to push/pull images to the same AWS Region where your Docker cluster runs for the best performance. You can also access Amazon ECR anywhere that Docker runs, such as desktops and on-premises environments. Pulling images between Regions or out to the internet will have additional latency and data transfer costs. /ecr/faqs/;Can Amazon ECR host public container images?;Yes. Amazon ECR has a highly available container registry and website that makes it easy for you to share or search for public container software. Anyone with or without an AWS account can use the Amazon ECR public gallery to search for and download commonly used container images such as operating systems, AWS published images, and files such as Helm charts for Kubernetes. /ecr/faqs/;What compliance capabilities can I enable on Amazon ECR?;You can use AWS CloudTrail on Amazon ECR to provide a history of all API actions such as who pulled an image and when tags were moved between images. Administrators can also find which EC2 instances pulled which images. /ecr/faqs/;How do I get started using Amazon ECR?;Amazon ECR provides a command line interface and APIs to create, monitor, and delete repositories and set repository permissions. You can perform the same actions in the "Repositories" section of the Amazon ECR console. Amazon ECR also integrates with the Docker CLI, allowing you to push, pull, and tag images on your development machine.
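As an illustration of the getting-started workflow above, here is a minimal sketch using boto3; the repository name "my-app" and the Region are placeholder assumptions, and the actual image push would still be done with the Docker CLI as described.

```python
# Minimal sketch: create a private ECR repository and obtain a Docker auth token.
# "my-app" and "us-east-1" are placeholder assumptions.
import base64
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

repo = ecr.create_repository(repositoryName="my-app")
print(repo["repository"]["repositoryUri"])    # push target for `docker push`

auth = ecr.get_authorization_token()
token = base64.b64decode(auth["authorizationData"][0]["authorizationToken"]).decode()
username, password = token.split(":")         # pass to `docker login` with the registry endpoint
```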
/ecr/faqs/;Can I access Amazon ECR inside a VPC?;Yes. You can set up AWS PrivateLink endpoints to allow your instances to pull images from your private repositories without traversing the public internet. /ecr/faqs/;How do I publicly share an image using Amazon ECR?;You publish an image to the Amazon ECR public gallery by signing into your AWS account and pushing to a public repository you create. You are assigned a unique alias per account to use in image URLs that identifies all public images that you publish. /ecr/faqs/;Can I use a custom alias for my public images?;Yes. You can request a custom alias such as your organization or project name, unless it's a reserved alias. Names that identify AWS services are reserved. Names that identify AWS Marketplace sellers may also be reserved. We will review and approve your custom alias request within a few days unless your alias request violates the AWS Acceptable Use Policy or other AWS policies. /ecr/faqs/;How do I pull a public image from Amazon ECR?;You pull using the familiar 'docker pull' command with the URL of the image. You can easily find this URL by searching for images by publisher alias, image name, or image description in the Amazon ECR public gallery. Image URLs are in the format public.ecr.aws/<alias>/<repository>:<tag>, for example public.ecr.aws/eks/aws-alb-ingress-controller:v1.1.5. /ecr/faqs/;Does Amazon ECR replicate images across AWS Regions?;Yes. Amazon ECR is designed to give you flexibility in where you store and how you deploy your images. You can create deployment pipelines that build images, push them to Amazon ECR in one Region, and Amazon ECR can automatically replicate them to other Regions and accounts for deployment to multi-Region clusters. /ecr/faqs/;Can I use Amazon ECR within local and on-premises environments?;Yes. You can access Amazon ECR anywhere that Docker runs, such as desktops and on-premises environments. /ecr/faqs/;Does the Amazon ECR public gallery provide AWS-published images?;Yes. Services such as Amazon EKS, Amazon SageMaker, and AWS Lambda publish their official public use container images and artifacts to Amazon ECR. /ecr/faqs/;Does Amazon ECR work with Amazon ECS?;Yes. Amazon ECR is integrated with Amazon ECS, allowing you to easily store, run, and manage container images for applications running on Amazon ECS. All you need to do is specify the Amazon ECR repository in your task definition and Amazon ECS will retrieve the appropriate images for your applications. /ecr/faqs/;Does Amazon ECR work with AWS Elastic Beanstalk?;Yes. AWS Elastic Beanstalk supports Amazon ECR for both single and multi-container Docker environments, allowing you to easily deploy container images stored in Amazon ECR with AWS Elastic Beanstalk. All you need to do is specify the Amazon ECR repository in your Dockerrun.aws.json configuration and attach the AmazonEC2ContainerRegistryReadOnly policy to your container instance role. /ecr/faqs/;What version of Docker Engine does Amazon ECR support?;Amazon ECR currently supports Docker Engine 1.7.0 and up. /ecr/faqs/;What version of the Docker Registry API does Amazon ECR support?;Amazon ECR supports the Docker Registry V2 API specification.
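The cross-Region replication described above is configured at the registry level; the following is a minimal sketch using boto3, where the destination Region "eu-west-1" is a placeholder assumption.

```python
# Minimal sketch: replicate images pushed in this registry to a second Region.
# The destination Region below is a placeholder assumption; replication stays in the same account here.
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")
account_id = boto3.client("sts").get_caller_identity()["Account"]

ecr.put_replication_configuration(
    replicationConfiguration={
        "rules": [
            {
                "destinations": [
                    {"region": "eu-west-1", "registryId": account_id}  # replicate to eu-west-1 in the same account
                ]
            }
        ]
    }
)
```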
/ecr/faqs/;Will Amazon ECR automatically build images from a Dockerfile?;No. However, Amazon ECR integrates with a number of popular CI/CD solutions to provide this capability. See the Amazon ECR Partners page for more information. /ecr/faqs/;Does Amazon ECR support federated access?;Yes. Amazon ECR is integrated with AWS Identity and Access Management (IAM), which supports identity federation for delegated access to the AWS Management Console or AWS APIs. /ecr/faqs/;What version of the Docker Image Manifest specification does Amazon ECR support?;Amazon ECR supports the Docker Image Manifest V2, Schema 2 format. In order to maintain backwards compatibility with Schema 1 images, Amazon ECR will continue to accept images uploaded in the Schema 1 format. Additionally, Amazon ECR can down-translate from a Schema 2 to a Schema 1 image when pulling with an older version of Docker Engine (1.9 and below). /ecr/faqs/;Does Amazon ECR support Open Container Initiative (OCI) images and artifacts?;Yes. Amazon ECR is compatible with the Open Container Initiative (OCI) image specification, letting you push and pull OCI images and artifacts. Amazon ECR can also translate between Docker Image Manifest V2, Schema 2 images and OCI images on pull. /ecr/faqs/;How does Amazon ECR help ensure that container images are secure?;You can enable Amazon ECR to automatically scan your container images for a broad range of operating system vulnerabilities. You can also scan images using an API command, and Amazon ECR will notify you over API and in the console when a scan completes. For enhanced image scanning, you can turn on Amazon Inspector. /ecr/faqs/;How can I use AWS Identity and Access Management (IAM) for permissions?;You can use IAM resource-based policies to control and monitor who and what (e.g., EC2 instances) can access your container images, as well as how, when, and where they can access them. To get started, use the AWS Management Console to create resource-based policies for your repositories. Alternatively, you can use sample policies and attach them to your repositories via the Amazon ECR CLI. /ecr/faqs/;Can I share my images across AWS accounts?;Yes. Here is an example of how to create and set a policy for cross-account image sharing. /ecs/faqs/;What is Amazon Elastic Container Service?;Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop container-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Elastic Load Balancing, Amazon Elastic Block Store (EBS) volumes, and Identity and Access Management (IAM) roles. You can use Amazon ECS to schedule container placement across your cluster based on your resource needs and availability requirements. You can also integrate your own scheduler or third-party schedulers to meet business or application specific requirements. /ecs/faqs/;Why should I use Amazon ECS?;Amazon ECS makes it easy to use containers as a building block for your applications by eliminating the need for you to install, operate, and scale your own cluster management infrastructure. Amazon ECS lets you schedule long-running applications, services, and batch processes using Docker containers. Amazon ECS maintains application availability and allows you to scale your containers up or down to meet your application's capacity requirements.
Amazon ECS is integrated with familiar features like Elastic Load Balancing, EBS volumes, Amazon Virtual Private Cloud (VPC), and IAM. Simple APIs let you integrate and use your own schedulers or connect Amazon ECS into your existing software delivery process. /ecs/faqs/;What is the pricing for Amazon ECS?;"There is no additional charge for Amazon ECS. You pay for AWS resources (e.g. Amazon EC2 instances or EBS volumes) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments." /ecs/faqs/;How is Amazon ECS different from AWS Elastic Beanstalk?;AWS Elastic Beanstalk is an application management platform that helps customers easily deploy and scale web applications and services. It keeps the building block provisioning (e.g., EC2, Amazon RDS, Elastic Load Balancing, AWS Auto Scaling, and Amazon CloudWatch), application deployment, and health monitoring abstracted from the user so they can focus on writing code. You simply specify which container images to deploy, the CPU and memory requirements, the port mappings, and the container links. /ecs/faqs/;How is Amazon ECS different from AWS Lambda?;Amazon ECS is a highly scalable Docker container management service that allows you to run and manage distributed applications that run in Docker containers. AWS Lambda is an event-driven task compute service that runs your code in response to "events" such as changes in data, website clicks, or messages from other AWS services without you having to manage any compute infrastructure. /ecs/faqs/;How do I get started using Amazon ECS?;Visit our Getting Started page for more information on how to start using Amazon ECS. /ecs/faqs/;Does Amazon ECS support any other container types?;No. Docker is the only container platform supported by Amazon ECS at this time. /ecs/faqs/;I want to launch containers. Why do I have to launch tasks?;Docker encourages you to split your applications up into their individual components, and Amazon ECS is optimized for this pattern. Tasks allow you to define a set of containers you would like to place together (as part of the same placement decision), their properties, and how they may be linked. Tasks include all the information Amazon ECS needs to make the placement decision. To launch a single container, your task definition should only include one container definition. /ecs/faqs/;Does Amazon ECS support applications and services?;Yes. The Amazon ECS service scheduler can manage long-running applications and services. The service scheduler helps you maintain application availability and allows you to scale your containers up or down to meet your application's capacity requirements. The service scheduler allows you to distribute traffic across your containers using Elastic Load Balancing (ELB). Amazon ECS will automatically register and deregister your containers from the associated load balancer. /ecs/faqs/;Does Amazon ECS support dynamic port mapping?;Yes. It is possible to associate an Amazon ECS service with an Application Load Balancer (ALB). The ALB supports a target group containing a set of instance ports. You can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler will automatically add the task to the Application Load Balancer's target group using this port. /ecs/faqs/;Does Amazon ECS support batch jobs?;Yes.
You can use Amazon ECS Run task to run one or more tasks once. Run task starts the task on an instance that meets the task's requirements including CPU, memory, and ports. /ecs/faqs/;Can I use my own scheduler with Amazon ECS?;ECS provides Blox, a collection of open-source projects for container management and orchestration. Blox makes it easy to consume events from Amazon ECS, store the cluster state locally, and query the local data store through APIs. Blox also includes a daemon scheduler that can be used as a reference for how to use the cluster state server. See the Blox GitHub page to learn more. /ecs/faqs/;Can I use my own Amazon Machine Image (AMI)?;Yes. You can use any AMI that meets the Amazon ECS AMI specification. We recommend starting from the Amazon ECS-enabled Amazon Linux AMI. Partner AMIs compatible with Amazon ECS are also available. You can review the Amazon ECS AMI specification in the documentation. /ecs/faqs/;How can I configure my container instances to pull from Amazon Elastic Container Registry?;Amazon ECR is integrated with Amazon ECS, allowing you to easily store, run, and manage container images for applications running on Amazon ECS. All you need to do is specify the Amazon ECR repository in your task definition and attach the AmazonEC2ContainerServiceforEC2Role to your instances. Then Amazon ECS will retrieve the appropriate images for your applications. /ecs/faqs/;How does Amazon ECS isolate containers belonging to different customers?;Amazon ECS schedules containers for execution on customer-controlled Amazon EC2 instances or with AWS Fargate and builds on the same isolation controls and compliance settings available for EC2 customers. Your compute instances are located in a Virtual Private Cloud (VPC) with an IP range that you specify. You decide which instances are exposed to the Internet and which remain private. /ecs/faqs/;Can I apply additional security configuration and isolation frameworks to my container instances?;Yes. As an Amazon EC2 customer, you have root access to the operating system (OS) of your container instances. You can take ownership of the OS security settings, as well as configure additional software components for security capabilities such as monitoring, patch management, log management, and host intrusion detection. /ecs/faqs/;Can I operate container instances with different security settings or segregate different tasks across different environments?;Yes. You can configure your different container instances using the tooling of your choice. Amazon ECS allows you to control the placement of tasks in different container instances through the construct of clusters and targeted launches. /ecs/faqs/;Does Amazon ECS support retrieving Docker images from a private or internal source?;Yes. Customers can configure their container instances to access a private Docker image registry within a VPC or a registry that's accessible outside a VPC such as Amazon Elastic Container Registry (ECR). /ecs/faqs/;How do I configure IAM roles for ECS tasks?;You first need to create an IAM role for your task, using the 'Amazon EC2 Container Service Task Role' service role and attaching a policy with the required permissions. When you create a new task definition or a task definition revision, you can then specify a role by selecting it from the 'Task Role' drop-down or using the 'taskRoleArn' field in the JSON format.
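As a sketch of the task-role configuration just described, the following boto3 call registers a task definition that references a task IAM role via taskRoleArn; the family name, container image, and role ARN are placeholder assumptions.

```python
# Minimal sketch: register a task definition that references a task IAM role via taskRoleArn.
# The family name, image, and role ARN below are placeholder assumptions.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-app",
    taskRoleArn="arn:aws:iam::123456789012:role/my-ecs-task-role",  # role assumed by containers in the task
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
            "memory": 512,
            "essential": True,
        }
    ],
)
```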
/ecs/faqs/;With which compliance programs does Amazon ECS conform?;Amazon ECS meets the standards for PCI DSS Level 1, ISO 9001, ISO 27001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, and HIPAA eligibility. For more information, visit our compliance pages. /ecs/faqs/;Can I use Amazon ECS for US Government-regulated workloads or processing sensitive Controlled Unclassified Information (CUI)?;Yes. By using the AWS GovCloud (US) region, containers and clusters managed by Amazon ECS can meet the requirements to sensitive data and regulated workloads with your containers. For more information, visit our page on AWS GovCloud. /ecs/faqs/;What does the Amazon ECS SLA guarantee?;Our Compute SLA guarantees a Monthly Uptime Percentage of at least 99.99% for Amazon ECS. /ecs/faqs/;How do I know if I qualify for a SLA Service Credit?;You are eligible for an SLA credit for Amazon ECS under the Compute SLA if more than one Availability Zone in which you are running a task, within the same region has a Monthly Uptime Percentage of less than 99.99% during any monthly billing cycle. /ec2/faqs/;What is Amazon Elastic Compute Cloud (Amazon EC2)?;Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. /ec2/faqs/;What can I do with Amazon EC2?;Just as Amazon Simple Storage Service (Amazon S3) enables storage in the cloud, Amazon EC2 enables “compute” in the cloud. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. /ec2/faqs/;How can I get started with Amazon EC2?;"To sign up for Amazon EC2, click the “Sign up for This Web Service” button on the Amazon EC2 detail page. You must have an AWS account to access this service; if you do not already have one, you will be prompted to create one when you begin the Amazon EC2 sign-up process. After signing up, please refer to the Amazon EC2 documentation, which includes our Getting Started Guide." /ec2/faqs/;Why am I asked to verify my phone number when signing up for Amazon EC2?;Amazon EC2 registration requires you to have a valid phone number and email address on file with AWS in case we ever need to contact you. Verifying your phone number takes only a couple of minutes and involves receiving a phone call during the registration process and entering a PIN number using the phone key pad. /ec2/faqs/;What can developers now do that they could not before?;Until now, small developers did not have the capital to acquire massive compute resources and ensure they had the capacity they needed to handle unexpected spikes in load. Amazon EC2 enables any developer to leverage Amazon’s own benefits of massive scale with no upfront investment or performance compromises. Developers are now free to innovate knowing that no matter how successful their businesses become, it will be inexpensive and simple to ensure they have the compute capacity they need to meet their business requirements. 
/ec2/faqs/;How do I run systems in the Amazon EC2 environment?;Once you have set up your account and selected or created your AMIs, you are ready to boot your instance. You can start your AMI on any number of On-Demand instances by using the RunInstances API call. You simply need to indicate how many instances you wish to launch. If you wish to run more than your On-Demand quota, complete the Amazon EC2 instance request form. /ec2/faqs/;What is the difference between using the local instance store and Amazon Elastic Block Store (Amazon EBS) for the root device?;When you launch your Amazon EC2 instances you have the ability to store your root device data on Amazon EBS or the local instance store. By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again. /ec2/faqs/;How quickly will systems be running?;It typically takes less than 10 minutes from the issue of the RunInstances call to the point where all requested instances begin their boot sequences. This time depends on a number of factors, including the size of your AMI, the number of instances you are launching, and how recently you have launched that AMI. Images launched for the first time may take slightly longer to boot. /ec2/faqs/;How do I load and store my systems with Amazon EC2?;Amazon EC2 allows you to set up and configure everything about your instances from your operating system up to your applications. An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. Your AMIs are your unit of deployment. You might have just one AMI or you might compose your system out of several building block AMIs (e.g., webservers, appservers, and databases). Amazon EC2 provides a number of tools to make creating an AMI easy. Once you create a custom AMI, you will need to bundle it. If you are bundling an image with a root device backed by Amazon EBS, you can simply use the bundle command in the AWS Management Console. If you are bundling an image with a boot partition on the instance store, then you will need to use the AMI Tools to upload it to Amazon S3. Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable, scalable storage of your AMIs so that we can boot them when you ask us to do so. /ec2/faqs/;How do I access my systems?;The RunInstances call that initiates execution of your application stack will return a set of DNS names, one for each system that is being booted. This name can be used to access the system exactly as you would if it were in your own data center. You own that machine while your operating system stack is executing on it. /ec2/faqs/;Is Amazon EC2 used in conjunction with Amazon S3?;Yes, Amazon EC2 is used jointly with Amazon S3 for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. In order to execute systems in the Amazon EC2 environment, developers use the tools provided to load their AMIs into Amazon S3 and to move them between Amazon S3 and Amazon EC2. See How do I load and store my systems with Amazon EC2? for more information about AMIs.
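To illustrate the RunInstances call described above, here is a minimal boto3 sketch; the AMI ID, instance type, and key pair name are placeholder assumptions.

```python
# Minimal sketch of the RunInstances call described above.
# The AMI ID, instance type, and key pair name are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # your AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=3,                        # launch up to three instances of this AMI
    KeyName="my-key-pair",
)
for instance in response["Instances"]:
    # The public DNS name may still be assigning immediately after launch.
    print(instance["InstanceId"], instance.get("PublicDnsName", "pending"))
```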
/ec2/faqs/;;You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. New AWS accounts may start with limits that are lower than the limits described here. /ec2/faqs/;Are there any limitations in sending email from Amazon EC2 instances?;Yes. In order to maintain the quality of Amazon EC2 addresses for sending email, we enforce default limits on the amount of email that can be sent from EC2 accounts. If you wish to send larger amounts of email from EC2, you can apply to have these limits removed from your account by filling out this form. /ec2/faqs/;How quickly can I scale my capacity both up and down?;Amazon EC2 provides a truly elastic computing environment. Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously. When you need more instances, you simply call RunInstances, and Amazon EC2 will typically set up your new instances in a matter of minutes. Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs. /ec2/faqs/;What operating system environments are supported?;Amazon EC2 currently supports a variety of operating systems including: Amazon Linux, Ubuntu, Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, openSUSE Leap, Fedora, Fedora CoreOS, Debian, CentOS, Gentoo Linux, Oracle Linux, and FreeBSD. We are looking for ways to expand it to other platforms. /ec2/faqs/;Does Amazon EC2 use ECC memory?;In our experience, ECC memory is necessary for server infrastructure, and all the hardware underlying Amazon EC2 uses ECC memory. /ec2/faqs/;How is this service different than a plain hosting service?;Traditional hosting services generally provide a pre-configured resource for a fixed amount of time and at a predetermined cost. Amazon EC2 differs fundamentally in the flexibility, control and significant cost savings it offers developers, allowing them to treat Amazon EC2 as their own personal data center with the benefit of Amazon.com’s robust infrastructure. /ec2/faqs/;What is changing?;Amazon EC2 is transitioning On-Demand Instance limits from the current instance count-based limits to the new vCPU-based limits to simplify the limit management experience for AWS customers. Usage toward the vCPU-based limit is measured in terms of number of vCPUs (virtual central processing units) for the Amazon EC2 Instance Types to launch any combination of instance types that meet your application needs. /ec2/faqs/;What are vCPU-based limits?;"You are limited to running one or more On-Demand Instances in an AWS account, and Amazon EC2 measures usage towards each limit based on the total number of vCPUs (virtual central processing unit) that are assigned to the running On-Demand instances in your AWS account. The following table shows the number of vCPUs for each instance size. The vCPU mapping for some instance types may differ; see Amazon EC2 Instance Types for details." /ec2/faqs/;How many On-Demand instances can I run in Amazon EC2?;"There are five vCPU-based instance limits; each defines the amount of capacity you can use of a given instance family. All usage of instances in a given family, regardless of generation, size, or configuration variant (e.g. 
disk, processor type), will accrue towards the family’s total vCPU limit, listed in the table below. New AWS accounts may start with limits that are lower than the limits described here." /ec2/faqs/;Are these On-Demand Instance vCPU-based limits regional?;Yes, the On-Demand Instance limits for an AWS account are set on a per-region basis. /ec2/faqs/;Will these limits change over time?;Yes, limits can change over time. Amazon EC2 is constantly monitoring your usage within each region and your limits are raised automatically based on your use of EC2. /ec2/faqs/;How can I request a limit increase?;Even though EC2 automatically increases your On-Demand Instance limits based on your usage, if needed you can request a limit increase from the Limits Page on Amazon EC2 console, the Amazon EC2 service page on the Service Quotas console, or the Service Quotas API/CLI. /ec2/faqs/;How can I calculate my new vCPU limit?;You can find the vCPU mapping for each of the Amazon EC2 Instance Types or use the simplified vCPU Calculator to compute the total vCPU limit requirements for your AWS account. /ec2/faqs/;Do vCPU limits apply when purchasing Reserved Instances or requesting Spot Instances?;No, the vCPU-based limits only apply to running On-Demand instances and Spot Instances. /ec2/faqs/;How can I view my current On-Demand Instance limits?;You can find your current On-Demand Instance limits on the EC2 Service Limits page in the Amazon EC2 console, or from the Service Quotas console and APIs. /ec2/faqs/;Will this affect running instances?;No, opting into vCPU-based limits will not affect any running instances. /ec2/faqs/;Can I still launch the same number of instances?;Yes, the vCPU-based instance limits allow you to launch at least the same number of instances as count-based instance limits. /ec2/faqs/;Will I be able to view instance usage against these limits?;With the Amazon CloudWatch metrics integration, you can view EC2 usage against limits in the Service Quotas console. Service Quotas also enables customers to use CloudWatch for configuring alarms to warn customers of approaching limits. In addition, you can continue to track and inspect your instance usage in Trusted Advisor and Limit Monitor. /ec2/faqs/;Will I still be able to use the DescribeAccountAttributes API?;With the vCPU limits, we no longer have total instance limits governing the usage. Hence the DescribeAccountAttributes API will no longer return the max-instances value. Instead you can now use the Service Quotas APIs to retrieve information about EC2 limits. You can find more information about the Service Quotas APIs in the AWS documentation. /ec2/faqs/;Will the vCPU limits have an impact on my monthly bill?;No. EC2 usage is still calculated either by the hour or the second, depending on which AMI you're running and the instance type and size you’ve launched. /ec2/faqs/;Will vCPU limits be available in all Regions?;vCPU-based instance limits are available in all commercial AWS Regions. /ec2/faqs/;What is changing?;Starting Jan-27 2020, Amazon Elastic Compute Cloud (EC2) will begin rolling out a change to restrict email traffic over port 25 by default to protect customers and other recipients from spam and email abuse. Port 25 is typically used as the default SMTP port to send emails. AWS accounts that have requested and had Port 25 throttles removed in the past will not be impacted by this change. /ec2/faqs/;I have a valid use-case for sending emails to port 25 from EC2. 
How can I have these port 25 restrictions removed?;If you have a valid use-case for sending emails to port 25 (SMTP) from EC2, please submit a Request to Remove Email Sending Limitations to have these restrictions lifted. You can alternatively send emails using a different port, or leverage an existing authenticated email relay service such as Amazon Simple Email Service (SES). /ec2/faqs/;What does your Amazon EC2 Service Level Agreement guarantee?;Our SLA guarantees a Monthly Uptime Percentage of at least 99.99% for Amazon EC2 and Amazon EBS within a Region. /ec2/faqs/;How do I know if I qualify for an SLA Service Credit?;You are eligible for an SLA credit for either Amazon EC2 or Amazon EBS (whichever was Unavailable, or both if both were Unavailable) if the Region that you are operating in has a Monthly Uptime Percentage of less than 99.99% during any monthly billing cycle. For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see http://aws.amazon.com/ec2/sla/ /ec2/faqs/;What are Accelerated Computing instances?;The Accelerated Computing instance category includes instance families that use hardware accelerators, or co-processors, to perform some functions, such as floating-point number calculation and graphics processing, more efficiently than is possible in software running on CPUs. Amazon EC2 provides three types of Accelerated Computing instances: GPU compute instances for general-purpose computing, GPU graphics instances for graphics intensive applications, and FPGA programmable hardware compute instances for advanced scientific workloads. /ec2/faqs/;When should I use GPU Graphics and Compute instances?;GPU instances work best for applications with massive parallelism such as workloads using thousands of threads. Graphics processing is an example with huge computational requirements, where each of the tasks is relatively small, the set of operations performed form a pipeline, and the throughput of this pipeline is more important than the latency of the individual operations. To be able to build applications that exploit this level of parallelism, one needs GPU device specific knowledge by understanding how to program against various graphics APIs (DirectX, OpenGL) or GPU compute programming models (CUDA, OpenCL). /ec2/faqs/;What applications can benefit from P4d?;Some of the applications that we expect customers to use P4d for are machine learning (ML) workloads like natural language understanding, perception model training for autonomous vehicles, image classification, object detection and recommendation engines. The increased GPU performance can significantly reduce the time to train and the additional GPU memory will help customers train larger, more complex models. HPC customers can use P4's increased processing performance and GPU memory for seismic analysis, drug discovery, DNA sequencing, and insurance risk modeling. /ec2/faqs/;How do P4d instances compare to P3 instances?;P4 instances feature NVIDIA's latest generation A100 Tensor Core GPUs to provide on average a 2.5X increase in TFLOP performance over the previous generation V100 along with 2.5X the GPU memory. P4 instances feature Cascade Lake Intel CPUs with 24 cores per socket and an additional instruction set for vector neural network instructions. P4 instances will have 1.5X the total system memory and 4X the networking throughput of P3dn or 16x compared to P3.16xl.
Another key difference is that the NVSwitch GPU interconnect throughput will double what was possible on P3, so each GPU can communicate with every other GPU at the same 600GB/s bidirectional throughput and with single-hop latency. This allows application development to consider multiple GPUs and memories as a single large GPU and a unified pool of memory. P4d instances are also deployed in tightly coupled hyperscale clusters, called EC2 UltraClusters, that enable you to run the most complex multi-node ML training and HPC applications. /ec2/faqs/;What are EC2 UltraClusters and how can I get access?;P4d instances are deployed in hyperscale clusters called EC2 UltraClusters. Each EC2 UltraCluster is composed of more than 4,000 NVIDIA A100 Tensor Core GPUs, Petabit-scale networking, and scalable low latency storage with FSx for Lustre. Each EC2 UltraCluster is one of the world's top supercomputers. Anyone can easily spin up P4d instances in EC2 UltraClusters. For additional help, contact us. /ec2/faqs/;Will AMIs I used on P3 and P3dn work on P4?;The P4 AMIs will need new NVIDIA drivers for the A100 GPUs and a newer version of the ENA driver installed. P4 instances are powered by the Nitro System and require AMIs with NVMe and ENA drivers installed. P4 also comes with new Intel Cascade Lake CPUs, which come with an updated instruction set, so we recommend using the latest distributions of ML frameworks, which take advantage of these new instruction sets for data pre-processing. /ec2/faqs/;How are P3 instances different from G3 instances?;P3 instances are the next generation of EC2 general-purpose GPU computing instances, powered by up to 8 of the latest-generation NVIDIA Tesla V100 GPUs. These new instances significantly improve performance and scalability, and add many new features, including new Streaming Multiprocessor (SM) architecture for machine learning (ML)/deep learning (DL) performance optimization, second-generation NVIDIA NVLink high-speed GPU interconnect, and highly tuned HBM2 memory for higher efficiency. /ec2/faqs/;What are the benefits of NVIDIA Volta GV100 GPUs?;The new NVIDIA Tesla V100 accelerator incorporates the powerful new Volta GV100 GPU. GV100 not only builds upon the advances of its predecessor, the Pascal GP100 GPU, it also significantly improves performance and scalability, and adds many new features that improve programmability. These advances will supercharge HPC, data center, supercomputer, and deep learning systems and applications. /ec2/faqs/;Who will benefit from P3 instances?;P3 instances with their high computational performance will benefit users in artificial intelligence (AI), machine learning (ML), deep learning (DL) and high performance computing (HPC) applications. Users include data scientists, data architects, data analysts, scientific researchers, ML engineers, IT managers and software developers. Key industries include transportation, energy/oil & gas, financial services (banking, insurance), healthcare, pharmaceutical, sciences, IT, retail, manufacturing, high-tech, government, and academia, among many others.
/ec2/faqs/;What are some key use cases of P3 instances?;P3 instances use GPUs to accelerate numerous deep learning systems and applications including autonomous vehicle platforms, speech, image, and text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, and personalized user recommendations, to name just a few. /ec2/faqs/;Why should customers use GPU-powered Amazon P3 instances for AI/ML and HPC?;GPU-based compute instances provide greater throughput and performance because they are designed for massively parallel processing using thousands of specialized cores per GPU, versus CPUs offering sequential processing with a few cores. In addition, developers have built hundreds of GPU-optimized scientific HPC applications such as quantum chemistry, molecular dynamics, and meteorology, among many others. Research indicates that over 70% of the most popular HPC applications provide built-in support for GPUs. /ec2/faqs/;Will P3 instances support EC2 Classic networking and Amazon VPC?;P3 instances will support VPC only. /ec2/faqs/;How are G3 instances different from P2 instances?;G3 instances use NVIDIA Tesla M60 GPUs and provide a high-performance platform for graphics applications using DirectX or OpenGL. NVIDIA Tesla M60 GPUs support NVIDIA GRID Virtual Workstation features, and H.265 (HEVC) hardware encoding. Each M60 GPU in G3 instances supports 4 monitors with resolutions up to 4096x2160, and is licensed to use NVIDIA GRID Virtual Workstation for one Concurrent Connected User. Example applications of G3 instances include 3D visualizations, graphics-intensive remote workstation, 3D rendering, application streaming, video encoding, and other server-side graphics workloads. /ec2/faqs/;How are P3 instances different from P2 instances?;P3 Instances are the next-generation of EC2 general-purpose GPU computing instances, powered by up to 8 of the latest-generation NVIDIA Volta GV100 GPUs. These new instances significantly improve performance and scalability and add many new features, including new Streaming Multiprocessor (SM) architecture, optimized for machine learning (ML)/deep learning (DL) performance, second-generation NVIDIA NVLink high-speed GPU interconnect, and highly tuned HBM2 memory for higher-efficiency. /ec2/faqs/;What APIs and programming models are supported by GPU Graphics and Compute instances?;P3 instances support CUDA 9 and OpenCL, P2 instances support CUDA 8 and OpenCL 1.2 and G3 instances support DirectX 12, OpenGL 4.5, CUDA 8, and OpenCL 1.2. /ec2/faqs/;Where do I get NVIDIA drivers for P3 and G3 instances?;There are two methods by which NVIDIA drivers may be obtained. There are listings on the AWS Marketplace that offer Amazon Linux AMIs and Windows Server AMIs with the NVIDIA drivers pre-installed. You may also launch 64-bit, HVM AMIs and install the drivers yourself. You must visit the NVIDIA driver website and search for the NVIDIA Tesla V100 for P3, NVIDIA Tesla K80 for P2, and NVIDIA Tesla M60 for G3 instances. /ec2/faqs/;Which AMIs can I use with P3, P2 and G3 instances?;You can currently use Windows Server, SUSE Enterprise Linux, Ubuntu, and Amazon Linux AMIs on P2 and G3 instances. P3 instances only support HVM AMIs. If you want to launch AMIs with operating systems not listed here, contact AWS Customer Support with your request or reach out through EC2 Forums. 
/ec2/faqs/;Does the use of G2 and G3 instances require third-party licenses?;Aside from the NVIDIA drivers and GRID SDK, the use of G2 and G3 instances does not necessarily require any third-party licenses. However, you are responsible for determining whether your content or technology used on G2 and G3 instances requires any additional licensing. For example, if you are streaming content you may need licenses for some or all of that content. If you are using third-party technology such as operating systems, audio and/or video encoders, and decoders from Microsoft, Thomson, Fraunhofer IIS, Sisvel S.p.A., MPEG-LA, and Coding Technologies, please consult these providers to determine if a license is required. For example, if you leverage the on-board h.264 video encoder on the NVIDIA GRID GPU you should reach out to MPEG-LA for guidance, and if you use mp3 technology you should contact Thomson for guidance. /ec2/faqs/;Why am I not getting NVIDIA GRID features on G3 instances using the driver downloaded from the NVIDIA website?;The NVIDIA Tesla M60 GPU used in G3 instances requires a special NVIDIA GRID driver to enable all advanced graphics features, and 4 monitors support with resolution up to 4096x2160. You need to use an AMI with NVIDIA GRID driver pre-installed, or download and install the NVIDIA GRID driver following the AWS documentation. /ec2/faqs/;Why am I unable to see the GPU when using Microsoft Remote Desktop?;When using Remote Desktop, GPUs using the WDDM driver model are replaced with a non-accelerated Remote Desktop display driver. In order to access your GPU hardware, you need to utilize a different remote access tool, such as VNC. /ec2/faqs/;What is Amazon EC2 F1?;Amazon EC2 F1 is a compute instance with programmable hardware you can use for application acceleration. The new F1 instance type provides a high performance, easy to access FPGA for developing and deploying custom hardware accelerations. /ec2/faqs/;What are FPGAs and why do I need them?;FPGAs are programmable integrated circuits that you can configure using software. By using FPGAs you can accelerate your applications up to 30x when compared with servers that use CPUs alone. And, FPGAs are reprogrammable, so you get the flexibility to update and optimize your hardware acceleration without having to redesign the hardware. /ec2/faqs/;How does F1 compare with traditional FPGA solutions?;F1 is an AWS instance with programmable hardware for application acceleration. With F1, you have access to FPGA hardware in a few simple clicks, reducing the time and cost of full-cycle FPGA development and scale deployment from months or years to days. While FPGA technology has been available for decades, adoption of application acceleration has struggled to be successful in both the development of accelerators and the business model of selling custom hardware for traditional enterprises, due to time and cost in development infrastructure, hardware design, and at-scale deployment. With this offering, customers avoid the undifferentiated heavy lifting associated with developing FPGAs in on-premises data centers. /ec2/faqs/;What is an Amazon FPGA Image (AFI)?;The design that you create to program your FPGA is called an Amazon FPGA Image (AFI). AWS provides a service to register, manage, copy, query, and delete AFIs. After an AFI is created, it can be loaded on a running F1 instance. You can load multiple AFIs to the same F1 instance, and can switch between AFIs in runtime without reboot. 
This lets you quickly test and run multiple hardware accelerations in rapid sequence. You can also offer to other customers on the AWS Marketplace a combination of your FPGA acceleration and an AMI with custom software or AFI drivers. /ec2/faqs/;How do I list my hardware acceleration on the AWS Marketplace?;You would develop your AFI and the software drivers/tools to use this AFI. You would then package these software tools/drivers into an Amazon Machine Image (AMI) in an encrypted format. AWS manages all AFIs in the encrypted format you provide to maintain the security of your code. To sell a product in the AWS Marketplace, you or your company must sign up to be an AWS Marketplace reseller, you would then submit your AMI ID and the AFI ID(s) intended to be packaged in a single product. AWS Marketplace will take care of cloning the AMI and AFI(s) to create a product, and associate a product code to these artifacts, such that any end-user subscribing to this product code would have access to this AMI and the AFI(s). /ec2/faqs/;What is available with F1 instances?;For developers, AWS is providing a Hardware Development Kit (HDK) to help accelerate development cycles, a FPGA Developer AMI for development in the cloud, an SDK for AMIs running the F1 instance, and a set of APIs to register, manage, copy, query, and delete AFIs. Both developers and customers have access to the AWS Marketplace where AFIs can be listed and purchased for use in application accelerations. /ec2/faqs/;Do I need to be an FPGA expert to use an F1 instance?;AWS customers subscribing to an F1-optimized AMI from AWS Marketplace do not need to know anything about FPGAs to take advantage of the accelerations provided by the F1 instance and the AWS Marketplace. Simply subscribe to an F1-optimized AMI from the AWS Marketplace with an acceleration that matches the workload. The AMI contains all the software necessary for using the FPGA acceleration. Customers need only write software to the specific API for that accelerator and start using the accelerator. /ec2/faqs/;"I’m a FPGA developer; how do I get started with F1 instances?";Developers can get started on the F1 instance by creating an AWS account and downloading the AWS Hardware Development Kit (HDK). The HDK includes documentation on F1, internal FPGA interfaces, and compiler scripts for generating AFI. Developers can start writing their FPGA code to the documented interfaces included in the HDK to create their acceleration function. Developers can launch AWS instances with the FPGA Developer AMI. This AMI includes the development tools needed to compile and simulate the FPGA code. The Developer AMI is best run on the latest C5, M5, or R4 instances. Developers should have experience in the programming languages used for creating FPGA code (i.e. Verilog or VHDL) and an understanding of the operation they wish to accelerate. /ec2/faqs/;"I’m not an FPGA developer; how do I get started with F1 instances?";Customers can get started with F1 instances by selecting an accelerator from the AWS Marketplace, provided by AWS Marketplace sellers, and launching an F1 instance with that AMI. The AMI includes all of the software and APIs for that accelerator. AWS manages programming the FPGA with the AFI for that accelerator. Customers do not need any FPGA experience or knowledge to use these accelerators. They can work completely at the software API level for that accelerator. /ec2/faqs/;Does AWS provide a developer kit?;Yes. 
The Hardware Development Kit (HDK) includes simulation tools and simulation models for developers to simulate, debug, build, and register their acceleration code. The HDK includes code samples, compile scripts, debug interfaces, and many other tools you will need to develop the FPGA code for your F1 instances. You can use the HDK either in an AWS-provided AMI, or in your on-premises development environment. These models and scripts are available publicly with an AWS account. /ec2/faqs/;Can I use the HDK in my on-premises development environment?;Yes. You can use the Hardware Development Kit (HDK) either in an AWS-provided AMI, or in your on-premises development environment. /ec2/faqs/;Can I add an FPGA to any EC2 instance type?;No. F1 instances come in three instance sizes: f1.2xlarge, f1.4xlarge, and f1.16xlarge. /ec2/faqs/;How do I use the Inferentia chip in Inf1 instances?;You can start your workflow by building and training your model in one of the popular ML frameworks such as TensorFlow, PyTorch, or MXNet using GPU instances such as P4, P3, or P3dn. Once the model is trained to your required accuracy, you can use the ML framework’s API to invoke Neuron, a software development kit for Inferentia, to compile the model for execution on Inferentia chips, load it into Inferentia’s memory, and then execute inference calls. In order to get started quickly, you can use AWS Deep Learning AMIs that come pre-installed with ML frameworks and the Neuron SDK. For a fully managed experience you will be able to use Amazon SageMaker, which will enable you to seamlessly deploy your trained models on Inf1 instances. /ec2/faqs/;When would I use Inf1 vs. C6i or C5 vs. G4 instances for inference?;Customers running machine learning models that are sensitive to inference latency and throughput can use Inf1 instances for high-performance, cost-effective inference. For those ML models that are less sensitive to inference latency and throughput, customers can use EC2 C6i or C5 instances and utilize the AVX-512/VNNI instruction set. For ML models that require access to NVIDIA’s CUDA, cuDNN, or TensorRT libraries, we recommend using G4 instances. /ec2/faqs/;When should I choose Elastic Inference (EI) for inference vs Amazon EC2 Inf1 instances?;There are two cases where developers would choose EI over Inf1 instances: (1) if you need different CPU and memory sizes than what Inf1 offers, then you can use EI to attach acceleration to the EC2 instance with the right mix of CPU and memory for your application; (2) if your performance requirements are significantly lower than what the smallest Inf1 instance provides, then using EI could be a more cost-effective choice. For example, if you only need 5 TOPS, enough for processing up to 6 concurrent video streams, then using the smallest slice of EI with a C5.large instance could be up to 50% cheaper than using the smallest size of an Inf1 instance. /ec2/faqs/;What ML model types and operators are supported by EC2 Inf1 instances using the Inferentia chip?;Inferentia chips support commonly used machine learning models such as single shot detector (SSD) and ResNet for image recognition/classification, and Transformer and BERT for natural language processing and translation, among many others. A list of supported operators can be found on GitHub. /ec2/faqs/;How do I take advantage of AWS Inferentia’s NeuronCore Pipeline capability to lower latency?;Inf1 instances with multiple Inferentia chips, such as Inf1.6xlarge or Inf1.24xlarge, support a fast chip-to-chip interconnect. 
Using the Neuron Processing Pipeline capability, you can split your model and load it to local cache memory across multiple chips. The Neuron compiler uses ahead-of-time (AOT) compilation technique to analyze the input model and compile it to fit across the on-chip memory of single or multiple Inferentia chips. Doing so enables the Neuron Cores to have high-speed access to models and not require access to off-chip memory, keeping latency bounded while increasing the overall inference throughput. /ec2/faqs/;What is the difference between AWS Neuron and Amazon SageMaker Neo?;AWS Neuron is a specialized SDK for AWS Inferentia chips that optimizes the machine learning inference performance of Inferentia chips. It consists of a compiler, run-time, and profiling tools for AWS Inferentia and is required to run inference workloads on EC2 Inf1 instances. On the other hand, Amazon SageMaker Neo is a hardware agnostic service that consists of a compiler and run-time that enables developers to train machine learning models once, and run them on many different hardware platforms. /ec2/faqs/;How do I use the Trainium chips in Trn1 instances?;The Trainium software stack, AWS Neuron SDK, integrates with leading ML frameworks, such as PyTorch and TensorFlow, so you can get started with minimal code changes. To get started quickly, you can use AWS Deep Learning AMIs and AWS Deep Learning Containers, which come preconfigured with AWS Neuron. If you are using containerized applications, you can deploy AWS Neuron by using Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), or your preferred native container engine. AWS Neuron also supports Amazon SageMaker, which you can use to build, train, and deploy machine learning models. /ec2/faqs/;Where can I deploy deep learning models trained on Trn1?;You can deploy deep learning models trained on Trn1 instances on any other Amazon EC2 instance that supports deep learning use cases, including instances based on CPUs, GPUs, or other accelerators. You can also deploy models trained on Trn1 instances outside of AWS, such as on-premises data centers or in embedded devices at the edge. For example, you can train your models on Trn1 instances and deploy them on Inf1 instances, G5 instances, G4 instances, or compute devices at the edge. /ec2/faqs/;When would I use Trn1 instances over GPU-based instances for training ML models?;Trn1 instances are a good fit for your natural language processing (NLP), large language model (LLM), and computer vision (CV) model training use cases. Trn1 instances focus on accelerating model training to deliver high performance while also lowering your model training costs. If you have ML models that need third-party proprietary libraries or languages, for example NVIDIA CUDA, CUDA Deep Neural Network (cuDNN), or TensorRT libraries, we recommend using the NVIDIA GPU-based instances (P4, P3). /ec2/faqs/;How are Burstable Performance Instances different?;Amazon EC2 allows you to choose between Fixed Performance Instances (e.g. C, M and R instance families) and Burstable Performance Instances (e.g. T2). Burstable Performance Instances provide a baseline level of CPU performance with the ability to burst above the baseline. /ec2/faqs/;How do I choose the right Amazon Machine Image (AMI) for my T2 instances?;You will want to verify that the minimum memory requirements of your operating system and applications are within the memory allocated for each T2 instance size (e.g. 512 MiB for t2.nano). 
Operating systems with Graphical User Interfaces (GUI) that consume significant memory and CPU, for example Microsoft Windows, might need a t2.micro or larger instance size for many use cases. You can find AMIs suitable for the t2.nano instance types on AWS Marketplace. Windows customers who do not need the GUI can use the Microsoft Windows Server 2012 R2 Core AMI. /ec2/faqs/;When should I choose a Burstable Performance Instance, such as T2?;T2 instances provide a cost-effective platform for a broad range of general purpose production workloads. T2 Unlimited instances can sustain high CPU performance for as long as required. If your workloads consistently require CPU usage much higher than the baseline, consider dedicated CPU instances such as the M or C instance families. /ec2/faqs/;How can I see the CPU Credit balance for each T2 instance?;You can see the CPU Credit balance for each T2 instance in EC2 per-Instance metrics in Amazon CloudWatch. T2 instances have four metrics: CPUCreditUsage, CPUCreditBalance, CPUSurplusCreditBalance, and CPUSurplusCreditsCharged. CPUCreditUsage indicates the amount of CPU Credits used. CPUCreditBalance indicates the balance of CPU Credits. CPUSurplusCreditBalance indicates credits used for bursting in the absence of earned credits. CPUSurplusCreditsCharged indicates credits that are charged when average usage exceeds the baseline. /ec2/faqs/;What happens to CPU performance if my T2 instance is running low on credits (CPU Credit balance is near zero)?;If your T2 instance has a zero CPU Credit balance, performance will remain at baseline CPU performance. For example, the t2.micro provides baseline CPU performance of 10% of a physical CPU core. If your instance’s CPU Credit balance is approaching zero, CPU performance will be lowered to baseline performance over a 15-minute interval. /ec2/faqs/;Does my T2 instance credit balance persist at stop / start?;No, a stopped instance does not retain its previously earned credit balance. /ec2/faqs/;Can T2 instances be purchased as Reserved Instances or Spot Instances?;T2 instances can be purchased as On-Demand Instances, Reserved Instances, or Spot Instances. /ec2/faqs/;What are Amazon EC2 T4g instances?;Amazon EC2 T4g instances are the next generation of general purpose burstable instances powered by Arm-based AWS Graviton2 processors. T4g instances deliver up to 40% better price performance over T3 instances. They are built on the AWS Nitro System, a combination of dedicated hardware and Nitro hypervisor. /ec2/faqs/;What are some of the ideal use cases for T4g instances?;T4g instances deliver up to 40% better price performance over T3 instances for a wide variety of burstable general purpose workloads such as microservices, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications. Customers deploying applications built on open source software across T instances will find the T4g instances an appealing option to realize the best price performance. Arm developers can also build their applications directly on native Arm hardware as opposed to cross-compilation or emulation. /ec2/faqs/;How can customers get access to the T4g free trial?;Until December 31, 2023, all AWS customers will be enrolled automatically in the T4g free trial as detailed in the AWS Free Tier. During the free-trial period, customers who run a t4g.small instance will automatically get 750 free hours per month deducted from their bill during each month. 
The 750 hours are calculated in aggregate across all Regions in which the t4g.small instances are used. Customers must pay for surplus CPU credits when they exceed the instance's allocated credits during the 750 free hours of the T4g free trial program. For more information about how CPU credits work, see Key concepts and definitions for burstable performance instances in the Amazon EC2 User Guide for Linux Instances. /ec2/faqs/;Who is eligible for the T4g free trial?; The T4g free trial is currently available across these AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), South America (Sao Paulo), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Stockholm). It is currently not available in the China (Beijing) and China (Ningxia) Regions. /ec2/faqs/;What is the regional availability of the T4g free trial?;As part of the free trial, customers can run t4g.small instances across one or multiple Regions from a single cumulative bucket of 750 free hours per month until December 31, 2023. For example, a customer can run t4g.small in Oregon for 300 hours for a month and run another t4g.small in Tokyo for 450 hours during the same month. This would add up to the 750 hours per month of the free-trial limit. /ec2/faqs/;Is there an additional charge for running specific AMIs under the T4g free trial?; The T4g free trial has a monthly billing cycle that starts on the first of every month and ends on the last day of that month. Under the T4g free-trial billing plan, customers using t4g.small will see a $0 line item on their bill under the On-Demand pricing plan for the first 750 aggregate hours of usage for every month during the free-trial period. Customers can start any time during the free-trial period and get 750 free hours for the remainder of that month. Any unused hours from the previous month will not be carried over. Customers can launch multiple t4g.small instances under the free trial. Customers will be notified automatically through email using AWS Budgets when their aggregate monthly usage reaches 85% of 750 free hours. When the aggregate instance usage exceeds 750 hours for the monthly billing cycle, customers will be charged based on regular On-Demand pricing for the exceeded hours for that month. For customers with a Compute Savings Plan or T4g Instance Savings Plan, the Savings Plan (SV) discount will be applied to On-Demand pricing for hours exceeding the 750 free trial hours. If customers have purchased the T4g Reserved Instance (RI) plan, the RI plan applies first to any usage on an hourly basis. For any remaining usage after the RI plan has been applied, the free trial billing plan is in effect. /ec2/faqs/;How will the t4g.small free trial be reflected on my AWS bill?; No, customers who use consolidated billing to consolidate payment across multiple accounts will have access to one free trial per Organization. Each payer account gets a total aggregate of 750 free hours a month. For more details about consolidated billing, see Consolidated billing for AWS Organizations in the AWS Billing and Cost Management User Guide. 
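The CPU credit and surplus-credit behavior described above can be observed per instance through the CloudWatch metrics that burstable instances publish. A minimal boto3 sketch, assuming a placeholder instance ID and Region:

from datetime import datetime, timedelta, timezone
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=3)

# Pull the average credit balance and surplus credit balance over the last 3 hours.
for metric in ("CPUCreditBalance", "CPUSurplusCreditBalance"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric,
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(metric, point["Timestamp"], round(point["Average"], 2))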
/ec2/faqs/;If customers sign up for consolidated billing (or a single payer account), can they get the T4g free trial for each account that is tied to the payer account?; Customers must pay for surplus CPU credits when they exceed the instances allocated credits during the 750 free hours of the T4g free trial program. For details about how CPU credits work, see Key concepts and definitions for burstable performance instances in the Amazon EC2 User Guide for Linux Instances. /ec2/faqs/;Will customers get charged for surplus CPU credits as a part of T4g free trial?; Starting January 1, 2024, customers running on t4g.small instances will be automatically switched from the free trial plan to the On-Demand pricing plan (or Reserved Instance (RI)/Savings Plan (SV) plan, if purchased). Accumulated credits will be set to zero. Customers will receive an email notification seven days before the end of the free trial period stating that the free trial period will be ending in seven days. Starting January 1, 2024, if the RI plan is purchased, the RI plans will apply. Otherwise, customers will be charged regular On-Demand pricing for t4g.small instances. For customers who have the T4g Instance Savings Plan or a Compute Savings Plan, t4g.small instance billing will apply the Savings Plan discount on their On-Demand pricing. /ec2/faqs/;When should I use Compute Optimized instances?;Compute Optimized instances are designed for applications that benefit from high compute power. These applications include compute-intensive applications like high-performance web servers, high-performance computing (HPC), scientific modelling, distributed analytics and machine learning inference. /ec2/faqs/;What are Amazon EC2 C6g instances?;Amazon EC2 C6g instances are the next-generation of compute-optimized instances powered by Arm-based AWS Graviton2 Processors. C6g instances deliver up to 40% better price performance over C5 instances. They are built on the AWS Nitro System, a combination of dedicated hardware and Nitro hypervisor. /ec2/faqs/;What are some of the ideal use cases for C6g instances?;C6g instances deliver significant price performance benefits for compute-intensive workloads such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference. Customers deploying applications built on open source software across C instances family will find the C6g instances an appealing option to realize the best price performance. Arm developers can also build their applications directly on native Arm hardware as opposed to cross-compilation or emulation. /ec2/faqs/;What are the various storage options available on C6g instances?;C6g instances are EBS-optimized by default and offer up to 19,000 Mbps of dedicated EBS bandwidth to both encrypted and unencrypted EBS volumes. C6g instances only support Non-Volatile Memory Express (NVMe) interface to access EBS storage volumes. Additionally, options with local NVMe instance storage are also available through the C6gd instance types. /ec2/faqs/;Which network interface is supported on C6g instances?;C6g instances support ENbased Enhanced Networking. With ENA, C6g instances can deliver up to 25 Gbps of network bandwidth between instances when launched within a Placement Group. /ec2/faqs/;Will customers need to modify their applications and workloads to be able to run on the C6g instances?;The changes required are dependent on the application. 
Customers running applications built on open source software will find that the Arm ecosystem is well developed and already likely supports their applications. Most Linux distributions as well as containers (Docker, Kubernetes, Amazon ECS, Amazon EKS, Amazon ECR) support the Arm architecture. Customers will find Arm versions of commonly used software packages available for installation through the same mechanisms that they currently use. Applications that are based on interpreted languages (such as Java, Node, Python) not reliant on native CPU instruction sets should run with minimal to no changes. Applications developed using compiled languages (C, C++, GoLang) will need to be re-compiled to generate Arm binaries. The Arm architecture is well supported in these popular programming languages and modern code usually requires a simple ‘Make’ command. Refer to the Getting Started guide on GitHub for more details. /ec2/faqs/;Will there be more compute choices offered with the C6 instance families?;Yes, we plan to offer Intel and AMD CPU powered instances in the future as part of the C6 instance families. /ec2/faqs/;Can I launch C4 instances as Amazon EBS-optimized instances?;Each C4 instance type is EBS-optimized by default. C4 instances provide 500 Mbps to 4,000 Mbps of dedicated bandwidth to EBS above and beyond the general-purpose network throughput provided to the instance. Since this feature is always enabled on C4 instances, launching a C4 instance explicitly as EBS-optimized will not affect the instance's behavior. /ec2/faqs/;How can I use the processor state control feature available on the c4.8xlarge instance?;"The c4.8xlarge instance type provides the ability for an operating system to control processor C-states and P-states. This feature is currently available only on Linux instances. You may want to change C-state or P-state settings to increase processor performance consistency, reduce latency, or tune your instance for a specific workload. By default, Amazon Linux provides the highest-performance configuration that is optimal for most customer workloads; however, if your application would benefit from lower latency at the cost of higher single- or dual-core frequencies, or from lower-frequency sustained performance as opposed to bursty Turbo Boost frequencies, then you should consider experimenting with the C-state or P-state configuration options that are available to these instances. For additional information on this feature, see the Amazon EC2 User Guide section on Processor State Control." /ec2/faqs/;Which instances are available within the Compute Optimized instances category?;C6g instances: Amazon EC2 C6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over C5 instances and are ideal for running advanced compute-intensive workloads. This includes workloads such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference. /ec2/faqs/;Why should customers choose C6i instances over C5 instances?;C6i instances offer up to 15% better price performance over C5 instances, and always-on memory encryption using Intel Total Memory Encryption (TME). C6i instances provide a new instance size (c6i.32xlarge) with 128 vCPUs and 256 GiB of memory, 33% more than the largest C5 instance. They also provide up to 9% higher memory bandwidth per vCPU compared to C5 instances. 
C6i instances also give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to the Amazon Elastic Block Store, twice that of C5 instances. /ec2/faqs/;Why should customers choose C5 instances over C4 instances?;The generational improvement in CPU performance and the lower price of C5 instances, which combined result in a 25% price/performance improvement relative to C4 instances, benefit a broad spectrum of workloads that currently run on C3 or C4 instances. For floating-point-intensive applications, Intel AVX-512 enables significant improvements in delivered TFLOPS by effectively extracting data-level parallelism. Customers looking for absolute performance for graphics rendering and HPC workloads that can be accelerated with GPUs or FPGAs should also evaluate other instance families in the Amazon EC2 portfolio that include those resources to find the ideal instance for their workload. /ec2/faqs/;Which storage interface is supported on C5 instances?;C5 instances support only the NVMe EBS device model. EBS volumes attached to C5 instances will appear as NVMe devices. NVMe is a modern storage interface that provides latency reduction and results in increased disk I/O and throughput. /ec2/faqs/;Why does the total memory reported by the operating system not exactly match the advertised memory on instance types?;Portions of the EC2 instance memory are reserved and used by the virtual BIOS for video RAM, DMI, and ACPI. In addition, for instances that are powered by the AWS Nitro Hypervisor, a small percentage of the instance memory is reserved by the Amazon EC2 Nitro Hypervisor to manage virtualization. /ec2/faqs/;Which instances are available within the high performance computing (HPC) instances category?;Hpc6a instances: Hpc6a instances are powered by 96 cores of third-generation AMD EPYC processors with an all-core turbo frequency of 3.6 GHz and 384 GiB RAM. Hpc6a instances offer 100 Gbps Elastic Fabric Adapter (EFA) networking enabled for high-throughput inter-node communications to help you run your HPC workloads at scale. Hpc6a instances deliver up to 65% better price performance over comparable compute-optimized x86-based instances. /ec2/faqs/;What is the Regional availability of the Hpc6a instances?;The Hpc6a instances are available in US East (Ohio), Europe (Stockholm), and AWS GovCloud (US-West). To optimize networking for tightly coupled workloads, you can access Hpc6a instances in a single Availability Zone in each Region where available. /ec2/faqs/;Which Amazon Machine Images (AMIs) are supported on Hpc6a instances?;Hpc6a instances support Amazon Linux 2, Amazon Linux, Ubuntu 18.04 or later, Red Hat Enterprise Linux 7.4 or later, SUSE Linux Enterprise Server 12 SP2 or later, CentOS 7 or later, and FreeBSD 11.1 or later. These instances also support Windows Server 2012, 2012 R2, 2016, and 2019. /ec2/faqs/;Which pricing models do Hpc6a instances support?;Hpc6a instances are available for purchase through 1-year and 3-year Standard Reserved Instances, Convertible Reserved Instances, Savings Plans, and On-Demand Instances. /ec2/faqs/;How are Hpc6id instances different from other EC2 instances?;Hpc6id instances are optimized to deliver capabilities suited for memory-bound, data-intensive high performance computing (HPC) workloads. Hyper-threading is disabled to increase per-vCPU throughput, and the instances provide up to 5 GB/s of memory bandwidth per vCPU. 
These instances deliver 200 Gbps network bandwidth optimized for traffic between instances in the same virtual private cloud (VPC), and support Elastic Fabric Adapter (EFA) for increased network performance. To optimize Hpc6id instances networking for tightly coupled workloads, you can access EC2 Hpc6id instances in a single Availability Zone in each Region. /ec2/faqs/;What is the Regional availability of the Hpc6id instances?;Hpc6id instances are available in US East (Ohio) and AWS GovCloud (US-West) in a single Availability Zone in each Region. /ec2/faqs/;Which Amazon Machine Images (AMIs) are supported on Hpc6id instances?;Hpc6id supports Amazon Linux 2, Amazon Linux, Ubuntu 18.04 or later, Red Hat Enterprise Linux 7.4 or later, SUSE Linux Enterprise Server 12 SP2 or later, CentOS 7 or later, Windows Server 2008 R2 or earlier, and FreeBSD 11.1 or later. /ec2/faqs/;Which pricing models do Hpc6id instances support?;Hpc6id instances are available for purchase through the 1-year and 3-year Amazon EC2 Instance Savings Plans, Compute Savings Plans, EC2 On-Demand Instances, and EC2 Reserved Instances. /ec2/faqs/;What are Amazon EC2 M6g instances?;Amazon EC2 M6g instances are the next-generation of general-purpose instances powered by Arm-based AWS Graviton2 Processors. M6g instances deliver up to 40% better price/performance over M5 instances. They are built on the AWS Nitro System, a combination of dedicated hardware and Nitro hypervisor. /ec2/faqs/;What are the specifications of the new AWS Graviton2 Processors?;The AWS Graviton2 processors deliver up to 7x performance, 4x the number of compute cores, 2x larger caches, 5x faster memory, and 50% faster per core encryption performance than first generation AWS Graviton processors. Each core of the AWS Graviton2 processor is a single-threaded vCPU. These processors also offer always-on fully encrypted DRAM memory, hardware acceleration for compression workloads, dedicated engines per vCPU that double the floating-point performance for workloads such as video encoding, and instructions for int8/fp16 CPU-based machine learning inference acceleration. The CPUs are built utilizing 64-bit Arm Neoverse cores and custom silicon designed by AWS on the advanced 7 nm manufacturing technology. /ec2/faqs/;Is memory encryption supported by AWS Graviton2 processors?;AWS Graviton2 processors support always-on 256-bit memory encryption to further enhance security. Encryption keys are securely generated within the host system, do not leave the host system, and are irrecoverably destroyed when the host is rebooted or powered down. Memory encryption does not support integration with AWS Key Management Service (KMS) and customers cannot bring their own keys. /ec2/faqs/;What are some of the ideal use cases for M6g instances?;M6g instances deliver significant performance and price performance benefits for a broad spectrum of general-purpose workloads such as application servers, gaming servers, microservices, mid-size databases, and caching fleets. Customers deploying applications built on open source software across the M instances will find the M6g instances an appealing option to realize the best price performance. Arm developers can also build their applications directly on native Arm hardware as opposed to cross-compilation or emulation. /ec2/faqs/;What are the various storage options available on M6g instances?;M6g instances are EBS-optimized by default and offer up to 19,000 Mbps of dedicated EBS bandwidth to both encrypted and unencrypted EBS volumes. 
M6g instances support only the Non-Volatile Memory Express (NVMe) interface to access EBS storage volumes. Additionally, options with local NVMe instance storage are also available through the M6gd instance types. /ec2/faqs/;Which network interface is supported on M6g instances?;M6g instances support ENA-based Enhanced Networking. With ENA, M6g instances can deliver up to 25 Gbps of network bandwidth between instances when launched within a Placement Group. /ec2/faqs/;Will customers need to modify their applications and workloads to be able to run on the M6g instances?;The changes required are dependent on the application. Customers running applications built on open source software will find that the Arm ecosystem is well developed and already likely supports their applications. Most Linux distributions as well as containers (Docker, Kubernetes, Amazon ECS, Amazon EKS, Amazon ECR) support the Arm architecture. Customers will find Arm versions of commonly used software packages available for installation through the same mechanisms that they currently use. Applications that are based on interpreted languages (such as Java, Node, Python) not reliant on native CPU instruction sets should run with minimal to no changes. Applications developed using compiled languages (C, C++, GoLang) will need to be re-compiled to generate Arm binaries. The Arm architecture is well supported in these popular programming languages and modern code usually requires a simple ‘Make’ command. Refer to the Getting Started guide on GitHub for more details. /ec2/faqs/;What are Amazon EC2 A1 instances?;Amazon EC2 A1 instances are general purpose instances powered by the first-generation AWS Graviton Processors that are custom designed by AWS. /ec2/faqs/;When should I use A1 instances?;A1 instances deliver significant cost savings for scale-out workloads that can fit within the available memory footprint. A1 instances are ideal for scale-out applications such as web servers, containerized microservices, and data/log processing. These instances will also appeal to developers, enthusiasts, and educators across the Arm developer community. /ec2/faqs/;Will customers have to modify applications and workloads to be able to run on the A1 instances?;The changes required are dependent on the application. Applications based on interpreted or run-time compiled languages (e.g. Python, Java, PHP, Node.js) should run without modifications. Other applications may need to be recompiled, and those that don't rely on x86 instructions will generally build with minimal to no changes. /ec2/faqs/;Which operating systems/AMIs are supported on A1 instances?;The following AMIs are supported on A1 instances: Amazon Linux 2, Ubuntu 16.04.4 or newer, Red Hat Enterprise Linux (RHEL) 7.6 or newer, and SUSE Linux Enterprise Server 15 or newer. Additional AMIs for Fedora, Debian, and NGINX Plus are also available through community AMIs and the AWS Marketplace. EBS-backed HVM AMIs launched on A1 instances require NVMe and ENA drivers installed at instance launch. /ec2/faqs/;Are there specific AMI requirements to run on M6g and A1 instances?;You will need to use the “arm64” AMIs with the M6g and A1 instances. x86 AMIs are not compatible with M6g and A1 instances. /ec2/faqs/;When should customers use A1 instances versus the new M6g instances?;A1 instances continue to offer significant cost benefits for scale-out workloads that can run on multiple smaller cores and fit within the available memory footprint. 
The new M6g instances are a good fit for a broad spectrum of applications that require more compute, memory, and networking resources and/or can benefit from scaling up across platform capabilities. M6g instances will deliver the best price-performance within the instance family for these applications. M6g supports instance sizes up to 16xlarge (A1 supports up to 4xlarge), 4 GB of memory per vCPU (A1 supports 2 GB of memory per vCPU), and up to 25 Gbps of networking bandwidth (A1 supports up to 10 Gbps). /ec2/faqs/;What are the various storage options available to A1 customers?;A1 instances are EBS-optimized by default and offer up to 3,500 Mbps of dedicated EBS bandwidth to both encrypted and unencrypted EBS volumes. A1 instances support only the Non-Volatile Memory Express (NVMe) interface to access EBS storage volumes. A1 instances will not support the blkfront interface. /ec2/faqs/;Which network interface is supported on A1 instances?;A1 instances support ENA-based Enhanced Networking. With ENA, A1 instances can deliver up to 10 Gbps of network bandwidth between instances when launched within a Placement Group. /ec2/faqs/;Do A1 instances support the AWS Nitro System?;Yes, A1 instances are powered by the AWS Nitro System, a combination of dedicated hardware and Nitro hypervisor. /ec2/faqs/;Why should customers choose EC2 M5 Instances over EC2 M4 Instances?;Compared with EC2 M4 Instances, the new EC2 M5 Instances deliver greater compute and storage performance, larger instance sizes at lower cost, and improved consistency and security. The biggest benefit of EC2 M5 Instances comes from their use of the latest generation of Intel Xeon Scalable processors (Skylake-SP or Cascade Lake), which deliver up to a 20% improvement in price/performance compared to M4. With AVX-512 support in M5 vs. the older AVX2 in M4, customers will gain 2x higher performance in workloads requiring floating point operations. M5 instances offer up to 25 Gbps of network bandwidth and up to 10 Gbps of dedicated bandwidth to Amazon EBS. M5 instances also feature significantly higher networking and Amazon EBS performance on smaller instance sizes with EBS burst capability. /ec2/faqs/;Why should customers choose M6i instances over M5 instances?;Amazon M6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 15% better compute price performance over M5 instances, and always-on memory encryption using Intel Total Memory Encryption (TME). Amazon EC2 M6i instances are the first to use a lower-case “i” to indicate they are Intel-powered instances. M6i instances provide a new instance size (m6i.32xlarge) with 128 vCPUs and 512 GiB of memory, 33% more than the largest M5 instance. They also provide up to 20% higher memory bandwidth per vCPU compared to M5 instances, allowing customers to efficiently perform real-time analysis for data-intensive AI/ML, gaming, and High Performance Computing (HPC) applications. M6i instances also give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to the Amazon Elastic Block Store, twice that of M5 instances. M6i instances also allow customers to use Elastic Fabric Adapter on the 32xlarge size, enabling low-latency and high-scale inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for M6i, see this article. 
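Because Graviton-based instances such as M6g and A1 require “arm64” AMIs, as noted above, one way to locate suitable images programmatically is to filter by architecture. A minimal boto3 sketch, assuming Amazon-owned Amazon Linux 2 images and a placeholder Region; the name pattern is an assumption and can be adjusted for other distributions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find Amazon-owned arm64 Amazon Linux 2 AMIs that are currently available.
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "architecture", "Values": ["arm64"]},
        {"Name": "name", "Values": ["amzn2-ami-hvm-*"]},
        {"Name": "state", "Values": ["available"]},
    ],
)["Images"]

# Sort newest first and print a few candidates.
for image in sorted(images, key=lambda i: i["CreationDate"], reverse=True)[:5]:
    print(image["ImageId"], image["Name"])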
/ec2/faqs/;How does support for Intel AVX-512 benefit customers who use the EC2 M5 family or the M6i family?;Intel Advanced Vector Extensions 512 (AVX-512) is a set of new CPU instructions available on the latest Intel Xeon Scalable processors that can accelerate performance for workloads and usages such as scientific simulations, financial analytics, artificial intelligence, machine learning/deep learning, 3D modeling and analysis, image and video processing, cryptography, and data compression, among others. Intel AVX-512 offers exceptional processing of encryption algorithms, helping to reduce the performance overhead for cryptography, which means customers who use the EC2 M5 family or M6i family can deploy more secure data and services into distributed environments without compromising performance. /ec2/faqs/;What are M5zn instances?;M5zn instances are a variant of the M5 general purpose instances that are powered by the fastest Intel Xeon Scalable processor in the cloud, with an all-core turbo frequency of up to 4.5 GHz, along with 100 Gbps networking and support for Amazon EFA. M5zn instances are an ideal fit for workloads such as gaming, financial applications, simulation modeling applications such as those used in the automotive, aerospace, energy, and telecommunication industries, and other High Performance Computing applications. /ec2/faqs/;How are M5zn instances different from z1d instances?;z1d instances are memory-optimized instances, and feature a high-frequency version of the Intel Xeon Scalable processors (up to 4.0 GHz), along with local NVMe storage. M5zn instances are general purpose instances, and feature a high-frequency version of the 2nd Generation Intel Xeon Scalable processors (up to 4.5 GHz), along with up to 100 Gbps networking performance, and support for EFA. M5zn instances offer improved price performance compared to z1d. /ec2/faqs/;What are EC2 High Memory instances?;Amazon EC2 High Memory instances offer 3, 6, 9, 12, 18, or 24 TiB of memory in a single instance. These instances are designed to run large in-memory databases, including production installations of SAP HANA, in the cloud. /ec2/faqs/;Are High Memory instances certified by SAP to run SAP HANA workloads?;High Memory instances are certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see SAP's Certified and Supported SAP HANA Hardware Directory. /ec2/faqs/;Are High Memory instances only available as bare metal?;High Memory instances are available as both bare metal and virtualized instances, giving customers the choice to have direct access to the underlying hardware resources, or to take advantage of the additional flexibility that virtualized instances offer, including On-Demand and 1-year and 3-year Savings Plan purchase options. /ec2/faqs/;What are the storage options available with High Memory instances?;High Memory instances support Amazon EBS volumes for storage. High Memory instances are EBS-optimized by default, and offer up to 38 Gbps of storage bandwidth. /ec2/faqs/;Which storage interface is supported on High Memory instances?;High Memory instances access EBS volumes via PCI-attached NVM Express (NVMe) interfaces. EBS volumes attached to High Memory instances appear as NVMe devices. 
NVMe is an efficient and scalable storage interface, which is commonly used for flash-based SSDs and provides latency reduction and results in increased disk I/O and throughput. The EBS volumes are attached and detached by PCI hotplug. /ec2/faqs/;What network performance is supported on High Memory instances?;High Memory instances use the Elastic Network Adapter (ENA) for networking and enable Enhanced Networking by default. With ENA, High Memory instances can utilize up to 100 Gbps of network bandwidth. /ec2/faqs/;Can I run High Memory instances in my existing Amazon Virtual Private Cloud (VPC)?;You can run High Memory instances in your existing and new Amazon VPCs. /ec2/faqs/;What is the underlying hypervisor on High Memory instances?;High Memory instances use the lightweight Nitro Hypervisor that is based on core KVM technology. /ec2/faqs/;Do High Memory instances enable CPU power management state control?;Yes. You can configure C-states and P-states on High Memory instances. You can use C-states to enable higher turbo frequencies (as much as 4.0 GHz). You can also use P-states to lower performance variability by pinning all cores at P1 or higher P states, which is similar to disabling Turbo, and running consistently at the base CPU clock speed. /ec2/faqs/;What purchase options are available for High Memory instances?;EC2 High Memory bare metal instances (e.g. u-6tb1.metal) are only available as EC2 Dedicated Hosts on 1-year and 3-year reservations. EC2 High Memory virtualized instances (e.g. u-6tb1.112xlarge) are available for purchase via 1-year and 3-year Savings Plans, On-Demand Instances, and as Dedicated Hosts. /ec2/faqs/;What is the lifecycle of a Dedicated Host?;"Once a Dedicated Host is allocated within your account, it will be standing by for your use. You can then launch an instance with a tenancy of ""host"" using the RunInstances API, and can also stop/start/terminate the instance through the API. You can use the AWS Management Console to manage the Dedicated Host and the instance." /ec2/faqs/;Can I launch, stop/start, and terminate High Memory instances using AWS CLI/SDK?;You can launch, stop/start, and terminate instances using the AWS CLI/SDK. /ec2/faqs/;Which AMIs are supported with High Memory instances?;EBS-backed HVM AMIs with support for ENA networking can be used with High Memory instances. The latest Amazon Linux, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Windows Server AMIs are supported. Operating system support for SAP HANA workloads on High Memory instances includes: SUSE Linux Enterprise Server 12 SP3 for SAP, Red Hat Enterprise Linux 7.4 for SAP, Red Hat Enterprise Linux 7.5 for SAP, SUSE Linux Enterprise Server 12 SP4 for SAP, SUSE Linux Enterprise Server 15 for SAP, and Red Hat Enterprise Linux 7.6 for SAP. Refer to SAP's Certified and Supported SAP HANA Hardware Directory for the latest details on supported operating systems. /ec2/faqs/;Are there standard SAP HANA reference deployment frameworks available for the High Memory instance and the AWS Cloud?;You can use the AWS Quick Start reference SAP HANA deployments to rapidly deploy all the necessary SAP HANA building blocks on High Memory instances following SAP's recommendations for high performance and reliability. AWS Quick Starts are modular and customizable, so you can layer additional functionality on top or modify them for your own implementations. 
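To make the Dedicated Host lifecycle described above concrete, the following boto3 sketch allocates a Dedicated Host and then launches an instance onto it with "host" tenancy using RunInstances; the Region, Availability Zone, instance type, and AMI ID are hypothetical placeholders chosen only for illustration.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate a Dedicated Host for a given instance type (placeholder values).
host_id = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="m5.large",
    Quantity=1,
)["HostIds"][0]

# Launch an instance with "host" tenancy onto that Dedicated Host.
instance = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder EBS-backed HVM AMI with ENA support
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)["Instances"][0]

print("Instance", instance["InstanceId"], "launched on Dedicated Host", host_id)

# The same client can later stop, start, or terminate the instance, for example:
# ec2.stop_instances(InstanceIds=[instance["InstanceId"]])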
/ec2/faqs/;When should I use memory-optimized instances?;Memory-optimized instances offer large memory sizes for memory-intensive applications, including in-memory applications, in-memory databases, in-memory analytics solutions, High Performance Computing (HPC), scientific computing, and other memory-intensive applications. /ec2/faqs/;What are Amazon EC2 R6g instances?;Amazon EC2 R6g instances are the next-generation of memory-optimized instances powered by Arm-based AWS Graviton2 Processors. R6g instances deliver up to 40% better price performance over R5 instances. They are built on the AWS Nitro System, a combination of dedicated hardware and Nitro hypervisor. /ec2/faqs/;What are some of the ideal use cases for R6g instances?;R6g instances deliver significant price performance benefits for memory-intensive workloads and are ideal for running workloads such as open-source databases, in-memory caches, and real-time big data analytics. Customers deploying applications built on open source software across R instances will find the R6g instances an appealing option to realize the best price performance within the instance family. Arm developers can also build their applications directly on native Arm hardware as opposed to cross-compilation or emulation. /ec2/faqs/;What are the various storage options available on R6g instances?;R6g instances are EBS-optimized by default and offer up to 19,000 Mbps of dedicated EBS bandwidth to both encrypted and unencrypted EBS volumes. R6g instances support only the Non-Volatile Memory Express (NVMe) interface to access EBS storage volumes. Additionally, options with local NVMe instance storage are also available through the R6gd instance types. /ec2/faqs/;Which network interface is supported on R6g instances?;R6g instances support ENA-based Enhanced Networking. With ENA, R6g instances can deliver up to 25 Gbps of network bandwidth between instances when launched within a Placement Group. /ec2/faqs/;Will customers need to modify their applications and workloads to be able to run on the R6g instances?;The changes required are dependent on the application. Customers running applications built on open source software will find that the Arm ecosystem is well developed and already likely supports their applications. Most Linux distributions as well as containers (Docker, Kubernetes, Amazon ECS, Amazon EKS, Amazon ECR) support the Arm architecture. Customers will find Arm versions of commonly used software packages available for installation through the same mechanisms that they currently use. Applications that are based on interpreted languages (such as Java, Node, Python) not reliant on native CPU instruction sets should run with minimal to no changes. Applications developed using compiled languages (C, C++, GoLang) will need to be re-compiled to generate Arm binaries. The Arm architecture is well supported in these popular programming languages and modern code usually requires a simple ‘Make’ command. Refer to the Getting Started guide on GitHub for more details. /ec2/faqs/;Why should you choose R6i instances over R5 instances?;"Amazon R6i instances are powered by 3rd Generation Intel Xeon Scalable processors (Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 15% better compute price performance over R5 instances, and always-on memory encryption using Intel Total Memory Encryption (TME). Amazon EC2 R6i instances use a lower-case “i” to indicate they are Intel-powered instances. 
R6i instances provide a new instance size (r6i.32xlarge) with 128 vCPUs and 1,024 GiB of memory, 33% more than the largest R5 instance. They also provide up to 20% higher memory bandwidth per vCPU compared to R5 instances, allowing you to efficiently perform real-time analysis for data-intensive AI/ML, gaming, and high performance computing (HPC) applications. R6i instances also give you up to 50 Gbps of networking speed and 40 Gbps of bandwidth to the Amazon Elastic Block Store, twice that of R5 instances. R6i instances also allow you to use Elastic Fabric Adapter (EFA) on the 32xlarge and metal sizes, enabling low-latency and high-scale inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information about the optimal ENA driver for R6i, see ""What do I need to do before migrating my EC2 instance to a sixth-generation instance?"" on Knowledge Center." /ec2/faqs/;What are Amazon EC2 R5b instances?;R5b instances are EBS-optimized variants of memory-optimized R5 instances that deliver up to 3x better EBS performance compared to same-sized R5 instances. R5b instances deliver up to 60 Gbps bandwidth and 260K IOPS of EBS performance, the fastest block storage performance on EC2. They are built on the AWS Nitro System, which is a combination of dedicated hardware and Nitro hypervisor. /ec2/faqs/;What are some of the ideal use cases for R5b instances?;R5b instances are ideal for large relational database workloads, including Microsoft SQL Server, SAP HANA, IBM DB2, and Oracle, that run performance-intensive applications such as commerce platforms, ERP systems, and health record systems. Customers looking to migrate large on-premises workloads with large storage performance requirements to AWS will find R5b instances to be a good fit. /ec2/faqs/;What are the various storage options available on R5b instances?;R5b instances are EBS-optimized by default and offer up to 60,000 Mbps of dedicated EBS bandwidth and 260K IOPS for both encrypted and unencrypted EBS volumes. R5b instances support only the Non-Volatile Memory Express (NVMe) interface to access EBS storage volumes. All EBS volume types are supported on R5b, with the exception of io2 volumes. /ec2/faqs/;When should I use R5b instances?;Customers running workloads such as large relational databases and data analytics that want to take advantage of the increased EBS storage network performance can use R5b instances to deliver higher performance and bandwidth. Customers can also lower costs by migrating their workloads to smaller-sized R5b instances or by consolidating workloads on fewer R5b instances. /ec2/faqs/;What are the storage options available with High Memory instances?;High Memory instances support Amazon EBS volumes for storage. High Memory instances are EBS-optimized by default, and offer up to 38 Gbps of storage bandwidth to both encrypted and unencrypted EBS volumes. /ec2/faqs/;What are Amazon EC2 X2gd instances?;Amazon EC2 X2gd instances are the next generation of memory-optimized instances powered by AWS-designed, Arm-based AWS Graviton2 processors. X2gd instances deliver up to 55% better price performance compared to x86-based X1 instances and offer the lowest cost per GiB of memory in Amazon EC2. They are the first of the X instances to be built on the AWS Nitro System, which is a combination of dedicated hardware and Nitro hypervisor. 
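When weighing memory-optimized options like those discussed above (for example R5 vs. R5b vs. X2gd), the published instance specifications can also be pulled programmatically rather than read off the console. A minimal boto3 sketch, with the instance types and Region chosen only for illustration:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Compare vCPU count, memory, and maximum EBS bandwidth/IOPS across a few
# memory-optimized instance types.
types = ["r5.2xlarge", "r5b.2xlarge", "x2gd.2xlarge"]
details = ec2.describe_instance_types(InstanceTypes=types)["InstanceTypes"]

for info in details:
    ebs = info["EbsInfo"].get("EbsOptimizedInfo", {})
    print(
        info["InstanceType"],
        info["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        info["MemoryInfo"]["SizeInMiB"] // 1024, "GiB memory,",
        ebs.get("MaximumBandwidthInMbps"), "Mbps EBS bandwidth,",
        ebs.get("MaximumIops"), "IOPS",
    )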
/ec2/faqs/;What workloads are suited for X2gd instances?;X2gd is ideal for customers with Arm-compatible, memory-bound, scale-out workloads such as Redis and Memcached in-memory databases that need low-latency memory access and benefit from more memory per vCPU. X2gd is also well suited for relational databases such as PostgreSQL, MariaDB, MySQL, and RDS Aurora. Customers who run memory-intensive workloads such as Apache Hadoop, real-time analytics, and real-time caching servers will benefit from the 1:16 vCPU-to-memory ratio of X2gd. Single-threaded workloads such as EDA back-end verification jobs will benefit from the physical cores and larger memory of X2gd instances, allowing them to consolidate more workloads onto a single instance. X2gd instances also feature local NVMe SSD block storage to improve response times by acting as a caching layer. /ec2/faqs/;When should I use X2gd instances compared to the X1, X2i, or R instances?;X2gd instances are suitable for Arm-compatible, memory-bound, scale-out workloads such as in-memory databases, memory analytics applications, open-source relational database workloads, EDA workloads, and large caching servers. X2gd instances offer customers the lowest cost per gigabyte of memory within EC2, with sizes up to 1 TiB. X2iezn, X2idn, X2iedn, X1, and X1e instances use x86 processors and are suitable for memory-intensive, enterprise-class, scale-up workloads such as Windows workloads, in-memory databases (e.g. SAP HANA), and relational databases (e.g. OracleDB). Customers can leverage the x86-based X instances for larger memory sizes up to 4 TiB. R6g and R6gd instances are suitable for workloads such as web applications, databases, and search indexing queries that need more vCPUs during times of heavy data processing. Customers running memory-bound workloads that need less than 1 TiB of memory and have a dependency on the x86 instruction set, such as Windows applications and applications like Oracle or SAP, can leverage R5 instances and R6 instances. /ec2/faqs/;When should I use X2idn and X2iedn instances?;X2idn and X2iedn instances are powered by 3rd generation Intel Xeon Scalable processors with an all-core turbo frequency up to 3.5 GHz and deliver up to 50% higher compute price performance than comparable X1 instances. X2idn and X2iedn instances both include up to 3.8 TB of local NVMe SSD storage and up to 100 Gbps of networking bandwidth, while X2idn offers up to 2 TiB of memory and X2iedn offers up to 4 TiB of memory. X2idn and X2iedn instances are SAP-certified and are a great fit for workloads such as small- to large-scale traditional and in-memory databases, and analytics. /ec2/faqs/;When should I use X2iezn instances?;X2iezn instances feature the fastest Intel Xeon Scalable processors in the cloud and are a great fit for workloads that need high single-threaded performance combined with a high memory-to-vCPU ratio and high-speed networking. X2iezn instances have an all-core turbo frequency up to 4.5 GHz, feature a 32:1 ratio of memory to vCPU, and deliver up to 55% higher compute price performance compared to X1e instances. X2iezn instances are a great fit for electronic design automation (EDA) workloads like physical verification, static timing analysis, power signoff, and full-chip gate-level simulation. /ec2/faqs/;Which operating systems/AMIs are supported on X2gd instances?;The following AMIs are supported: Amazon Linux 2, Ubuntu 18.04 or newer, Red Hat Enterprise Linux 8.2 or newer, and SUSE Linux Enterprise Server 15 or newer. 
Customers will find additional AMIs such as Fedora, Debian, NetBSD, and CentOS available through community AMIs and the AWS Marketplace. For containerized applications, Amazon ECS and EKS optimized AMIs are available as well. /ec2/faqs/;When should I use X1 instances?;X1 instances are ideal for running in-memory databases like SAP HANA, big data processing engines like Apache Spark or Presto, and high performance computing (HPC) applications. X1 instances are certified by SAP to run production environments of the next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS Cloud. /ec2/faqs/;Do X1 and X1e instances enable CPU power management state control?;Yes. You can configure C-states and P-states on x1e.32xlarge, x1e.16xlarge, x1e.8xlarge, x1.32xlarge and x1.16xlarge instances. You can use C-states to enable higher turbo frequencies (as much as 3.1 GHz with one or two core turbo). You can also use P-states to lower performance variability by pinning all cores at P1 or higher P-states, which is similar to disabling Turbo, and running consistently at the base CPU clock speed. /ec2/faqs/;Are there standard SAP HANA reference deployment frameworks available for High Memory instances on AWS?;You can use AWS Launch Wizard for SAP or AWS Quick Start reference SAP HANA deployments to rapidly deploy all the necessary SAP HANA building blocks on High Memory instances following recommendations from AWS and SAP for high performance and reliability. /ec2/faqs/;Why don’t I see M1, C1, CC2 and HS1 instances on the pricing pages any more?;These have been moved to the Previous Generation Instance page. /ec2/faqs/;Are these Previous Generation instances still being supported?;Yes. Previous Generation instances are still fully supported. /ec2/faqs/;Can I still use/add more Previous Generation instances?;Yes. Previous Generation instances are still available as On-Demand, Reserved Instances, and Spot Instances, from our APIs, CLI and EC2 Management Console interface. /ec2/faqs/;Are my Previous Generation instances going to be deleted?;No. Your C1, C3, CC2, CR1, G2, HS1, M1, M2, M3, R3 and T1 instances are still fully functional and will not be deleted because of this change. /ec2/faqs/;Are Previous Generation instances being discontinued soon?;Currently, there are no plans to end-of-life Previous Generation instances. However, with any rapidly evolving technology the latest generation will typically provide the best performance for the price and we encourage our customers to take advantage of technological advancements. /ec2/faqs/;Will the Previous Generation instances I purchased as Reserved Instances be affected or changed?;No. Your Reserved Instances will not change, and the Previous Generation instances are not going away. /ec2/faqs/;What is a Dense-storage Instance?;Dense-storage instances are designed for workloads that require high sequential read and write access to very large data sets, such as Hadoop distributed computing, massively parallel processing data warehousing, and log processing applications. Dense-storage instances offer the best price/GB-storage and price/disk-throughput across other EC2 instances. /ec2/faqs/;How do dense-storage instances compare to High I/O instances?;High I/O instances (Im4gn, Is4gen, I4i, I3, I3en) are targeted at workloads that demand low latency and high random I/O in addition to moderate storage density and provide the best price/IOPS across other EC2 instance types. 
Dense-storage instances (D3, D3en, D2) and HDD-storage instances (H1) are optimized for applications that require high sequential read/write access and low cost storage for very large data sets and provide the best price/GB-storage and price/disk-throughput across other EC2 instances. /ec2/faqs/;How much disk throughput can Dense-storage and HDD-storage instances deliver?;The largest current-generation dense HDD-storage instance, d3en.12xlarge, can deliver up to 6.2 GiB/s read and 6.2 GiB/s write disk throughput with a 128k block size. Please see the product detail page for additional performance information. To ensure the best disk throughput performance from your D2, D3 and D3en instances on Linux, we recommend that you use the most recent version of the Amazon Linux AMI, or another Linux AMI with a kernel version of 3.8 or later that supports persistent grants - an extension to the Xen block ring protocol that significantly improves disk throughput and scalability. /ec2/faqs/;Do Dense-storage and HDD-storage instances provide any failover mechanisms or redundancy?;D2 and H1 instances provide notifications for hardware failures. Like all instance storage, dense HDD-storage volumes persist only for the life of the instance. Hence, we recommend that you build a degree of redundancy (e.g. RAID 1/5/6) or use file systems (e.g. HDFS and MapR-FS) that support redundancy and fault tolerance. You can also back up data periodically to more durable data storage solutions such as Amazon Elastic Block Store (EBS) or Amazon Simple Storage Service (S3). /ec2/faqs/;How do dense HDD-storage instances differ from Amazon EBS?;Amazon EBS offers simple, elastic, reliable (replicated), and persistent block level storage for Amazon EC2 while abstracting the details of the underlying storage media in use. Amazon EC2 instances with local HDD or NVMe storage provide directly attached, high performance storage building blocks that can be used for a variety of storage applications. Dense-storage instances are specifically targeted at customers who want high sequential read/write access to large data sets on local storage, e.g. for Hadoop distributed computing and massively parallel processing data warehousing. /ec2/faqs/;Can I launch dense HDD-storage instances as Amazon EBS-optimized instances?;Each HDD-storage instance type (H1, D2, D3, and D3en) is EBS-optimized by default. Since this feature is always enabled, launching one of these instances explicitly as EBS-optimized will not affect the instance's behavior. For more information on EBS-optimized instances, see here. /ec2/faqs/;Can I launch D2 instances as Amazon EBS-optimized instances?;Each D2 instance type is EBS-optimized by default. D2 instances provide 500 Mbps to 4,000 Mbps of dedicated bandwidth to EBS above and beyond the general-purpose network throughput provided to the instance. Since this feature is always enabled on D2 instances, launching a D2 instance explicitly as EBS-optimized will not affect the instance's behavior. /ec2/faqs/;Are Dense-storage instances offered in EC2 Classic?;The current generation of Dense-storage instances (D2 instances) can be launched in both EC2-Classic and Amazon VPC. However, by launching a Dense-storage instance into a VPC, you can leverage a number of features that are available only on the Amazon VPC platform – such as enabling enhanced networking, assigning multiple private IP addresses to your instances, or changing your instances' security groups. 
For more information about the benefits of using a VPC, see Amazon EC2 and Amazon Virtual Private Cloud (Amazon VPC). You can take steps to migrate your resources from EC2-Classic to Amazon VPC. For more information, see Migrating a Linux Instance from EC2-Classic to a VPC. /ec2/faqs/;What is a High I/O instance?;High I/O instances use NVMe-based local instance storage to deliver very high, low-latency I/O capacity to applications, and are optimized for applications that require millions of IOPS. Like Cluster instances, High I/O instances can be clustered via cluster placement groups for low latency networking. /ec2/faqs/;Are all features of Amazon EC2 available for High I/O instances?;High I/O instances support all Amazon EC2 features. Im4gn, Is4gen, I4i, I3 and I3en instances offer NVMe-only storage, while previous generation I2 instances allow legacy blkfront storage access. /ec2/faqs/;AWS has other database and Big Data offerings. When or why should I use High I/O instances?;High I/O instances are ideal for applications that require access to millions of low latency IOPS, and can leverage data stores and architectures that manage data redundancy and availability. Example applications are: /ec2/faqs/;Do High I/O instances provide any failover mechanisms or redundancy?;Like other Amazon EC2 instance types, instance storage on Im4gn, Is4gen, I4i, I3 and I3en instances persists during the life of the instance. Customers are expected to build resilience into their applications. We recommend using databases and file systems that support redundancy and fault tolerance. Customers should back up data periodically to Amazon S3 for improved data durability. /ec2/faqs/;Do High I/O instances support TRIM?;The TRIM command allows the operating system to inform SSDs which blocks of data are no longer considered in use and can be wiped internally. In the absence of TRIM, future write operations to the involved blocks can slow down significantly. Im4gn, Is4gen, I4i, I3 and I3en instances support TRIM. /ec2/faqs/;How do D3 and D3en instances compare to D2 instances?;D3 and D3en instances offer improved specifications over D2 on the following compute, storage and network attributes: /ec2/faqs/;Do D3 and D3en instances encrypt storage volumes and network traffic?;"Yes; data written onto the storage volumes will be encrypted at rest using AES-256-XTS. Network traffic between D3 and D3en instances in the same VPC or a peered VPC is encrypted by default using a 256-bit key." /ec2/faqs/;What happens to my data when a system terminates?;"The data stored on a local instance store will persist only as long as that instance is alive. However, data that is stored on an Amazon EBS volume will persist independently of the life of the instance. Therefore, we recommend that you use the local instance store for temporary data and, for data requiring a higher level of durability, we recommend using Amazon EBS volumes or backing up the data to Amazon S3. If you are using an Amazon EBS volume as a root partition, you will need to set the Delete On Terminate flag to ""No"" if you want your Amazon EBS volume to persist outside the life of the instance." /ec2/faqs/;What kind of performance can I expect from Amazon EBS volumes?;Amazon EBS provides four current generation volume types that are divided into two major categories: SSD-backed storage for transactional workloads and HDD-backed storage for throughput intensive workloads. 
These volume types differ in performance characteristics and price, allowing you to tailor your storage performance and cost to the needs of your applications. For more information, see the EBS product details page, and for additional information on performance, see the Amazon EC2 User Guide's EBS Performance section. /ec2/faqs/;What are Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types?;ST1 volumes are backed by hard disk drives (HDDs) and are ideal for frequently accessed, throughput intensive workloads with large datasets and large I/O sizes, such as MapReduce, Kafka, log processing, data warehouse, and ETL workloads. These volumes deliver performance in terms of throughput, measured in MB/s, and include the ability to burst up to 250 MB/s per TB, with a baseline throughput of 40 MB/s per TB and a maximum throughput of 500 MB/s per volume. ST1 is designed to deliver the expected throughput performance 99% of the time and has enough I/O credits to support a full-volume scan at the burst rate. /ec2/faqs/;Which volume type should I choose?;Amazon EBS includes two major categories of storage: SSD-backed storage for transactional workloads (performance depends primarily on IOPS) and HDD-backed storage for throughput workloads (performance depends primarily on throughput, measured in MB/s). SSD-backed volumes are designed for transactional, IOPS-intensive database workloads, boot volumes, and workloads that require high IOPS. SSD-backed volumes include Provisioned IOPS SSD (io1 and io2) and General Purpose SSD (gp2 and gp3). HDD-backed volumes are designed for throughput-intensive and big-data workloads, large I/O sizes, and sequential I/O patterns. HDD-backed volumes include Throughput Optimized HDD (st1) and Cold HDD (sc1). For more information on Amazon EBS see the EBS product details page. /ec2/faqs/;Do you support multiple instances accessing a single volume?;Yes, you can enable Multi-Attach on an EBS Provisioned IOPS io1 volume to allow a volume to be concurrently attached to up to sixteen Nitro-based EC2 instances within the same Availability Zone. For more information on Amazon EBS Multi-Attach, see the EBS product page. /ec2/faqs/;Will I be able to access my EBS snapshots using the regular Amazon S3 APIs?;No, EBS snapshots are only available through the Amazon EC2 APIs. /ec2/faqs/;Do volumes need to be un-mounted in order to take a snapshot? Does the snapshot need to complete before the volume can be used again?;No, snapshots can be done in real time while the volume is attached and in use. However, snapshots only capture data that has been written to your Amazon EBS volume, which might exclude any data that has been locally cached by your application or OS. In order to ensure consistent snapshots on volumes attached to an instance, we recommend cleanly detaching the volume, issuing the snapshot command, and then reattaching the volume. For Amazon EBS volumes that serve as root devices, we recommend shutting down the machine to take a clean snapshot. /ec2/faqs/;Are snapshots versioned? Can I read an older snapshot to do a point-in-time recovery?;Each snapshot is given a unique identifier, and customers can create volumes based on any of their existing snapshots. /ec2/faqs/;What charges apply when using Amazon EBS shared snapshots?;If you share a snapshot, you won’t be charged when other users make a copy of your snapshot. If you make a copy of another user’s shared volume, you will be charged normal EBS rates. 
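To make the Multi-Attach answer above concrete, here is a minimal boto3 sketch of creating a Provisioned IOPS io1 volume with Multi-Attach enabled and attaching it to two Nitro-based instances in the same Availability Zone. The instance IDs, Availability Zone, size, and IOPS values are placeholders, not values from the FAQ.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an io1 volume with Multi-Attach enabled (placeholder size/IOPS/AZ).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="io1",
    Iops=5000,
    MultiAttachEnabled=True,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the same volume to two Nitro-based instances in that AZ (hypothetical IDs).
for instance_id in ["i-0123456789abcdef0", "i-0fedcba9876543210"]:
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device="/dev/sdf",
    )
```

Note that Multi-Attach only handles concurrent block access; your application or clustered file system is still responsible for coordinating writes across the attached instances.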
/ec2/faqs/;Can users of my Amazon EBS shared snapshots change any of my data?;Users who have permission to create volumes based on your shared snapshots will first make a copy of the snapshot into their account. Users can modify their own copies of the data, but the data on your original snapshot and any other volumes created by other users from your original snapshot will remain unmodified. /ec2/faqs/;How can I discover Amazon EBS snapshots that have been shared with me?;You can find snapshots that have been shared with you by selecting “Private Snapshots” from the viewing dropdown in the Snapshots section of the AWS Management Console. This section will list both snapshots you own and snapshots that have been shared with you. /ec2/faqs/;How can I find what Amazon EBS snapshots are shared globally?;You can find snapshots that have been shared globally by selecting “Public Snapshots” from the viewing dropdown in the Snapshots section of the AWS Management Console. /ec2/faqs/;Do you offer encryption on Amazon EBS volumes and snapshots?;Yes. EBS offers seamless encryption of data volumes and snapshots. EBS encryption better enables you to meet security and encryption compliance requirements. /ec2/faqs/;How can I find a list of Amazon Public Data Sets?;All information on Public Data Sets is available in our Public Data Sets Resource Center. You can also obtain a listing of Public Data Sets within the AWS Management Console by choosing “Amazon Snapshots” from the viewing dropdown in the Snapshots section. /ec2/faqs/;Where can I learn more about EBS?;You can visit the Amazon EBS FAQ page. /ec2/faqs/;How do I access a file system from an Amazon EC2 instance?;To access your file system, you mount the file system on an Amazon EC2 Linux-based instance using the standard Linux mount command and the file system’s DNS name. Once you’ve mounted, you can work with the files and directories in your file system just like you would with a local file system. /ec2/faqs/;What Amazon EC2 instance types and AMIs work with Amazon EFS?;Amazon EFS is compatible with all Amazon EC2 instance types and is accessible from Linux-based AMIs. You can mix and match the instance types connected to a single file system. For a step-by-step example of how to access a file system from an Amazon EC2 instance, please see the Amazon EFS Getting Started guide. /ec2/faqs/;How do I load data into a file system?;You can load data into an Amazon EFS file system from your Amazon EC2 instances or from your on-premises datacenter servers. /ec2/faqs/;How do I access my file system from outside my VPC?;Amazon EC2 instances within your VPC can access your file system directly, and Amazon EC2 Classic instances outside your VPC can mount a file system via ClassicLink. On-premises servers can mount your file systems via an AWS Direct Connect connection to your VPC. /ec2/faqs/;How many Amazon EC2 instances can connect to a file system?;Amazon EFS supports one to thousands of Amazon EC2 instances connecting to a file system concurrently. /ec2/faqs/;Where can I learn more about EFS?;You can visit the Amazon EFS FAQ page. /ec2/faqs/;Is data stored on Amazon EC2 NVMe instance storage encrypted?;Yes, all data is encrypted in an AWS Nitro hardware module prior to being written on the locally attached SSDs offered via NVMe instance storage. /ec2/faqs/;What encryption algorithm is used to encrypt Amazon EC2 NVMe instance storage?;Amazon EC2 NVMe instance storage is encrypted using an XTS-AES-256 block cipher. 
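As a rough programmatic counterpart to the console's “Private Snapshots” view described above, the boto3 sketch below lists snapshots you own and snapshots other accounts have shared with you. The DescribeSnapshots parameters shown are standard, but the use of "self" and the output handling are illustrative assumptions rather than text taken from the FAQ.

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshots owned by this account.
owned = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
owned_ids = {s["SnapshotId"] for s in owned}

# Snapshots this account can create volumes from, which includes
# snapshots explicitly shared with the account by other owners.
restorable = ec2.describe_snapshots(RestorableByUserIds=["self"])["Snapshots"]

shared_with_me = [s for s in restorable if s["SnapshotId"] not in owned_ids]
for snap in shared_with_me:
    print(snap["SnapshotId"], snap["OwnerId"], snap.get("Description", ""))
```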
/ec2/faqs/;Are encryption keys unique to an instance or a particular device for NVMe instance storage?;Encryption keys are securely generated within the Nitro hardware module, and are unique to each NVMe instance storage device that is provided with an EC2 instance. /ec2/faqs/;What is the lifetime of encryption keys on NVMe instance storage?;All keys are irrecoverably destroyed on any de-allocation of the storage, including instance stop and instance terminate actions. /ec2/faqs/;Can I disable NVMe instance storage encryption?;No, NVMe instance storage encryption is always on, and cannot be disabled. /ec2/faqs/;Do the published IOPS performance numbers on I3 and I3en include data encryption?;Yes, the documented IOPS numbers for Im4gn, Is4gen, I4i, I3 and I3en NVMe instance storage include encryption. /ec2/faqs/;Does Amazon EC2 NVMe instance storage support AWS Key Management Service (KMS)?;No, disk encryption on NVMe instance storage does not support integration with the AWS KMS system. Customers cannot bring their own keys to use with NVMe instance storage. /ec2/faqs/;What is ENA Express?;ENA Express is an enhancement to the Elastic Network Adapter (ENA) that brings the Scalable Reliable Datagram (SRD) protocol to traditional TCP and UDP networking. Transparent to the application, ENA Express improves single-flow bandwidth and reduces tail latencies in throughput-intensive workloads. When configured, ENA Express works between any two supported instances in an Availability Zone (AZ). ENA Express detects compatibility between your EC2 instances and establishes an SRD connection when both communicating instances have ENA Express enabled. Once a connection is established, your traffic can take advantage of SRD and its performance benefits. /ec2/faqs/;When should I use ENA Express?;ENA Express works best for applications requiring high, single-flow throughput, like distributed storage systems and live media encoding. These workloads require high single-flow bandwidth and low tail latency. /ec2/faqs/;How do I enable ENA Express?;ENA Express can be enabled on a per-ENI basis, either while attaching a network interface to an instance or by running a modify command. ENA Express must be enabled on both communicating ENIs to establish point-to-point communication with it. Additionally, if you are using Jumbo Frames, you must adjust your maximum MTU to 8900 to use ENA Express. /ec2/faqs/;What protocols are supported by ENA Express?;ENA Express supports TCP by default. UDP can optionally be enabled through an API argument or within the management console. /ec2/faqs/;What instances are supported?;ENA Express is supported on c6gn.16xlarge. Support for more instance types and sizes will be added in the coming months. /ec2/faqs/;What is the difference between Elastic Fabric Adapter (EFA) and ENA Express?;EFA is a network interface built for HPC and ML applications, and it also leverages the SRD protocol. EFA requires a different network programming model, which uses the libfabric interface to pass communication to the ENI. Unlike EFA, ENA Express helps you run your application transparently on TCP and UDP. Additionally, ENA Express allows for intra-Availability Zone (AZ) communication, while EFA is currently limited to communication within the same subnet. /ec2/faqs/;Why should I use EFA?;EFA brings the scalability, flexibility, and elasticity of cloud to tightly-coupled HPC applications. 
With EFA, tightly-coupled HPC applications have access to lower and more consistent latency and higher throughput than traditional TCP channels, enabling them to scale better. EFA support can be enabled dynamically, on-demand on any supported EC2 instance without pre-reservation, giving you the flexibility to respond to changing business/workload priorities. /ec2/faqs/;What types of applications can benefit from using EFA?;High Performance Computing (HPC) applications distribute computational workloads across a cluster of instances for parallel processing. Examples of HPC applications include computational fluid dynamics (CFD), crash simulations, and weather simulations. HPC applications are generally written using the Message Passing Interface (MPI) and impose stringent requirements for inter-instance communication in terms of both latency and bandwidth. Applications using MPI and other HPC middleware that supports the libfabric communication stack can benefit from EFA. /ec2/faqs/;How does EFA communication work?;EFA devices provide all ENA devices' functionalities plus a new OS bypass hardware interface that allows user-space applications to communicate directly with the hardware-provided reliable transport functionality. Most applications will use existing middleware, such as the Message Passing Interface (MPI), to interface with EFA. AWS has worked with a number of middleware providers to ensure support for the OS bypass functionality of EFA. Please note that communication using the OS bypass functionality is limited to instances within a single subnet of a Virtual Private Cloud (VPC). /ec2/faqs/;Which instance types support EFA?;EFA is currently available on the following instance sizes: m7g.16xlarge, m7g.metal, m6a.48xlarge, m6i.32xlarge, m6i.metal, m6id.32xlarge, m6id.metal, m6idn.32xlarge, m6idn.metal, m6in.32xlarge, m6in.metal, m5n.24xlarge, m5dn.24xlarge, m5n.metal, m5dn.metal, r7g.16xlarge, r7g.metal, r6idn.32xlarge, r6idn.metal, r6in.32xlarge, r6in.metal, r6a.48xlarge, r6a.metal, r6i.32xlarge, r6i.metal, r6id.32xlarge, r6id.metal, r5n.24xlarge, r5dn.24xlarge, r5n.metal, r5dn.metal, x2idn.32xlarge, x2iedn.32xlarge, c7g.16xlarge, c7g.metal, c7gn.16xlarge, c6a.48xlarge, c6i.32xlarge, c6i.metal, c6id.32xlarge, c6id.metal, c6in.32xlarge, c6in.metal, c5n.18xlarge, c5n.metal, p3dn.24xlarge, i3en.24xlarge, i3en.metal, hpc6a.48xlarge, and hpc6i.32xlarge. /ec2/faqs/;What are the differences between an EFA ENI and an ENA ENI?;An ENA ENI provides traditional IP networking features necessary to support VPC networking. An EFA ENI provides all the functionality of an ENA ENI, plus hardware support for applications to communicate directly with the EFA ENI without involving the instance kernel (OS-bypass communication) using an extended programming interface. Due to the advanced capabilities of the EFA ENI, EFA ENIs can only be attached at launch or to stopped instances. /ec2/faqs/;What are the pre-requisites to enabling EFA on an instance?;EFA support can be enabled either at the launch of the instance or added to a stopped instance. EFA devices cannot be attached to a running instance. /ec2/faqs/;What networking capabilities are included in this feature?;We currently support enhanced networking capabilities using SR-IOV (Single Root I/O Virtualization). SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization compared to traditional implementations. 
For supported Amazon EC2 instances, this feature provides higher packets per second (PPS) performance, lower inter-instance latencies, and very low network jitter. /ec2/faqs/;Why should I use Enhanced Networking?;If your applications benefit from high packet-per-second performance and/or low latency networking, Enhanced Networking will provide significantly improved performance, consistency of performance, and scalability. /ec2/faqs/;How can I enable Enhanced Networking on supported instances?;In order to enable this feature, you must launch an HVM AMI with the appropriate drivers. The instances listed as current generation use ENA for enhanced networking. The Amazon Linux AMI includes these drivers by default. For AMIs that do not contain these drivers, you will need to download and install the appropriate drivers based on the instance types you plan to use. You can use Linux or Windows instructions to enable Enhanced Networking in AMIs that do not include the SR-IOV driver by default. Enhanced Networking is only supported in Amazon VPC. /ec2/faqs/;Do I need to pay an additional fee to use Enhanced Networking?;No, there is no additional fee for Enhanced Networking. To take advantage of Enhanced Networking you need to launch the appropriate AMI on a supported instance type in a VPC. /ec2/faqs/;Why is Enhanced Networking only supported in Amazon VPC?;Amazon VPC allows us to deliver many advanced networking features to you that are not possible in EC2-Classic. Enhanced Networking is another example of a capability enabled by Amazon VPC. /ec2/faqs/;Which instance types support Enhanced Networking?;Depending on your instance type, you can enable enhanced networking by using one of the following mechanisms: /ec2/faqs/;What load balancing options does the Elastic Load Balancing service offer?;Elastic Load Balancing offers two types of load balancers that both feature high availability, automatic scaling, and robust security. These include the Classic Load Balancer that routes traffic based on either application or network level information, and the Application Load Balancer that routes traffic based on advanced application level information that includes the content of the request. /ec2/faqs/;When should I use the Classic Load Balancer and when should I use the Application Load Balancer?;The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while the Application Load Balancer is ideal for applications needing advanced routing capabilities, microservices, and container-based architectures. Please visit Elastic Load Balancing for more information. /ec2/faqs/;Why am I limited to 5 Elastic IP addresses per region?;Public (IPv4) internet addresses are a scarce resource. There is only a limited amount of public IP space available, and Amazon EC2 is committed to helping use that space efficiently. /ec2/faqs/;Why am I charged when my Elastic IP address is not associated with a running instance?;In order to help ensure our customers are efficiently using the Elastic IP addresses, we impose a small hourly charge for each address when it is not associated with a running instance. /ec2/faqs/;Do I need one Elastic IP address for every instance that I have running?;No. You do not need an Elastic IP address for all your instances. By default, every instance comes with a private IP address and an internet routable public IP address. 
The private IP address remains associated with the network interface when the instance is stopped and restarted, and is released when the instance is terminated. The public address is associated exclusively with the instance until it is stopped, terminated or replaced with an Elastic IP address. These IP addresses should be adequate for many applications where you do not need a long lived internet routable end point. Compute clusters, web crawling, and backend services are all examples of applications that typically do not require Elastic IP addresses. /ec2/faqs/;How long does it take to remap an Elastic IP address?;The remap process currently takes several minutes from when you instruct us to remap the Elastic IP until it fully propagates through our system. /ec2/faqs/;Can I configure the reverse DNS record for my Elastic IP address?;All Elastic IP addresses come with reverse DNS, in a standard template of the form ec2-1-2-3-4.region.compute.amazonaws.com. For customers requiring custom reverse DNS settings for internet-facing applications that use IP-based mutual authentication (such as sending email from EC2 instances), you can configure the reverse DNS record of your Elastic IP address by filling out this form. Alternatively, please contact AWS Customer Support if you want AWS to delegate the management of the reverse DNS for your Elastic IPs to your authoritative DNS name servers (such as Amazon Route 53), so that you can manage your own reverse DNS PTR records to support these use-cases. Note that a corresponding forward DNS record pointing to that Elastic IP address must exist before we can create the reverse DNS record. /ec2/faqs/;How do I prevent other people from viewing my systems?;You have complete control over the visibility of your systems. The Amazon EC2 security systems allow you to place your running instances into arbitrary groups of your choice. Using the web services interface, you can then specify which groups may communicate with which other groups, and also which IP subnets on the Internet may talk to which groups. This allows you to control access to your instances in our highly dynamic environment. Of course, you should also secure your instance as you would any other server. /ec2/faqs/;Can I get a history of all EC2 API calls made on my account for security analysis and operational troubleshooting purposes?;Yes. To receive a history of all EC2 API calls (including VPC and EBS) made on your account, you simply turn on CloudTrail in the AWS Management Console. For more information, visit the CloudTrail home page. /ec2/faqs/;Where can I find more information about security on AWS?;For more information on security on AWS please refer to our Amazon Web Services: Overview of Security Processes white paper and to our Amazon EC2 running Windows Security Guide. /ec2/faqs/;What is the minimum time interval granularity for the data that Amazon CloudWatch receives and aggregates?;Metrics are received and aggregated at 1 minute intervals. /ec2/faqs/;Which operating systems does Amazon CloudWatch support?;Amazon CloudWatch receives and provides metrics for all Amazon EC2 instances and should work with any operating system currently supported by the Amazon EC2 service. /ec2/faqs/;Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?;You can retrieve metrics data for any Amazon EC2 instance up to 2 weeks from the time you started to monitor it. 
After 2 weeks, metrics data for an Amazon EC2 instance will not be available if monitoring was disabled for that Amazon EC2 instance. If you want to archive metrics beyond 2 weeks, you can do so by calling the mon-get-stats command from the command line and storing the results in Amazon S3 or Amazon SimpleDB. /ec2/faqs/;Can I access the metrics data for a terminated Amazon EC2 instance or a deleted Elastic Load Balancer?;Yes. Amazon CloudWatch stores metrics for terminated Amazon EC2 instances or deleted Elastic Load Balancers for 2 weeks. /ec2/faqs/;Does the Amazon CloudWatch monitoring charge change depending on which type of Amazon EC2 instance I monitor?;No, the Amazon CloudWatch monitoring charge does not vary by Amazon EC2 instance type. /ec2/faqs/;Why does the graphing of the same time window look different when I view in 5 minute and 1 minute periods?;If you view the same time window in a 5 minute period versus a 1 minute period, you may see that data points are displayed in different places on the graph. For the period you specify in your graph, Amazon CloudWatch will find all the available data points and calculate a single, aggregated point to represent the entire period. In the case of a 5 minute period, the single data point is placed at the beginning of the 5 minute time window. In the case of a 1 minute period, the single data point is placed at the 1 minute mark. We recommend using a 1 minute period for troubleshooting and other activities that require the most precise graphing of time periods. /ec2/faqs/;Can I automatically scale Amazon EC2 Auto Scaling Groups?;Yes. Amazon EC2 Auto Scaling is a fully managed service designed to launch or terminate Amazon EC2 instances automatically to help ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. EC2 Auto Scaling helps you maintain application availability through fleet management for EC2 instances, which detects and replaces unhealthy instances, and by scaling your Amazon EC2 capacity up or down automatically according to conditions you define. You can use EC2 Auto Scaling to automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs. /ec2/faqs/;Why should I hibernate an instance?;You can hibernate an instance to get your instance and applications up and running quickly, if they take a long time to bootstrap (e.g. load memory caches). You can start instances, bring them to a desired state and hibernate them. These “pre-warmed” instances can then be resumed to reduce the time it takes for an instance to return to service. Hibernation retains memory state across Stop/Start cycles. /ec2/faqs/;What happens when I hibernate my instance?;When you hibernate an instance, data from your EBS root volume and any attached EBS data volumes is persisted. Additionally, contents from the instance’s memory (RAM) are persisted to the EBS root volume. When the instance is restarted, it returns to its previous state and reloads the RAM contents. /ec2/faqs/;What is the difference between hibernate and stop?;In the case of hibernate, your instance gets hibernated and the RAM data is persisted. In the case of Stop, your instance gets shut down and RAM is cleared. /ec2/faqs/;How much does it cost to hibernate an instance?;Hibernating instances are charged at standard EBS rates for storage. As with a stopped instance, you do not incur instance usage fees while an instance is hibernating. 
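The mon-get-stats tool mentioned above is the legacy CloudWatch command line; a present-day way to pull the same per-instance metrics before they age out (for example to archive them to Amazon S3) is the GetMetricStatistics API. The boto3 sketch below is illustrative only; the instance ID, bucket name, and 5-minute period are placeholder choices, not values from the FAQ.

```python
import json
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")
s3 = boto3.client("s3")

instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# Pull up to two weeks of CPU utilization at 5-minute granularity.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)

# Archive the datapoints to S3 so they survive beyond the retention window.
s3.put_object(
    Bucket="my-metrics-archive",      # placeholder bucket name
    Key=f"cloudwatch/{instance_id}/cpu.json",
    Body=json.dumps(stats["Datapoints"], default=str),
)
```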
/ec2/faqs/;How can I hibernate an instance?;Hibernation needs to be enabled when you launch the instance. Once enabled, you can use the StopInstances API with an additional ‘Hibernate’ parameter to trigger hibernation. You can also do this through the console by selecting your instance, then clicking Actions> Instance State > Stop - Hibernate. For more information on using hibernation, refer to the user guide. /ec2/faqs/;How can I resume a hibernating instance?;You can resume by calling the StartInstances API as you would for a regular stopped instance. You can also do this through the console by selecting your instance, then clicking Actions > Instance State > Start. /ec2/faqs/;Can I enable hibernation on an existing instance?;No, you cannot enable hibernation on an existing instance (running or stopped). This needs to be enabled during instance launch. /ec2/faqs/;How can I tell that an instance is hibernated?;You can tell that an instance is hibernated by looking at the state reason. It should be ‘Client.UserInitiatedHibernate’. This is visible on the console under “Instances - Details” view or in the DescribeInstances API response as the “reason” field. /ec2/faqs/;What is the state of an instance when it is hibernating?;Hibernated instances are in ‘Stopped’ state. /ec2/faqs/;What data is saved when I hibernate an instance?;EBS volume storage (boot volume and attached data volumes) and memory (RAM) are saved. Your private IP address remains the same (for VPC), as does your elastic IP address (if applicable). The network layer behavior will be similar to that of EC2 Stop-Start workflow. /ec2/faqs/;Where is my data stored when I hibernate an instance?;As with the Stop feature, root device and attached device data are stored on the corresponding EBS volumes. Memory (RAM) contents are stored on the EBS root volume. /ec2/faqs/;Is my memory (RAM) data encrypted when it is moved to EBS?;Yes, RAM data is always encrypted when it is moved to the EBS root volume. Encryption on the EBS root volume is enforced at instance launch time. This is to ensure protection for any sensitive content that is in memory at the time of hibernation. /ec2/faqs/;How long can I keep my instance hibernated?;We do not support keeping an instance hibernated for more than 60 days. You need to resume the instance and go through Stop and Start (without hibernation) if you wish to keep the instance around for a longer duration. We are constantly working to keep our platform up-to-date with upgrades and security patches, some of which can conflict with the old hibernated instances. We will notify you for critical updates that require you to resume the hibernated instance to perform a shutdown or a reboot. /ec2/faqs/;What are the prerequisites to hibernate an instance?;To use hibernation, the root volume must be an encrypted EBS volume. The instance needs to be configured to receive the ACPID signal for hibernation (or use the Amazon published AMIs that are configured for hibernation). Additionally, your instance should have sufficient space available on your EBS root volume to write data from memory. /ec2/faqs/;Which instances and operating systems support hibernation?;For instances running Amazon Linux, Amazon Linux 2, Ubuntu, and Windows, Hibernation is supported across C3, C4, C5, C5d, I3, M3, M4, M5, M5a, M5ad, M5d, R3, R4, R5, R5a, R5ad, R5d, T2, T3, and T3a instances. 
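The hibernation workflow in the preceding answers maps onto a few EC2 API calls: enable hibernation at launch, stop with the Hibernate parameter, and start again as usual. The boto3 sketch below assumes an AMI and instance type that meet the hibernation prerequisites (encrypted EBS root volume, supported instance family); all IDs and sizes are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch; it cannot be turned on later.
run = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder hibernation-capable AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 50, "VolumeType": "gp3", "Encrypted": True},
    }],
)
instance_id = run["Instances"][0]["InstanceId"]

# Hibernate instead of a plain stop; RAM is written to the encrypted root volume.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Confirm the stop was a hibernation via the state reason.
desc = ec2.describe_instances(InstanceIds=[instance_id])
print(desc["Reservations"][0]["Instances"][0].get("StateReason"))

# Resume later exactly like a stopped instance.
ec2.start_instances(InstanceIds=[instance_id])
```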
/ec2/faqs/;Should I use specific Amazon Machine Images (AMIs) if I want to hibernate my instance?;You can use any AMI that is configured to support hibernation. You can use AWS published AMIs that support hibernation by default. Alternatively, you can create a custom image from an instance after following the hibernation pre-requisite checklist and configuring your instance appropriately. /ec2/faqs/;What if my EBS root volume is not large enough to store memory state (RAM) for hibernation?;To enable hibernation, space is allocated on the root volume to store the instance memory (RAM). Make sure that the root volume is large enough to store the RAM contents and accommodate your expected usage, e.g. OS, applications. If the EBS root volume does not have enough space, hibernation will fail and the instance will get shut down instead. /ec2/faqs/;What is VM Import/Export?;VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2. /ec2/faqs/;What operating systems are supported?;VM Import/Export currently supports Windows and Linux VMs, including Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2012 R1, Red Hat Enterprise Linux (RHEL) 5.1-6.5 (using Cloud Access), CentOS 5.1-6.5, Ubuntu 12.04, 12.10, 13.04, 13.10, and Debian 6.0.0-6.0.8, 7.0.0-7.2.0. For more details on VM Import, including supported file formats, architectures, and operating system configurations, please see the VM Import/Export section of the Amazon EC2 User Guide. /ec2/faqs/;What virtual machine file formats are supported?;You can import VMware ESX VMDK images, Citrix Xen VHD images, Microsoft Hyper-V VHD images and RAW images as Amazon EC2 instances. You can export EC2 instances to VMware ESX VMDK, VMware ESX OVA, Microsoft Hyper-V VHD or Citrix Xen VHD images. For a full list of supported operating systems, please see What operating systems are supported? /ec2/faqs/;What is VMDK?;VMDK is a file format that specifies a virtual machine hard disk encapsulated within a single file. It is typically used by virtual IT infrastructures such as those sold by VMware, Inc. /ec2/faqs/;How do I prepare a VMDK file for import using the VMware vSphere client?;The VMDK file can be prepared by selecting File > Export > Export to OVF Template in the VMware vSphere Client. The resulting VMDK file is compressed to reduce the image size and is compatible with VM Import/Export. No special preparation is required if you are using the Amazon EC2 VM Import Connector vApp for VMware vCenter. /ec2/faqs/;What is VHD?;VHD (Virtual Hard Disk) is a file format that specifies a virtual machine hard disk encapsulated within a single file. The VHD image format is used by virtualization platforms such as Microsoft Hyper-V and Citrix Xen. /ec2/faqs/;How do I prepare a VHD file for import from Citrix Xen?;"Open Citrix XenCenter and select the virtual machine you want to export. Under the Tools menu, choose ""Virtual Appliance Tools"" and select ""Export Appliance"" to initiate the export task. When the export completes, you can locate the VHD image file in the destination directory you specified in the export dialog." /ec2/faqs/;How do I prepare a VHD file for import from Microsoft Hyper-V?;"Open the Hyper-V Manager and select the virtual machine you want to export. 
In the Actions pane for the virtual machine, select ""Export"" to initiate the export task. Once the export completes, you can locate the VHD image file in the destination directory you specified in the export dialog." /ec2/faqs/;Are there any other requirements when importing a VM into Amazon EC2?;The virtual machine must be in a stopped state before generating the VMDK or VHD image. The VM cannot be in a paused or suspended state. We suggest that you export the virtual machine with only the boot volume attached. You can import additional disks using the ImportVolume command and attach them to the virtual machine using AttachVolume. Additionally, encrypted disks (e.g. Bit Locker) and encrypted image files are not supported. You are also responsible for ensuring that you have all necessary rights and licenses to import into AWS and run any software included in your VM image. /ec2/faqs/;Does the virtual machine need to be configured in any particular manner to enable import to Amazon EC2?;Ensure Remote Desktop (RDP) or Secure Shell (SSH) is enabled for remote access and verify that your host firewall (Windows firewall, iptables, or similar), if configured, allows access to RDP or SSH. Otherwise, you will not be able to access your instance after the import is complete. Please also ensure that Windows VMs are configured to use strong passwords for all users including the administrator and that Linux VMs are configured with a public key for SSH access. /ec2/faqs/;How do I import a virtual machine to an Amazon EC2 instance?;You can import your VM images using the Amazon EC2 API tools: /ec2/faqs/;How do I export an Amazon EC2 instance back to my on-premise virtualization environment?;You can export your Amazon EC2 instance using the Amazon EC2 CLI tools: /ec2/faqs/;Are there any other requirements when exporting an EC2 instance using VM Import/Export?;You can export running or stopped EC2 instances that you previously imported using VM Import/Export. If the instance is running, it will be momentarily stopped to snapshot the boot volume. EBS data volumes cannot be exported. EC2 instances with more than one network interface cannot be exported. /ec2/faqs/;Can I export Amazon EC2 instances that have one or more EBS data volumes attached?;Yes, but VM Import/Export will only export the boot volume of the EC2 instance. /ec2/faqs/;What does it cost to import a virtual machine?;You will be charged standard Amazon S3 data transfer and storage fees for uploading and storing your VM image file. Once your VM is imported, standard Amazon EC2 instance hour and EBS service fees apply. If you no longer wish to store your VM image file in S3 after the import process completes, use the ec2-delete-disk-image command line tool to delete your disk image from Amazon S3. /ec2/faqs/;What does it cost to export a virtual machine?;You will be charged standard Amazon S3 storage fees for storing your exported VM image file. You will also be charged standard S3 data transfer charges when you download the exported VM file to your on-premise virtualization environment. Finally, you will be charged standard EBS charges for storing a temporary snapshot of your EC2 instance. To minimize storage charges, delete the VM image file in S3 after downloading it to your virtualization environment. 
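For the import flow referenced above ("You can import your VM images using the Amazon EC2 API tools"), a common modern path is to upload the disk image to Amazon S3 and call ImportImage. The boto3 sketch below is a simplified illustration; the bucket name, key, and description are placeholders, and the required vmimport service role and licensing settings are not shown.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumes the VMDK/VHD has already been uploaded to S3 and the
# 'vmimport' service role exists in the account (setup not shown).
task = ec2.import_image(
    Description="Imported on-premises web server",    # placeholder description
    DiskContainers=[{
        "Description": "Boot disk",
        "Format": "vmdk",
        "UserBucket": {
            "S3Bucket": "my-import-bucket",            # placeholder bucket
            "S3Key": "images/webserver.vmdk",          # placeholder key
        },
    }],
)

# Poll the import task; once it completes, launch instances from the resulting AMI.
status = ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])
print(status["ImportImageTasks"][0]["Status"])
```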
/ec2/faqs/;When I import a VM of Windows Server 2003 or 2008, who is responsible for supplying the operating system license?;When you launch an imported VM using Microsoft Windows Server 2003 or 2008, you will be charged standard instance hour rates for Amazon EC2 running the appropriate Windows Server version, which includes the right to utilize that operating system within Amazon EC2. You are responsible for ensuring that all other installed software is properly licensed. /ec2/faqs/;Can I continue to use the AWS-provided Microsoft Windows license key after exporting an EC2 instance back to my on-premise virtualization environment?;No. After an EC2 instance has been exported, the license key utilized in the EC2 instance is no longer available. You will need to reactivate and specify a new license key for the exported VM after it is launched in your on-premise virtualization platform. /ec2/faqs/;When I import a VM with Red Hat Enterprise Linux (RHEL), who is responsible for supplying the operating system license?;When you import Red Hat Enterprise Linux (RHEL) VM images, you can use license portability for your RHEL instances. With license portability, you are responsible for maintaining the RHEL licenses for imported instances, which you can do using Cloud Access subscriptions for Red Hat Enterprise Linux. Please contact Red Hat to learn more about Cloud Access and to verify your eligibility. /ec2/faqs/;How long does it take to import a virtual machine?;The length of time to import a virtual machine depends on the size of the disk image and your network connection speed. As an example, a 10 GB Windows Server 2008 SP2 VMDK image takes approximately 2 hours to import when it’s transferred over a 10 Mbps network connection. If you have a slower network connection or a large disk to upload, your import may take significantly longer. /ec2/faqs/;In which Amazon EC2 regions can I use VM Import/Export?;Visit the Region Table page to see product service availability by region. /ec2/faqs/;How many simultaneous import or export tasks can I have?;Each account can have up to five active import tasks and five export tasks per region. /ec2/faqs/;Can I run imported virtual machines in Amazon Virtual Private Cloud (VPC)?;Yes, you can launch imported virtual machines within Amazon VPC. /ec2/faqs/;Can I use the AWS Management Console with VM Import/Export?;No. VM Import/Export commands are available via EC2 CLI and API. You can also use the AWS Management Portal for vCenter to import VMs into Amazon EC2. Once imported, the resulting instances are available for use via the AWS Management Console. /ec2/faqs/;How will I be charged and billed for my use of Amazon EC2?;You pay only for what you use. Displayed pricing is an hourly rate but depending on which instances you choose, you pay by the hour or second (minimum of 60 seconds) for each instance type. Partial instance-hours consumed are billed based on instance usage. Data transferred between AWS services in different regions is charged at standard inter-region data transfer rates. Usage for other Amazon Web Services is billed separately from Amazon EC2. /ec2/faqs/;When does billing of my Amazon EC2 systems begin and end?;"Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running ""shutdown -h"", or through instance failure. 
When you stop an instance, we shut it down but don't charge hourly usage for a stopped instance, or data transfer fees, but we do charge for the storage for any Amazon EBS volumes. To learn more, visit the AWS Documentation." /ec2/faqs/;What defines billable EC2 instance usage?;"Instance usages are billed for any time your instances are in a ""running"" state. If you no longer wish to be charged for your instance, you must ""stop"" or ""terminate"" the instance to avoid being billed for additional instance usage. Billing starts when an instance transitions into the running state." /ec2/faqs/;If I have two instances in different availability zones, how will I be charged for regional data transfer?;"Each instance is charged for its data in and data out at corresponding Data Transfer rates. Therefore, if data is transferred between these two instances, it is charged at ""Data Transfer Out from EC2 to Another AWS Region"" for the first instance and at ""Data Transfer In from Another AWS Region"" for the second instance. Please refer to this page for detailed data transfer pricing." /ec2/faqs/;If I have two instances in different regions, how will I be charged for data transfer?;Each instance is charged for its data in and data out at Inter-Region Data Transfer rates. Therefore, if data is transferred between these two instances, it is charged at Inter-Region Data Transfer Out for the first instance and at Inter-Region Data Transfer In for the second instance. /ec2/faqs/;How will my monthly bill show per-second versus per-hour?;Although EC2 charges in your monthly bill will now be calculated based on a per second basis, for consistency, the monthly EC2 bill will show cumulative usage for each instance that ran in a given month in decimal hours. For example, an instance running for 1 hour 10 minutes and 4 seconds would look like 1.1677. Read this blog for an example of the detailed billing report. /ec2/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /ec2/faqs/;What is a Convertible RI?;A Convertible RI is a type of Reserved Instance with attributes that can be changed during the term. /ec2/faqs/;When should I purchase a Convertible RI instead of a Standard RI?;The Convertible RI is useful for customers who can commit to using EC2 instances for a three-year term in exchange for a significant discount on their EC2 usage, are uncertain about their instance needs in the future, or want to benefit from changes in price. /ec2/faqs/;What term length options are available on Convertible RIs?;Like Standard RIs, Convertible RIs are available for purchase for a one-year or three-year term. /ec2/faqs/;Can I exchange my Convertible RI to benefit from a Convertible RI matching a different instance type, operating system, tenancy, or payment option?;Yes, you can select a new instance type, operating system, tenancy, or payment option when you exchange your Convertible RIs. You also have the flexibility to exchange a portion of your Convertible RI or merge the value of multiple Convertible RIs in a single exchange. /ec2/faqs/;Can I transfer a Convertible or Standard RI from one region to another?;No, an RI is associated with a specific region, which is fixed for the duration of the reservation's term. 
/ec2/faqs/;How do I change the configuration of a Convertible RI?;You can change the configuration of your Convertible RI using the EC2 Management Console or the GetReservedInstancesExchangeQuote API. You also have the flexibility to exchange a portion of your Convertible RI or merge the value of multiple Convertible RIs in a single exchange. Click here to learn more about exchanging Convertible RIs. /ec2/faqs/;Do I need to pay a fee when I exchange my Convertible RIs?;No, you do not pay a fee when you exchange your RIs. However, you may need to pay a one-time true-up charge that accounts for differences in pricing between the Convertible RIs that you have and the Convertible RIs that you want. /ec2/faqs/;How do Convertible RI exchanges work?;When you exchange one Convertible RI for another, EC2 ensures that the total value of the Convertible RIs is maintained through a conversion. So, if you are converting your RI with a total value of $1000 for another RI, you will receive a quantity of Convertible RIs with a value that’s equal to or greater than $1000. You cannot convert your Convertible RI for Convertible RI(s) of a lesser total value. /ec2/faqs/;Can you define total value?;The total value is the sum of all expected payments that you’d make during the term for the RI. /ec2/faqs/;Can you walk me through how the true-up cost is calculated for a conversion between two All Upfront Convertible RIs?;Sure, let’s say you purchased an All Upfront Convertible RI for $1000 upfront, and halfway through the term you decide to change the attributes of the RI. Since you’re halfway through the RI term, you have $500 left of prorated value remaining on the RI. The All Upfront Convertible RI that you want to convert into costs $1,200 upfront today. Since you only have half of the term left on your existing Convertible RI, there is $600 of value remaining on the desired new Convertible RI. The true-up charge that you’ll pay will be the difference in upfront value between original and desired Convertible RIs, or $100 ($600 - $500). /ec2/faqs/;Can you walk me through a conversion between No Upfront Convertible RIs?;Unlike conversions between Convertible RIs with an upfront value, since you’re converting between RIs without an upfront cost, there will not be a true-up charge. However, the amount you pay on an hourly basis before the exchange will need to be greater than or equal to the amount you pay on a total hourly basis after the exchange. /ec2/faqs/;Can I customize the number of instances that I receive as a result of a Convertible RI exchange?;No, EC2 uses the value of the Convertible RIs you’re trading in to calculate the minimum number of Convertible RIs you’ll receive while ensuring the result of the exchange gives you Convertible RIs of equal or greater value. /ec2/faqs/;Are there exchange limits for Convertible RIs?;No, there are no exchange limits for Convertible RIs. /ec2/faqs/;Do I have the freedom to choose any instance type when I exchange my Convertible RIs?;No, you can only exchange into Convertible RIs that are currently offered by AWS. /ec2/faqs/;Can I upgrade the payment option associated with my Convertible RI?;Yes, you can upgrade the payment option associated with your RI. For example, you can exchange your No Upfront RIs for Partial or All Upfront RIs to benefit from better pricing. You cannot change the payment option from All Upfront to No Upfront, and cannot change from Partial Upfront to No Upfront. 
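The GetReservedInstancesExchangeQuote API mentioned above surfaces the same true-up math described in these answers before you commit to an exchange. A hedged boto3 sketch, with placeholder reservation and target offering IDs:

```python
import boto3

ec2 = boto3.client("ec2")

reservation_ids = ["11111111-2222-3333-4444-555555555555"]   # placeholder Convertible RI ID
target = [{"OfferingId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "InstanceCount": 2}]  # placeholder offering

# Preview the exchange: validity, remaining value, and any true-up payment due.
quote = ec2.get_reserved_instances_exchange_quote(
    ReservedInstanceIds=reservation_ids,
    TargetConfigurations=target,
)
print(quote["IsValidExchange"], quote.get("PaymentDue"))

# If the quote looks right, perform the exchange.
if quote["IsValidExchange"]:
    ec2.accept_reserved_instances_exchange_quote(
        ReservedInstanceIds=reservation_ids,
        TargetConfigurations=target,
    )
```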
/ec2/faqs/;Do Convertible RIs allow me to benefit from price reductions when they happen?;Yes, you can exchange your RIs to benefit from lower pricing. For example, if the price of new Convertible RIs reduces by 10%, you can exchange your Convertible RIs and benefit from the 10% reduction in price. /ec2/faqs/;What is Amazon EC2 Fleet?;With a single API call, EC2 Fleet lets you provision compute capacity across different instance types, Availability Zones and across On-Demand, Reserved Instances (RI) and Spot Instances purchase models to help optimize scale, performance and cost. /ec2/faqs/;If I currently use Amazon EC2 Spot Fleet should I migrate to Amazon EC2 Fleet?;If you are leveraging Amazon EC2 Spot Instances with Spot Fleet, you can continue to use that. Spot Fleet and EC2 Fleet offer the same functionality. There is no requirement to migrate. /ec2/faqs/;Can I use Reserved Instance (RI) discounts with Amazon EC2 Fleet?;Yes. Similar to other EC2 APIs or other AWS services that launch EC2 instances, if the On-Demand instance launched by EC2 Fleet matches an existing RI, that instance will receive the RI discount. For example, if you own Regional RIs for M4 instances and you have specified only M4 instances in your EC2 Fleet, RI discounts will be automatically applied to this usage of M4. /ec2/faqs/;Will Amazon EC2 Fleet failover to On-Demand if EC2 Spot capacity is not fully fulfilled?;No, EC2 Fleet will continue to attempt to meet your desired Spot capacity based on the number of Spot instances you requested in your Fleet launch specification. /ec2/faqs/;What is the pricing for Amazon EC2 Fleet?;"EC2 Fleet comes at no additional charge; you only pay for the underlying resources that EC2 Fleet launches." /ec2/faqs/;Can you provide a real world example of how I can use Amazon EC2 Fleet?;There are a number of ways to take advantage of Amazon EC2 Fleet, such as in big data workloads, containerized application, grid processing workloads, etc. In this example of a genomic sequencing workload, you can launch a grid of worker nodes with a single API call: select your favorite instances, assign weights for these instances, specify target capacity for On-Demand and Spot Instances, and build a fleet within seconds to crunch through genomic data quickly. /ec2/faqs/;How can I allocate resources in an Amazon EC2 Fleet?;By default, EC2 Fleet will launch the On-Demand option that is the lowest price. For Spot Instances, EC2 Fleet provides three allocation strategies: capacity-optimized, lowest price and diversified. The capacity-optimized allocation strategy attempts to provision Spot Instances from the most available Spot Instance pools by analyzing capacity metrics. This strategy is a good choice for workloads that have a higher cost of interruption such as big data and analytics, image and media rendering, machine learning, and high performance computing. /ec2/faqs/;Can I submit a multi-region Amazon EC2 Fleet request?;No, we do not support multi-region EC2 Fleet requests. /ec2/faqs/;Can I tag an Amazon EC2 Fleet?;Yes. You can tag an EC2 Fleet request to create business-relevant tag groupings to organize resources along technical, business, and security dimensions. /ec2/faqs/;Can I modify my Amazon EC2 Fleet?;Yes, you can modify the total target capacity of your EC2 Fleet when in maintain mode. You may need to cancel the request and submit a new one to change other request configuration parameters. 
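To illustrate the single-API-call provisioning and the maintain-mode capacity change described in the EC2 Fleet answers above, here is a hedged boto3 sketch of CreateFleet and ModifyFleet. The launch template ID, instance types, weights, and capacity numbers are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Provision a mixed On-Demand/Spot fleet across several instance types.
fleet = ec2.create_fleet(
    Type="maintain",
    LaunchTemplateConfigs=[{
        "LaunchTemplateSpecification": {
            "LaunchTemplateId": "lt-0123456789abcdef0",   # placeholder launch template
            "Version": "1",
        },
        "Overrides": [
            {"InstanceType": "m5.large", "WeightedCapacity": 1.0},
            {"InstanceType": "m5.xlarge", "WeightedCapacity": 2.0},
        ],
    }],
    TargetCapacitySpecification={
        "TotalTargetCapacity": 10,
        "OnDemandTargetCapacity": 2,
        "SpotTargetCapacity": 8,
        "DefaultTargetCapacityType": "spot",
    },
    SpotOptions={"AllocationStrategy": "capacity-optimized"},
)

# In maintain mode, the total target capacity can be changed later.
ec2.modify_fleet(
    FleetId=fleet["FleetId"],
    TargetCapacitySpecification={"TotalTargetCapacity": 20},
)
```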
/ec2/faqs/;Can I specify a different AMI for each instance type that I want to use?;Yes, simply specify the AMI you’d like to use in each launch specification you provide in your EC2 Fleet. /ec2/faqs/;How much do Capacity Reservations cost?;When the Capacity Reservation is active, you will pay equivalent instance charges whether you run the instances or not. If you do not use the reservation, the charge will show up as unused reservation on your EC2 bill. When you run an instance that matches the attributes of a reservation, you just pay for the instance and nothing for the reservation. There are no upfront or additional charges. /ec2/faqs/;Can I get a discount for Capacity Reservation usage?;Yes. Savings Plans or Regional RI (RI scoped to a region) discounts apply to Capacity Reservations. When you are running an instance within your reservation, you are not charged for the reservation. Savings Plans or Regional RIs will apply to this usage as if it were On-Demand usage. When the reservation is not used, AWS Billing will automatically apply your discount when the attributes of the unused Capacity Reservation match the attributes of an active Savings Plan or Regional RI. /ec2/faqs/;When should I use Savings Plans, EC2 RIs, and Capacity Reservations?;Use Savings Plans or Regional RIs to reduce your bill while committing to a one- or three-year term. Savings Plans offer significant savings over On Demand, just like EC2 RIs, but automatically reduce customers’ bills on compute usage across any AWS region, even as usage changes. Use Capacity Reservations if you need the additional confidence in your ability to launch instances. Capacity Reservations can be created for any duration and can be managed independently of your Savings Plans or RIs. If you have Savings Plans or Regional RIs, they will automatically apply to matching Capacity Reservations. This gives you the flexibility to selectively add Capacity Reservations to a portion of your instance footprint and still reduce your bill for that usage. /ec2/faqs/;I have a Zonal RI (RI scoped to an Availability Zone) that also provides a capacity reservation. How does this compare with a Capacity Reservation?;A Zonal RI provides both a discount and a capacity reservation in a specific Availability Zone in return for a 1-to-3 year commitment. Capacity Reservation allows you to create and manage reserved capacity independently of your RI commitment and term length. /ec2/faqs/;I created a Capacity Reservation. How can I use it?;A Capacity Reservation is tied to a specific Availability Zone and, by default automatically utilized by running instances in that Availability Zone. When you launch new instances that match the reservation attributes, they will automatically match to the reservation. /ec2/faqs/;How many instances am I allowed to reserve?;The number of instances you are allowed to reserve is based on your account's On-Demand instance limit. You can reserve as many instances as that limit allows, minus the number of instances that are already running. /ec2/faqs/;Can I modify a Capacity Reservation after it has started?;Yes. You can reduce the number of instances you reserved at any time. You can also increase the number of instances (subject to availability). You can also modify the end time of your reservation. You cannot modify a Capacity Reservation that has ended or has been deleted. /ec2/faqs/;Can I end a Capacity Reservation after it has started?;Yes. 
You can end a Capacity Reservation by canceling it using the console or API/SDK, or by modifying your reservation to specify an end time that makes it expire automatically. Running instances are unaffected by changes to your Capacity Reservation including deletion or expiration of a reservation. /ec2/faqs/;Where can I find more information about using Capacity Reservations?;Refer to the Linux or Windows technical documentation to learn about creating and using a Capacity Reservation. /ec2/faqs/;Can I share a Capacity Reservation with another AWS Account?;Yes, you can share Capacity Reservations with other AWS accounts or within your AWS Organization via the AWS Resource Access Manager service. You can share EC2 Capacity Reservations in three easy steps: create a Resource Share using AWS Resource Access Manager, add resources (Capacity Reservations) to the Resource Share, and specify the target accounts that you wish to share the resources with. /ec2/faqs/;What happens when I share a Capacity Reservation with another AWS account?;When a Capacity Reservation is shared with other accounts, those accounts can consume the reserved capacity to run their EC2 Instances. The exact behavior depends on the preferences set on the Capacity Reservation. By default, Capacity Reservations automatically match existing and new instances from other accounts that have shared access to the reservation. You can also target a Capacity Reservation for specific workloads/instances. Individual accounts can control which of their instances consume Capacity Reservations. Refer to the Linux or Windows technical documentation to learn more about the instance matching options. /ec2/faqs/;Is there an additional charge for sharing a reservation?;There is no additional charge for sharing a reservation. /ec2/faqs/;Who gets charged when a Capacity Reservation is shared across multiple accounts?;If multiple accounts are consuming a Capacity Reservation, each account gets charged for its own instance usage. Unused reserved capacity, if any, gets charged to the account that owns the Capacity Reservation. If there is a consolidated billing arrangement among the accounts that share a Capacity Reservation, the primary account gets billed for instance usage across all the linked accounts. /ec2/faqs/;Can I prioritize access to a Capacity Reservation among the AWS accounts that have shared access?;No. Instance slots in a Capacity Reservation are available on a first-come, first-served basis to any account that has shared access. /ec2/faqs/;How can I communicate the Availability Zone (AZ) of a CR to another account, given AZ name mappings could be different across AWS accounts?;You can now use the Availability Zone ID (AZ ID) instead of the AZ name. The Availability Zone ID is a static reference and provides a consistent way of identifying the location of a resource across all your accounts. This makes it easier for you to provision resources centrally in a single account and share them across multiple accounts. /ec2/faqs/;Can I stop sharing my Capacity Reservation once I have shared it?;Yes, you can stop sharing a reservation after you have shared it. When you stop sharing a CR with specific accounts or stop sharing entirely, other account(s) lose the ability to launch new instances into the CR. Any capacity occupied by instances running from other accounts will be restored to the CR for your use (subject to availability). 
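The three sharing steps described above can be collapsed into a single AWS Resource Access Manager call. This is a minimal, hypothetical boto3 sketch; the Capacity Reservation ARN and target account ID are placeholders.

    import boto3

    ram = boto3.client("ram")

    # Create a resource share, attach the Capacity Reservation, and name the target account.
    share = ram.create_resource_share(
        name="shared-capacity-reservation",
        resourceArns=["arn:aws:ec2:us-east-1:111111111111:capacity-reservation/cr-0123456789abcdef0"],
        principals=["222222222222"],  # target AWS account ID
        allowExternalPrincipals=False,
    )
    print(share["resourceShare"]["resourceShareArn"])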
/ec2/faqs/;Where can I find more information about sharing Capacity Reservations?;Refer to the Linux or Windows technical documentation to learn about sharing Capacity Reservations. /ec2/faqs/;Can I get a discount for Capacity Reservation usage?;Yes. Savings Plans or Regional RI discounts apply to Capacity Reservations. AWS Billing automatically applies the discount when the attributes of a Capacity Reservation match the attributes of a Savings Plan or a Regional RI. When a Capacity Reservation is used by an instance, you are only charged for the instance (with Savings Plan or RI discounts applied). Discounts are preferentially applied to instance usage before covering unused Capacity Reservations. /ec2/faqs/;What is a Reserved Instance?;A Reserved Instance (RI) is an EC2 offering that provides you with a significant discount on EC2 usage when you commit to a one-year or three-year term. /ec2/faqs/;What are the differences between Standard RIs and Convertible RIs?;Standard RIs offer a significant discount on EC2 instance usage when you commit to a particular instance family. Convertible RIs offer you the option to change your instance configuration during the term, and still receive a discount on your EC2 usage. For more information on Convertible RIs, please click here. /ec2/faqs/;Do RIs provide a capacity reservation?;Yes, when a Standard or Convertible RI is scoped to a specific Availability Zone (AZ), instance capacity matching the exact RI configuration is reserved for your use (these are referred to as “zonal RIs”). Zonal RIs give you additional confidence in your ability to launch instances when you need them. /ec2/faqs/;When should I purchase a zonal RI?;If you want to take advantage of the capacity reservation, then you should buy an RI in a specific Availability Zone. /ec2/faqs/;When should I purchase a regional RI?;If you do not require the capacity reservation, then you should buy a regional RI. Regional RIs provide AZ and instance size flexibility, which offer broader applicability of the RI’s discounted rate. /ec2/faqs/;What are Availability Zone and instance size flexibility?;Availability Zone and instance size flexibility make it easier for you to take advantage of your regional RI’s discounted rate. Availability Zone flexibility applies your RI’s discounted rate to usage in any Availability Zone in a region, while instance size flexibility applies your RI’s discounted rate to usage of any size within an instance family. Let’s say you own an m5.2xlarge Linux/Unix regional RI with default tenancy in US East (N. Virginia). Then this RI’s discounted rate can automatically apply to two m5.xlarge instances in us-east-1a or four m5.large instances in us-east-1b. /ec2/faqs/;What types of RIs provide instance size flexibility?;Linux/Unix regional RIs with the default tenancy provide instance size flexibility. Instance size flexibility is not available on RIs of other platforms such as Windows, Windows with SQL Standard, Windows with SQL Server Enterprise, Windows with SQL Server Web, RHEL, and SLES, or on G4 instances. /ec2/faqs/;Do I need to take any action to take advantage of Availability Zone and instance size flexibility?;Regional RIs do not require any action to take advantage of Availability Zone and instance size flexibility. /ec2/faqs/;I own zonal RIs. How do I assign them to a region?;You can assign your Standard zonal RIs to a region by modifying the scope of the RI from a specific Availability Zone to a region from the EC2 console or by using the ModifyReservedInstances API. 
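The zonal-to-regional scope change in the last entry above maps to a single ModifyReservedInstances call. A minimal, hypothetical boto3 sketch follows; the reservation ID, instance type, platform, and count are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Re-scope a Standard zonal RI to the region so it gains AZ and size flexibility.
    ec2.modify_reserved_instances(
        ReservedInstancesIds=["00000000-1111-2222-3333-444444444444"],
        TargetConfigurations=[{
            "InstanceCount": 1,
            "InstanceType": "m5.2xlarge",
            "Platform": "EC2-VPC",
            "Scope": "Region",  # previously scoped to a specific Availability Zone
        }],
    )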
/ec2/faqs/;How do I purchase an RI?;To get started, you can purchase an RI from the EC2 console or by using the AWS CLI. Simply specify the instance type, platform, tenancy, term, payment option, and region or Availability Zone. /ec2/faqs/;Can I purchase an RI for a running instance?;Yes, AWS will automatically apply an RI’s discounted rate to any applicable instance usage from the time of purchase. Visit the Getting Started page to learn more. /ec2/faqs/;Can I control which instances are billed at the discounted rate?;No. AWS automatically optimizes which instances are charged at the discounted rate to ensure you always pay the lowest amount. For information about billing, and how it applies to RIs, see Billing Benefits and Payment Options. /ec2/faqs/;How does instance size flexibility work?;EC2 uses the scale shown below to compare different sizes within an instance family. In the case of instance size flexibility on RIs, this scale is used to apply the discounted rate of RIs to the normalized usage of the instance family. For example, if you have an m5.2xlarge RI that is scoped to a region, then your discounted rate could apply towards the usage of 1 m5.2xlarge or 2 m5.xlarge instances. /ec2/faqs/;Can I change my RI during its term?;Yes, you can modify the Availability Zone of the RI, change the scope of the RI from Availability Zone to region (and vice-versa), change the network platform from EC2-VPC to EC2-Classic (and vice versa) or modify instance sizes within the same instance family (on the Linux/Unix platform). /ec2/faqs/;Can I change the instance type of my RI during its term?;Yes. Convertible RIs offer you the option to change the instance type, operating system, tenancy or payment option of your RI during its term. Please refer to the Convertible RI section of the FAQ for additional information. /ec2/faqs/;What are the different payment options for RIs?;You can choose from three payment options when you purchase an RI. With the All Upfront option, you pay for the entire RI term with one upfront payment. With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the RI term. The No Upfront option does not require any upfront payment and provides a discounted hourly rate for the duration of the term. /ec2/faqs/;When are RIs activated?;"The billing discount and capacity reservation (if applicable) are activated once your payment has successfully been authorized. You can view the status (pending | active | retired) of your RIs on the ""Reserved Instances"" page of the Amazon EC2 Console." /ec2/faqs/;Do RIs apply to Spot instances or instances running on a Dedicated Host?;No, RIs do not apply to Spot instances or instances running on Dedicated Hosts. To lower the cost of using Dedicated Hosts, purchase Dedicated Host Reservations. /ec2/faqs/;How do RIs work with Consolidated Billing?;Our system automatically optimizes which instances are charged at the discounted rate to ensure that the consolidated accounts always pay the lowest amount. If you own RIs that apply to an Availability Zone, then only the account which owns the RI will receive the capacity reservation. However, the discount will automatically apply to usage in any account across your consolidated billing family. /ec2/faqs/;Can I get a discount on RI purchases?;Yes, EC2 provides tiered discounts on RI purchases. These discounts are determined based on the total list value (non-discounted price) for the active RIs you have per region. 
Your total list value is the sum of all expected payments for an RI within the term, including both the upfront and recurring hourly payments. The tier ranges and corresponding discounts are shown below. /ec2/faqs/;Can you help me understand how volume discounts are applied to my RI purchases?;Sure. Let's assume that you currently have $400,000 worth of active RIs in the US-east-1 region. Now, if you purchase RIs worth $150,000 in the same region, then the first $100,000 of this purchase would not receive a discount. However, the remaining $50,000 of this purchase would be discounted by 5 percent, so you would only be charged $47,500 for this portion of the purchase over the term based on your payment option. /ec2/faqs/;How do I calculate the list value of an RI?;Here is a sample list value calculation for three-year Partial Upfront Reserved Instances: /ec2/faqs/;How are volume discounts calculated if I use Consolidated Billing?;If you leverage Consolidated Billing, AWS will use the aggregate total list price of active RIs across all of your consolidated accounts to determine which volume discount tier to apply. Volume discount tiers are determined at the time of purchase, so you should activate Consolidated Billing prior to purchasing RIs to ensure that you benefit from the largest possible volume discount that your consolidated accounts are eligible to receive. /ec2/faqs/;Do Convertible RIs qualify for Volume Discounts?;No, but the value of each Convertible RI that you purchase contributes to your volume discount tier standing. /ec2/faqs/;How do I determine which volume discount tier applies to me?;To determine your current volume discount tier, please consult the Understanding Reserved Instance Discount Pricing Tiers portion of the Amazon EC2 User Guide. /ec2/faqs/;Will the cost of my RIs change, if my future volume qualifies me for other discount tiers?;No. Volume discounts are determined at the time of purchase, therefore the cost of your RIs will continue to remain the same as you qualify for other discount tiers. Any new purchase will be discounted according to your eligible volume discount tier at the time of purchase. /ec2/faqs/;Do I need to take any action at the time of purchase to receive volume discounts?;No, you will automatically receive volume discounts when you use the existing PurchaseReservedInstance API or EC2 Management Console interface to purchase RIs. If you purchase more than $10M worth of RIs contact us about receiving discounts beyond those that are automatically provided. /ec2/faqs/;What is the Reserved Instance Marketplace?;The Reserved Instance Marketplace is an online marketplace that provides AWS customers the flexibility to sell their Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances to other businesses and organizations. Customers can also browse the Reserved Instance Marketplace to find an even wider selection of Reserved Instance term lengths and pricing options sold by other AWS customers. /ec2/faqs/;When can I list a Reserved Instance on the Reserved Instance Marketplace?;You can list a Reserved Instance when: /ec2/faqs/;How will I register as a seller for the Reserved Instance Marketplace?;"To register for the Reserved Instance Marketplace, you can enter the registration workflow by selling a Reserved Instance from the EC2 Management Console or setting up your profile from the ""Account Settings"" page on the AWS portal. 
No matter the route, you will need to complete the following steps:" /ec2/faqs/;How will I know when I can start selling on the Reserved Instance Marketplace?;You can start selling on the Reserved Instance Marketplace after you have added a bank account through the registration pipeline. Once activation is complete, you will receive a confirmation email. However, it is important to note that you will not be able to receive disbursements until we are able to receive verification from your bank, which may take up to two weeks, depending on the bank you use. /ec2/faqs/;How do I list a Reserved Instance for sale?;To list a Reserved Instance, simply complete these steps in the Amazon EC2 Console: /ec2/faqs/;Which Reserved Instances can I list for sale?;You can list any Reserved Instances that have been active for at least 30 days, and for which we have received payment. Typically, this means that you can list your reservations once they are in the active state. It is important to note that if you are an invoice customer, your Reserved Instance can be in the active state prior to AWS receiving payment. In this case, your Reserved Instance will not be listed until we have received your payment. /ec2/faqs/;How are listed Reserved Instances displayed to buyers?;"Reserved Instances (both third-party and those offered by AWS) that have been listed on the Reserved Instance Marketplace can be viewed in the ""Reserved Instances"" section of the Amazon EC2 Console. You can also use the DescribeReservedInstancesListings API call." /ec2/faqs/;How much of my Reserved Instance term can I list?;You can sell a Reserved Instance for the term remaining, rounded down to the nearest month. For example, if you had 9 months and 13 days remaining, you will list it for sale as a 9-month-term Reserved Instance. /ec2/faqs/;Can I remove my Reserved Instance after I’ve listed it for sale?;Yes, you can remove your Reserved Instance listings at any point until a sale is pending (meaning a buyer has bought your Reserved Instance and confirmation of payment is pending). /ec2/faqs/;Which pricing dimensions can I set for the Reserved Instances I want to list?;Using the Reserved Instance Marketplace, you can set an upfront price you’d be willing to accept. You cannot set the hourly price (which will remain the same as was set on the original Reserved Instance), and you will not receive any funds collected from payments associated with the hourly prices. /ec2/faqs/;Can I still use my reservation while it is listed on the Reserved Instance Marketplace?;Yes, you will continue to receive the capacity and billing benefit of your reservation until it is sold. Once sold, any running instance that was being charged at the discounted rate will be charged at the On-Demand rate until and unless you purchase a new reservation, or terminate the instance. /ec2/faqs/;Can I resell a Reserved Instance that I purchased from the Reserved Instance Marketplace?;Yes, you can resell Reserved Instances purchased from the Reserved Instance Marketplace just like any other Reserved Instance. /ec2/faqs/;Are there any restrictions when selling Reserved Instances?;Yes, you must have a US bank account to sell Reserved Instances in the Reserved Instance Marketplace. Support for non-US bank accounts will be coming soon. Also, you may not sell Reserved Instances in the US GovCloud region. /ec2/faqs/;Can I sell Reserved Instances purchased from the public volume pricing tiers?;No, this capability is not yet available. 
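The listing and viewing flows above also have API equivalents. Below is a minimal, hypothetical boto3 sketch that lists an active RI at an upfront price and then queries Marketplace listings with the DescribeReservedInstancesListings call mentioned above; the RI ID, price, and remaining term are placeholders.

    import boto3
    import uuid

    ec2 = boto3.client("ec2")

    # List an active RI for sale; only the upfront price can be set, as noted above.
    ec2.create_reserved_instances_listing(
        ReservedInstancesId="00000000-1111-2222-3333-444444444444",
        InstanceCount=1,
        PriceSchedules=[{"CurrencyCode": "USD", "Price": 300.0, "Term": 9}],  # 9 months remaining
        ClientToken=str(uuid.uuid4()),
    )

    # Listings (yours and others') can then be inspected programmatically.
    print(ec2.describe_reserved_instances_listings())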
/ec2/faqs/;Is there a charge for selling Reserved Instances on the Reserved Instance Marketplace?;Yes, AWS charges a service fee of 12% of the total upfront price of each Reserved Instance you sell in the Reserved Instance Marketplace. /ec2/faqs/;Can AWS sell subsets of my listed Reserved Instances?;Yes, AWS may potentially sell a subset of the quantity of Reserved Instances that you have listed. For example, if you list 100 Reserved Instances, we may only have a buyer interested in purchasing 50 of them. We will sell those 50 instances and continue to list your remaining 50 Reserved Instances until and unless you decide not to list them any longer. /ec2/faqs/;How do buyers pay for Reserved Instances that they've purchased?;Payments for completed Reserved Instance sales are made via ACH wire transfers to a US bank account. /ec2/faqs/;When will I receive my money?;Once AWS has received funds from the customer that has bought your reservation, we will disburse funds via wire transfer to the bank account you specified when you registered for the Reserved Instance Marketplace. /ec2/faqs/;If I sell my Reserved Instance in the Reserved Instance Marketplace, will I get refunded for the Premium Support I was charged too?;No, you will not receive a pro-rated refund for the upfront portion of the AWS Premium Support Fee. /ec2/faqs/;Will I be notified about Reserved Instance Marketplace activities?;Yes, you will receive a single email once a day that details your Reserved Instance Marketplace activity whenever you create or cancel Reserved Instance listings, buyers purchase your listings, or AWS disburses funds to your bank account. /ec2/faqs/;What information is exchanged between the buyer and seller to help with the transaction tax calculation?;The buyer’s city, state, zip+4, and country information will be provided to the seller via a disbursement report. This information will enable sellers to calculate any necessary transaction taxes they need to remit to the government (e.g., sales tax, value-added tax, etc.). The legal entity name of the seller will also be provided on the purchase invoice. /ec2/faqs/;Are there any restrictions on the customers when purchasing third-party Reserved Instances?;Yes, you cannot purchase your own listed Reserved Instances, including those in any of your linked accounts (via Consolidated Billing). /ec2/faqs/;Do I have to pay for Premium Support when purchasing Reserved Instances from the Reserved Instance Marketplace?;Yes, if you are a Premium Support customer, you will be charged for Premium Support when you purchase a Reserved Instance through the Reserved Instance Marketplace. /ec2/faqs/;What is Savings Plans?;Savings Plans is a flexible pricing model that offers low prices on EC2, Lambda and Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. When you sign up for Savings Plans, you will be charged the discounted Savings Plans price for your usage up to your commitment. For example, if you commit to $10 of compute usage an hour, you will get the Savings Plans prices on that usage up to $10 and any usage beyond the commitment will be charged at On-Demand rates. /ec2/faqs/;What types of Savings Plans does AWS offer?;AWS offers two types of Savings Plans: /ec2/faqs/;How do Savings Plans compare to EC2 RIs?;Savings Plans offer significant savings over On Demand, just like EC2 RIs, but automatically reduce your bills on compute usage across any AWS region, even as usage changes. 
This provides you the flexibility to use the compute option that best suits your needs and continue to save money, all without having to perform exchanges or modifications. /ec2/faqs/;Do Savings Plans provide capacity reservations for EC2 instances?;No, Savings Plans do not provide a capacity reservation. You can, however, reserve capacity with On Demand Capacity Reservations and pay lower prices on them with Savings Plans. /ec2/faqs/;How do I get started with Savings Plans?;You can get started with Savings Plans from AWS Cost Explorer in the Management Console or by using the API/CLI. You can easily make a commitment to a Savings Plan by using the recommendations provided in AWS Cost Explorer, to realize the biggest savings. The recommended hourly commitment is based on your historical On Demand usage and your choice of plan type, term length, and payment option. Once you sign up for a Savings Plan, your compute usage will automatically be charged at the discounted Savings Plan prices and any usage beyond your commitment will be charged at regular On Demand rates. /ec2/faqs/;Can I continue to purchase EC2 RIs?;Yes. You can continue purchasing RIs to maintain compatibility with your existing cost management processes, and your RIs will work alongside Savings Plans to reduce your overall bill. However, as your RIs expire, we encourage you to sign up for Savings Plans as they offer the same savings as RIs, but with additional flexibility. /ec2/faqs/;What is a Spot Instance?;Spot Instances are spare EC2 capacity, available at up to 90% off On-Demand prices, that AWS can interrupt with a 2-minute notification. Spot uses the same underlying EC2 instances as On-Demand and Reserved Instances, and is best suited for fault-tolerant, flexible workloads. Spot Instances provide an additional option for obtaining compute capacity and can be used along with On-Demand and Reserved Instances. /ec2/faqs/;How is a Spot Instance different from an On-Demand instance or Reserved Instance?;While running, Spot Instances are exactly the same as On-Demand or Reserved instances. The main differences are that Spot Instances typically offer a significant discount off the On-Demand prices, your instances can be interrupted by Amazon EC2 for capacity requirements with a 2-minute notification, and Spot prices adjust gradually based on long-term supply and demand for spare EC2 capacity. /ec2/faqs/;How do I purchase and start up a Spot instance?;Spot instances can be launched using the same tools you use to launch instances today, including the AWS Management Console, Auto Scaling groups, Run Instances and Spot Fleet. In addition, many AWS services support launching Spot instances, such as EMR, ECS, Data Pipeline, CloudFormation and Batch. /ec2/faqs/;How many Spot Instances can I request?;You can request Spot Instances up to your Spot limit for each region. Note that customers new to AWS might start with a lower limit. To learn more about Spot Instance limits, please refer to the Amazon EC2 User Guide. /ec2/faqs/;What price will I pay for a Spot Instance?;You pay the Spot price that’s in effect at the beginning of each instance-hour for your running instance. If the Spot price changes after you launch the instance, the new price is charged against the instance usage for the subsequent hour. /ec2/faqs/;What is a Spot capacity pool?;A Spot capacity pool is a set of unused EC2 instances with the same instance type, operating system, Availability Zone, and network platform (EC2-Classic or EC2-VPC). 
Each spot capacity pool can have a different price based on supply and demand. /ec2/faqs/;What are the best practices for using Spot Instances?;We highly recommend using multiple Spot capacity pools to maximize the amount of Spot capacity available to you. EC2 provides built-in automation to find the most cost-effective capacity across multiple Spot capacity pools using EC2 Auto Scaling, EC2 Fleet or Spot Fleet. For more information, please see Spot Best Practices. /ec2/faqs/;How can I determine the status of my Spot request?;You can determine the status of your Spot request via the Spot Request Status code and message. You can access Spot Request Status information on the Spot Instances page of the EC2 console in the AWS Management Console, or via the API and CLI. For more information, please visit the Amazon EC2 Developer guide. /ec2/faqs/;Are Spot Instances available for all instance families and sizes and in all regions?;Spot Instances are available in all public AWS regions. Spot is available for nearly all EC2 instance families and sizes, including the newest compute-optimized instances, accelerated graphics, and FPGA instance types. A full list of instance types supported in each region is listed here. /ec2/faqs/;Which operating systems are available as Spot Instances?;Linux/UNIX, Windows Server and Red Hat Enterprise Linux (RHEL) are available. Windows Server with SQL Server is not currently available. /ec2/faqs/;Can I use a Spot Instance with a paid AMI for third-party software (such as IBM’s software packages)?;Not at this time. /ec2/faqs/;Can I stop my running Spot Instances?;Yes. You can tell whether a Spot Instance has been stopped by you or interrupted by looking at the Spot Request Status code. This is visible as the Spot Request Status on the Spot Requests page of the AWS Management Console or in the DescribeSpotInstanceRequests API response in the “status-code” field. If the Spot request status code is “instance-stopped-by-user”, it means that you have stopped your Spot instance. /ec2/faqs/;How will I be charged if my Spot instance is stopped or interrupted?;If your Spot instance is terminated or stopped by Amazon EC2 in the first instance hour, you will not be charged for that usage. However, if you stop or terminate the Spot instance yourself, you will be charged to the nearest second. If the Spot instance is terminated or stopped by Amazon EC2 in any subsequent hour, you will be charged for your usage to the nearest second. If you are running on Windows or Red Hat Enterprise Linux (RHEL) and you stop or terminate the Spot instance yourself, you will be charged for an entire hour. /ec2/faqs/;When would my Spot Instance get interrupted?;Over the last 3 months, 92% of Spot Instance interruptions were from a customer manually terminating the instance because the application had completed its work. /ec2/faqs/;What happens to my Spot instance when it gets interrupted?;You can choose to have your Spot instances terminated, stopped or hibernated upon interruption. Stop and hibernate options are available for persistent Spot requests and Spot Fleets with the “maintain” option enabled. By default, your instances are terminated. /ec2/faqs/;What is the difference between Stop and Hibernate interruption behaviors?;In the case of Hibernate, your instance gets hibernated and the RAM data is persisted. In the case of Stop, your instance gets shut down and RAM is cleared. 
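The status-code check described above maps directly to the DescribeSpotInstanceRequests API. Here is a minimal, hypothetical boto3 sketch; the Spot request ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # Inspect why a Spot Instance stopped or was interrupted via its request status.
    resp = ec2.describe_spot_instance_requests(
        SpotInstanceRequestIds=["sir-0123456789example"]
    )
    for req in resp["SpotInstanceRequests"]:
        status = req["Status"]
        # e.g. "instance-stopped-by-user" means you stopped the instance yourself.
        print(req["SpotInstanceRequestId"], status["Code"], status["Message"])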
/ec2/faqs/;What if my EBS root volume is not large enough to store memory state (RAM) for Hibernate?;You should have sufficient space available on your EBS root volume to write data from memory. If the EBS root volume does not have enough space, hibernation will fail and the instance will get shut down instead. Ensure that your EBS volume is large enough to persist memory data before choosing the hibernate option. /ec2/faqs/;What is the benefit if Spot hibernates my instance on interruption?;With hibernate, Spot instances will pause and resume around any interruptions so your workloads can pick up from exactly where they left off. You can use hibernation when your instance(s) need to retain instance state across shutdown-startup cycles, i.e. when your applications running on Spot depend on contextual, business, or session data stored in RAM. /ec2/faqs/;What do I need to do to enable hibernation for my Spot instances?;Refer to Spot Hibernation to learn about enabling hibernation for your Spot instances. /ec2/faqs/;Do I have to pay for hibernating my Spot instance?;There is no additional charge for hibernating your instance beyond the EBS storage costs and any other EC2 resources you may be using. You are not charged instance usage fees once your instance is hibernated. /ec2/faqs/;Can I resume a hibernated instance?;No, you will not be able to resume a hibernated instance directly. Hibernate-resume cycle is controlled by Amazon EC2. If an instance is hibernated by Spot, it will be resumed by Amazon EC2 when the capacity becomes available. /ec2/faqs/;Which instances and operating systems support hibernation?;Spot Hibernation is currently supported for Amazon Linux AMIs, Ubuntu and Microsoft Windows operating systems running on any instance type across C3, C4, C5, M4, M5, R3, R4 instances with memory (RAM) size less than 100 GiB. /ec2/faqs/;How am I charged if Spot price changes while my instance is running?;You will pay the price per instance-hour set at the beginning of each instance-hour for the entire hour, billed to the nearest second. /ec2/faqs/;Where can I see my usage history for Spot instances and see how much I was billed?;The AWS Management Console makes a detailed billing report available which shows Spot instance start and termination/stop times for all instances. Customers can check the billing report against historical Spot prices via the API to verify that the Spot price they were billed is correct. /ec2/faqs/;Are Spot blocks (Fixed Duration Spot instances) ever interrupted?;Spot blocks are designed not to be interrupted and will run continuously for the duration you select, independent of Spot market price. In rare situations, Spot blocks may be interrupted due to AWS capacity needs. In these cases, we will provide a two-minute warning before we terminate your instance (termination notice), and you will not be charged for the affected instance(s). /ec2/faqs/;What is a Spot fleet?;A Spot Fleet allows you to automatically request and manage multiple Spot instances that provide the lowest price per unit of capacity for your cluster or application, like a batch processing job, a Hadoop workflow, or an HPC grid computing job. You can include the instance types that your application can use. You define a target capacity based on your application needs (in units including instances, vCPUs, memory, storage, or network throughput) and update the target capacity after the fleet is launched. 
Spot fleets enable you to launch and maintain the target capacity, and to automatically request resources to replace any that are disrupted or manually terminated. Learn more about Spot fleets. /ec2/faqs/;Is there any additional charge for making Spot Fleet requests?;No, there is no additional charge for Spot Fleet requests. /ec2/faqs/;What limits apply to a Spot Fleet request?;Visit the Spot Fleet Limits section of the Amazon EC2 User Guide to learn about the limits that apply to your Spot Fleet request. /ec2/faqs/;What happens if my Spot Fleet request tries to launch Spot instances but exceeds my regional Spot request limit?;"If your Spot Fleet request exceeds your regional Spot instance request limit, individual Spot instance requests will fail with a ""Spot request limit exceeded request status"". Your Spot Fleet request’s history will show any Spot request limit errors that the Fleet request received. Visit the Monitoring Your Spot Fleet section of the Amazon EC2 User Guide to learn how to describe your Spot Fleet request's history." /ec2/faqs/;Are Spot fleet requests guaranteed to be fulfilled?;No. Spot fleet requests allow you to place multiple Spot Instance requests simultaneously, and are subject to the same availability and prices as a single Spot Instance request. For example, if no resources are available for the instance types listed in your Spot Fleet request, we may be unable to fulfill your request partially or in full. We recommend that you include all the possible instance types and Availability Zones that are suitable for your workloads in the Spot Fleet. /ec2/faqs/;Can I submit a multi-Availability Zone Spot Fleet request?;Yes, visit the Spot Fleet Examples section of the Amazon EC2 User Guide to learn how to submit a multi-Availability Zone Spot Fleet request. /ec2/faqs/;Can I submit a multi-region Spot Fleet request?;No, we do not support multi-region Fleet requests. /ec2/faqs/;How does Spot Fleet allocate resources across the various Spot Instance pools specified in the launch specifications?;The RequestSpotFleet API provides three allocation strategies: capacity-optimized, lowestPrice and diversified. The capacity-optimized allocation strategy attempts to provision Spot Instances from the most available Spot Instance pools by analyzing capacity metrics. This strategy is a good choice for workloads that have a higher cost of interruption such as big data and analytics, image and media rendering, machine learning, and high performance computing. /ec2/faqs/;Can I tag a Spot Fleet request?;You can request to launch Spot Instances with tags via Spot Fleet. The Fleet by itself cannot be tagged. /ec2/faqs/;How can I see which Spot fleet owns my Spot Instances?;You can identify the Spot Instances associated with your Spot Fleet by describing your fleet request. Fleet requests are available for 48 hours after all of their Spot Instances have been terminated. See the Amazon EC2 User Guide to learn how to describe your Spot Fleet request. /ec2/faqs/;Can I modify my Spot Fleet request?;Yes, you can modify the target capacity of your Spot Fleet request. You may need to cancel the request and submit a new one to change other request configuration parameters. /ec2/faqs/;Can I specify a different AMI for each instance type that I want to use?;Yes, simply specify the AMI you’d like to use in each launch specification you provide in your Spot Fleet request. 
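Tying together the RequestSpotFleet allocation strategies and the per-launch-specification AMIs mentioned above, here is a minimal, hypothetical boto3 sketch; the AMI IDs, instance types, and IAM fleet role ARN are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # A Spot Fleet request using the capacity-optimized strategy and a different AMI
    # in each launch specification. The target capacity can later be adjusted with
    # modify_spot_fleet_request.
    ec2.request_spot_fleet(
        SpotFleetRequestConfig={
            "IamFleetRole": "arn:aws:iam::111111111111:role/aws-ec2-spot-fleet-tagging-role",
            "AllocationStrategy": "capacityOptimized",
            "TargetCapacity": 4,
            "Type": "maintain",
            "LaunchSpecifications": [
                {"ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.large", "WeightedCapacity": 1.0},
                {"ImageId": "ami-0fedcba9876543210", "InstanceType": "m5.large", "WeightedCapacity": 1.0},
            ],
        },
    )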
/ec2/faqs/;Can I use Spot Fleet with Elastic Load Balancing, Auto Scaling, or Elastic MapReduce?;You can use Auto Scaling features with Spot Fleet such as target tracking, health checks, CloudWatch metrics, etc., and can attach instances to your Elastic Load Balancers (both Classic and Application Load Balancers). Elastic MapReduce has a feature named “Instance fleets” that provides capabilities similar to Spot Fleet. /ec2/faqs/;Does a Spot Fleet request terminate Spot Instances when they are no longer running in the lowest priced or capacity-optimized Spot pools and relaunch them?;No, Spot Fleet requests do not automatically terminate and relaunch instances while they are running. However, if you terminate a Spot Instance, Spot Fleet will replenish it with a new Spot Instance in the new lowest priced pool or capacity-optimized pool based on your allocation strategy. /ec2/faqs/;Can I use Stop or Hibernate interruption behaviors with Spot Fleet?;Yes, stop-start and hibernate-resume are supported with Spot Fleet when the “maintain” fleet option is enabled. /ec2/faqs/;How do I use this service?;The service provides an NTP endpoint at a link-local IP address (169.254.169.123) accessible from any instance running in a VPC. Instructions for configuring NTP clients are available for Linux and Windows. /ec2/faqs/;What are the key benefits of using this service?;A consistent and accurate reference time source is crucial for many applications and services. The Amazon Time Sync Service provides a time reference that can be securely accessed from an instance without requiring VPC configuration changes and updates. It is built on Amazon’s proven network infrastructure and uses redundant reference time sources to ensure high accuracy and availability. /ec2/faqs/;Which instance types are supported for this service?;All instances running in a VPC can access the service. /ec2/faqs/;How isolated are Availability Zones from one another?;Each Availability Zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failure like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone. /ec2/faqs/;Is Amazon EC2 running in more than one region?;Yes. Please refer to Regional Products and Services for more details of our product and service availability by region. /ec2/faqs/;How can I make sure that I am in the same Availability Zone as another developer?;We do not currently support the ability to coordinate launches into the same Availability Zone across AWS developer accounts. One Availability Zone name (for example, us-east-1a) in two AWS customer accounts may relate to different physical Availability Zones. /ec2/faqs/;If I transfer data between Availability Zones using public IP addresses, will I be charged twice for Regional Data Transfer (once because it’s across zones, and a second time because I’m using public IP addresses)?;No. Regional Data Transfer rates apply if at least one of the following is true, but you are only charged once for a given instance even if both are true: /ec2/faqs/;What is a Cluster Compute Instance?;Cluster Compute Instances combine high compute resources with high performance networking for High Performance Compute (HPC) applications and other demanding network-bound applications. 
Cluster Compute Instances provide similar functionality to other Amazon EC2 instances but have been specifically engineered to provide high performance networking. /ec2/faqs/;What kind of network performance can I expect when I launch instances in a cluster placement group?;The bandwidth an EC2 instance can utilize in a cluster placement group depends on the instance type and its networking performance specification. Inter-instance traffic within the same region can utilize 5 Gbps for single-flow and up to 25 Gbps for multi-flow traffic. When launched in a placement group, select EC2 instances can utilize up to 10 Gbps for single-flow traffic. /ec2/faqs/;What is a Cluster GPU Instance?;Cluster GPU Instances provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefiting from highly parallelized processing that can be accelerated by GPUs using the CUDA and OpenCL programming models. Common applications include modeling and simulation, rendering and media processing. /ec2/faqs/;What is a High Memory Cluster Instance?;High Memory Cluster Instances provide customers with large amounts of memory and CPU capabilities per instance in addition to high network capabilities. These instance types are ideal for memory-intensive workloads including in-memory analytics systems, graph analysis and many science and engineering applications. /ec2/faqs/;Does use of Cluster Compute and Cluster GPU Instances differ from other Amazon EC2 instance types?;The use of Cluster Compute and Cluster GPU Instances differs from that of other Amazon EC2 instance types in two ways. /ec2/faqs/;What is a cluster placement group?;A cluster placement group is a logical entity that enables creating a cluster of instances by launching instances as part of a group. The cluster of instances then provides low latency connectivity between instances in the group. Cluster placement groups are created through the Amazon EC2 API or AWS Management Console. /ec2/faqs/;Are all features of Amazon EC2 available for Cluster Compute and Cluster GPU Instances?;Currently, Amazon DevPay is not available for Cluster Compute or Cluster GPU Instances. /ec2/faqs/;Is there a limit on the number of Cluster Compute or Cluster GPU Instances I can use and/or the size of cluster I can create by launching Cluster Compute or Cluster GPU Instances into a cluster placement group?;There is no limit specific to Cluster Compute Instances. For Cluster GPU Instances, you can launch 2 Instances on your own. If you need more capacity, please complete the Amazon EC2 instance request form (selecting the appropriate primary instance type). /ec2/faqs/;Are there any ways to optimize the likelihood that I receive the full number of instances I request for my cluster via a cluster placement group?;We recommend that you launch the minimum number of instances required to participate in a cluster in a single launch. For very large clusters, you should launch multiple placement groups, e.g. two placement groups of 128 instances, and combine them to create a larger, 256-instance cluster. /ec2/faqs/;Can Cluster GPU and Cluster Compute Instances be launched into a single cluster placement group?;While it may be possible to launch different cluster instance types into a single placement group, at this time we only support homogeneous placement groups. /ec2/faqs/;If an instance in a cluster placement group is stopped then started again, will it maintain its presence in the cluster placement group?;Yes. 
A stopped instance will be started as part of the cluster placement group it was in when it stopped. If capacity is not available for it to start within its cluster placement group, the start will fail. /ec2/faqs/;What CPU options are available on EC2 instances?;EC2 instances offer a variety of CPU options to help customers balance performance and cost requirements. Depending on the instance type, EC2 offers a choice in CPU including AWS Graviton/Graviton2 processors (Arm), AMD processors (x86), and Intel processors (x86). /ec2/faqs/;What kind of hardware will my application stack run on?;Visit Amazon EC2 Instance Type for a list of EC2 instances available by region. /ec2/faqs/;How does EC2 perform maintenance?;AWS regularly performs routine hardware, software, power, and network maintenance with minimal disruption across all EC2 instance types. This is achieved by a combination of technologies and methods across the entire AWS Global infrastructure, such as live update and live migration as well as redundant and concurrently maintainable systems. Non-intrusive maintenance technologies such as live update and live migration do not require instances to be stopped or rebooted. Customers are not required to take any action prior to, during or after live migration or live update. These technologies help improve application uptime and reduce your operational effort. Amazon EC2 uses live update to deploy software to servers quickly with minimal impact to customer instances. Live update ensures that customers’ workloads run on servers with software that is up-to-date with security patches, new instance features and performance improvements. Amazon EC2 uses live migration when running instances need to be moved from one server to another for hardware maintenance or to optimize placement of instances or to dynamically manage CPU resources. Amazon EC2 has been expanding the scope and coverage of non-intrusive maintenance technologies over the years so that scheduled maintenance events are a fallback option rather than the primary means of enabling routine maintenance. /ec2/faqs/;How do I select the right instance type?;"Amazon EC2 instances are grouped into 5 families: General Purpose, Compute Optimized, Memory Optimized, Storage Optimized and Accelerated Computing instances. General Purpose Instances have memory to CPU ratios suitable for most general purpose applications and come with fixed performance or burstable performance; Compute Optimized instances have proportionally more CPU resources than memory (RAM) and are well suited for scale out compute-intensive applications and High Performance Computing (HPC) workloads; Memory Optimized Instances offer larger memory sizes for memory-intensive applications, including database and memory caching applications; Accelerated Computing instances use hardware accelerators, or co-processors, to perform functions such as floating point number calculations, graphics processing, or data pattern matching, more efficiently than is possible in software running on CPUs; Storage Optimized Instances provide low latency, I/O capacity using SSD-based local instance storage for I/O-intensive applications, as well as dense HDD-storage instances, which provide local high storage density and sequential I/O performance for data warehousing, Hadoop and other data-intensive applications. When choosing instance types, you should consider the characteristics of your application with regards to resource utilization (i.e. 
CPU, Memory, Storage) and select the optimal instance family and instance size." /ec2/faqs/;What is an “EC2 Compute Unit” and why did you introduce it?;Transitioning to a utility computing model fundamentally changes how developers have been trained to think about CPU resources. Instead of purchasing or leasing a particular processor to use for several months or years, you are renting capacity by the hour. Because Amazon EC2 is built on commodity hardware, over time there may be several different types of physical hardware underlying EC2 instances. Our goal is to provide a consistent amount of CPU capacity no matter what the actual underlying hardware. /ec2/faqs/;How does EC2 ensure consistent performance of instance types over time?;"AWS conducts yearly performance benchmarking of Linux and Windows compute performance on EC2 instance types. Benchmarking results, a test suite that customers can use to conduct independent testing, and guidance on expected performance variance are available under NDA for M, C, R, T and z1d instances; please contact your sales representative to request them." /ec2/faqs/;What is the regional availability of Amazon EC2 instance types?;For a list of all instances and regional availability, visit Amazon EC2 Pricing. /ec2/faqs/;How much compute power do Micro instances provide?;Micro instances provide a small amount of consistent CPU resources and allow you to burst CPU capacity up to 2 ECUs when additional cycles are available. They are well suited for lower throughput applications and web sites that consume significant compute cycles periodically but very little CPU at other times for background processes, daemons, etc. Learn more about using this instance type. /ec2/faqs/;How does a Micro instance compare in compute power to a Standard Small instance?;At steady state, Micro instances receive a fraction of the compute resources that Small instances do. Therefore, if your application has compute-intensive or steady-state needs, we recommend using a Small instance (or larger, depending on your needs). However, Micro instances can periodically burst up to 2 ECUs (for short periods of time). This is double the number of ECUs available from a Standard Small instance. Therefore, if you have a relatively low throughput application or web site with an occasional need to consume significant compute cycles, we recommend using Micro instances. /ec2/faqs/;How can I tell if an application needs more CPU resources than a Micro instance is providing?;The CloudWatch metric for CPU utilization will report 100% utilization if the instance bursts so much that it exceeds its available CPU resources during that CloudWatch monitored minute. CloudWatch reporting 100% CPU utilization is your signal that you should consider scaling – manually or via Auto Scaling – up to a larger instance type or scale out to multiple Micro instances. /ec2/faqs/;Are all features of Amazon EC2 available for Micro instances?;Currently, Amazon DevPay is not available for Micro instances. /ec2/faqs/;What is the Nitro Hypervisor?;The launch of C5 instances introduced a new hypervisor for Amazon EC2, the Nitro Hypervisor. As a component of the Nitro System, the Nitro Hypervisor primarily provides CPU and memory isolation for EC2 instances. VPC networking and EBS storage resources are implemented by dedicated hardware components (Nitro Cards) that are part of all current generation EC2 instance families. 
The Nitro Hypervisor is built on core Linux Kernel-based Virtual Machine (KVM) technology, but does not include general-purpose operating system components. /ec2/faqs/;How does the Nitro Hypervisor benefit customers?;The Nitro Hypervisor provides consistent performance and increased compute and memory resources for EC2 virtualized instances by removing host system software components. It allows AWS to offer larger instance sizes (like c5.18xlarge) that provide practically all of the resources from the server to customers. Previously, C3 and C4 instances each eliminated software components by moving VPC and EBS functionality to hardware designed and built by AWS. This hardware enables the Nitro Hypervisor to be very small and uninvolved in data processing tasks for networking and storage. /ec2/faqs/;Will all EC2 instances use the Nitro Hypervisor?;Eventually all new instance types will use the Nitro Hypervisor, but in the near term, some new instance types will use Xen depending on the requirements of the platform. /ec2/faqs/;Will AWS continue to invest in its Xen-based hypervisor?;Yes. As AWS expands its global cloud infrastructure, EC2’s use of its Xen-based hypervisor will also continue to grow. Xen will remain a core component of EC2 instances for the foreseeable future. AWS has been a founding member of the Xen Project since its establishment as a Linux Foundation Collaborative Project and remains an active participant on its Advisory Board. As AWS expands its global cloud infrastructure, EC2’s Xen-based hypervisor also continues to grow. Therefore, EC2’s investment in Xen continues to grow, not shrink. /ec2/faqs/;How many EBS volumes and Elastic Network Interfaces (ENIs) can be attached to instances running on the Nitro Hypervisor?;Instances running on the Nitro Hypervisor support a maximum of 27 additional PCI devices for EBS volumes and VPC ENIs. Each EBS volume or VPC ENI uses a PCI device. For example, if you attach 3 additional network interfaces to an instance that uses the Nitro Hypervisor, you can attach up to 24 EBS volumes to that instance. /ec2/faqs/;Will the Nitro Hypervisor change the APIs used to interact with EC2 instances?;No, all the public-facing APIs for interacting with EC2 instances that run using the Nitro Hypervisor will remain the same. For example, the “hypervisor” field of the DescribeInstances response will continue to report “xen” for all EC2 instances, even those running under the Nitro Hypervisor. This field may be removed in a future revision of the EC2 API. /ec2/faqs/;Which AMIs are supported on instances that use the Nitro Hypervisor?;EBS-backed HVM AMIs with support for ENA networking and booting from NVMe storage can be used with instances that run under the Nitro Hypervisor. The latest Amazon Linux AMI and Windows AMIs provided by Amazon are supported, as are the latest AMIs of Ubuntu, Debian, Red Hat Enterprise Linux, SUSE Enterprise Linux, CentOS, and FreeBSD. /ec2/faqs/;Will I notice any difference between instances using the Xen hypervisor and those using the Nitro Hypervisor?;Yes. For example, instances running under the Nitro Hypervisor boot from EBS volumes using an NVMe interface. Instances running under Xen boot from an emulated IDE hard drive, and switch to the Xen paravirtualized block device drivers. /ec2/faqs/;How are instance reboot and termination EC2 API requests implemented by the Nitro Hypervisor?;The Nitro Hypervisor signals the operating system running in the instance that it should shut down cleanly by industry-standard ACPI methods. 
For Linux instances, this requires that acpid be installed and functioning correctly. If acpid is not functioning in the instance, termination events will be delayed by multiple minutes and will then execute as a hard reset or power off. /ec2/faqs/;How do EBS volumes behave when accessed by NVMe interfaces?;There are some important differences in how operating system NVMe drivers behave compared to Xen paravirtual (PV) block drivers. /ec2/faqs/;What is Optimize CPUs?;Optimize CPUs gives you greater control of your EC2 instances on two fronts. First, you can specify a custom number of vCPUs when launching new instances to save on vCPU-based licensing costs. Second, you can disable Intel Hyper-Threading Technology (Intel HT Technology) for workloads that perform well with single-threaded CPUs, such as certain high-performance computing (HPC) applications. /ec2/faqs/;Why should I use Optimize CPUs feature?;You should use Optimize CPUs if: /ec2/faqs/;How will the CPU optimized instances be priced?;CPU optimized instances will be priced the same as equivalent full-sized instances. /ec2/faqs/;How will my application performance change when using Optimize CPUs on EC2?;Your application performance change with Optimize CPUs will be largely dependent on the workloads you are running on EC2. We encourage you to benchmark your application performance with Optimize CPUs to arrive at the right number of vCPUs and optimal hyper-threading behavior for your application. /ec2/faqs/;Can I use Optimize CPUs on EC2 Bare Metal instance types (such as i3.metal)?;No. You can use Optimize CPUs with only virtualized EC2 instances. /ec2/faqs/;How can I get started with using Optimize CPUs for EC2 Instances?;For more information on how to get started with Optimize CPUs and supported instance types, please visit the Optimize CPUs documentation page here. /ec2/faqs/;How am I billed for my use of Amazon EC2 running IBM?;You pay only for what you use and there is no minimum fee. Pricing is per instance-hour consumed for each instance type. Partial instance-hours consumed are billed as full hours. Data transfer for Amazon EC2 running IBM is billed and tiered separately from Amazon EC2. There is no Data Transfer charge between two Amazon Web Services within the same region (i.e. between Amazon EC2 US West and another AWS service in the US West). Data transferred between AWS services in different regions will be charged as Internet Data Transfer on both sides of the transfer. /ec2/faqs/;Can I use Amazon DevPay with Amazon EC2 running IBM?;No, you cannot use DevPay to bundle products on top of Amazon EC2 running IBM at this time. /ec2/faqs/;Can I use my existing Windows Server license with EC2?;Yes you can. After you’ve imported your own Windows Server machine images using the ImportImage tool, you can launch instances from these machine images on EC2 Dedicated Hosts and effectively manage instances and report usage. Microsoft typically requires that you track usage of your licenses against physical resources such as sockets and cores and Dedicated Hosts helps you to do this. Visit the Dedicated Hosts detail page for more information on how to use your own Windows Server licenses on Amazon EC2 Dedicated Hosts. /ec2/faqs/;What software licenses can I bring to the Windows environment?;Specific software license terms vary from vendor to vendor. Therefore, we recommend that you check the licensing terms of your software vendor to determine if your existing licenses are authorized for use in Amazon EC2. 
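The Optimize CPUs entries above translate to CPU options specified at launch time. A minimal, hypothetical boto3 sketch follows; the AMI ID, instance type, and core count are placeholders, and this is one way to express the setting rather than a definitive recipe.

    import boto3

    ec2 = boto3.client("ec2")

    # Launch an instance with a custom vCPU count and Intel Hyper-Threading disabled
    # (one thread per core), as described in the Optimize CPUs entries above.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="r5.4xlarge",
        MinCount=1,
        MaxCount=1,
        CpuOptions={"CoreCount": 4, "ThreadsPerCore": 1},
    )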
/ec2/faqs/;What are Amazon EC2 Mac instances?;Amazon EC2 Mac instances allow customers to run on-demand macOS workloads in the cloud for the first time, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. With EC2 Mac instances, developers creating apps for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari can provision and access macOS environments within minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing. /ec2/faqs/;What workloads should you run on EC2 Mac instances?;Amazon EC2 Mac instances are designed to build, test, sign, and publish applications for Apple platforms such as iOS, iPadOS, watchOS, tvOS, macOS, and Safari. Customers such as Pinterest, Intuit, FlipBoard, Twitch, and Goldman Sachs have seen up to 75% better build performance, up to 80% lower build failure rates, and up to 5x the number of parallel builds compared to running macOS on premises. /ec2/faqs/;What are EC2 x86 Mac instances?;x86-based EC2 Mac instances are built on Apple Mac mini computers featuring Intel Core i7 processors and are powered by the AWS Nitro System. They offer customers a choice of macOS Mojave (10.14), macOS Catalina (10.15), macOS Big Sur (11), and macOS Monterey (12) as Amazon Machine Images (AMIs). x86-based EC2 Mac instances are available in 12 Regions: US East (Ohio, N. Virginia), US West (Oregon), Europe (Amsterdam, Frankfurt, Ireland, London), and Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo). Learn more and get started with x86-based EC2 Mac instances here. /ec2/faqs/;What are EC2 M1 Mac instances?;EC2 M1 Mac instances are built on Apple M1 Mac mini computers and are powered by the AWS Nitro System. They deliver up to 60 percent better price performance over x86-based EC2 Mac instances for iOS and macOS application build workloads. EC2 M1 Mac instances enable ARM64 macOS environments for the first time in AWS, and support macOS Big Sur (11) and macOS Monterey (12) as Amazon Machine Images (AMIs). EC2 M1 Mac instances are available in 4 Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore). Learn more and get started with EC2 M1 Mac instances here. /ec2/faqs/;What pricing models are available for EC2 Mac instances?;Amazon EC2 Mac instances are available as Dedicated Hosts through both On-Demand and Savings Plans pricing models. The Dedicated Host is the unit of billing for EC2 Mac instances. Billing is per second, with a 24-hour minimum allocation period for the Dedicated Host to comply with the Apple macOS Software License Agreement. At the end of the 24-hour minimum allocation period, the host can be released at any time with no further commitment. Both Compute and Instance Savings Plans are available for EC2 Mac instances and offer up to 44 percent off On-Demand pricing. Visit the Dedicated Host pricing page for more information. (Note: Please select “Dedicated Host” tenancy and “Linux” operating system to view details.) You can also access EC2 Mac instances pricing on the AWS Pricing Calculator for Dedicated Hosts. /ec2/faqs/;How do you release a Dedicated Host?;The minimum allocation period for an EC2 Mac instance Dedicated Host is 24 hours. After the allocation period has exceeded 24 hours, first stop or terminate the instance running on the host, then release the host using the aws ec2 release-hosts CLI command or the AWS Management Console. /ec2/faqs/;Can you share EC2 Mac Dedicated Hosts with other AWS accounts in your organization?;Yes. 
You can share EC2 Mac Dedicated Hosts with AWS accounts inside your AWS organization, an organizational unit inside your AWS organization, or your entire AWS organization via AWS Resource Access Manager. For more information, please refer to the AWS Resource Access Manager documentation. /ec2/faqs/;How many EC2 Mac instances can you run on an EC2 Mac Dedicated Host?;EC2 Mac instances leverage the full power of the underlying Mac mini hardware. You can run 1 EC2 Mac instance on each EC2 Mac Dedicated Host. /ec2/faqs/;Can you update the EFI NVRAM variables on an EC2 Mac instance?;Yes, you can update certain EFI NVRAM variables on an EC2 Mac instance that will persist across reboots. However, EFI NVRAM variables will be reset if the instance is stopped or terminated. Please see the EC2 Mac instances documentation for more information. /ec2/faqs/;Can you use FileVault to encrypt the Amazon Elastic Block Store (Amazon EBS) boot volume on EC2 Mac instances?;FileVault requires a login before booting into macOS and before remote access can be enabled. If FileVault is enabled, you will lose access to your data on the boot volume at instance reboot, stop, or termination. We strongly recommend you do not enable FileVault. Instead, we recommend using Amazon EBS encryption for both boot and data EBS volumes on EC2 Mac instances. /ec2/faqs/;Can you access the microphone input or audio output on an EC2 Mac instance?;There is no access to the microphone input on an EC2 Mac instance. The built-in Apple Remote Desktop VNC server does not support audio output. Third-party remote desktop software, such as Teradici CAS, supports remote audio on macOS. /ec2/faqs/;What macOS-based Amazon Machine Images (AMIs) are available for EC2 Mac instances?;EC2 Mac instances use physical Mac mini hardware to run macOS. Apple hardware only supports the macOS version shipped with the hardware (or later). x86-based EC2 Mac instances use the 2018 Intel Core i7 Mac mini, which means macOS Mojave (10.14.x) is as 'far back' as you can go, since the 2018 Mac mini shipped with Mojave. EC2 M1 Mac instances use the 2020 M1 Mac mini, which shipped with macOS Big Sur (11.x). To see the latest versions of macOS available as EC2 Mac AMIs, please visit the documentation. /ec2/faqs/;How can you run older versions of macOS on EC2 Mac instances?;EC2 Mac instances are bare metal instances and do not use the Nitro hypervisor. You can install and run a type-2 virtualization layer on x86-based EC2 Mac instances to get access to macOS High Sierra, Sierra, or older macOS versions. On EC2 M1 Mac instances, as macOS Big Sur is the first macOS version to support Apple Silicon, older macOS versions will not run even under virtualization. /ec2/faqs/;How do you install Xcode on an EC2 M1 Mac instance?;AWS provides base macOS AMIs without any prior Xcode IDE installation. You can install Xcode (and accept the EULA) just like you would on any other macOS system. You can install the latest Xcode IDE from the App Store, or earlier Xcode versions from the Apple Developer website. Once you have Xcode installed, we recommend creating a snapshot of your AMI for future use. /ec2/faqs/;What is the release cadence of macOS AMIs?;We make new macOS AMIs available on a best-effort basis. You can subscribe to SNS notifications for updates. We are targeting 30-60 days after a macOS minor version update and 90-120 days after a macOS major version update to release official macOS AMIs. 
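The Dedicated Host workflow described above (allocate a host, run a Mac instance on it for at least 24 hours, then release the host) can also be driven from the SDK. A minimal sketch in Python (boto3) follows; the Availability Zone is an illustrative assumption, and the release call will fail if the 24-hour minimum allocation period has not yet elapsed.

    # Minimal sketch: allocate and later release an EC2 Mac Dedicated Host with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allocate a Dedicated Host for x86-based Mac instances (24-hour minimum allocation).
    host_id = ec2.allocate_hosts(
        AvailabilityZone="us-east-1a",   # hypothetical AZ
        InstanceType="mac1.metal",
        Quantity=1,
    )["HostIds"][0]
    print("Allocated host:", host_id)

    # ... launch a Mac instance onto the host, use it, then stop or terminate it ...

    # After the 24-hour minimum allocation period, release the host.
    ec2.release_hosts(HostIds=[host_id])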
/ec2/faqs/;What agents and packages are included in EC2 macOS AMIs?;The following agents and packages are included by default in EC2 macOS AMIs: /ec2/faqs/;Can you update the agents and packages included in macOS AMIs?;There is a public GitHub repository of the Homebrew tap for all agents and packages added to the base macOS image. You can use Homebrew to install the latest versions of agents and packages on macOS instances. /ec2/faqs/;Can you apply OS and software updates to your Mac instances directly from Apple Update Servers?;Automatic macOS software updates are disabled on EC2 Mac instances. We recommend using our officially vended macOS AMIs to launch the version of macOS you need. On x86-based EC2 Mac instances, you can update the version of macOS via the Software Update preferences pane, or via the softwareupdate CLI command. We do not support macOS updates on EC2 M1 Mac instances at this time. On both EC2 Mac instances, you can install and update applications and any other user-space software. /ec2/faqs/;How many Amazon Elastic Block Store (Amazon EBS) volumes and Elastic Network Interfaces (ENIs) are supported by EC2 Mac instances?;x86-based EC2 Mac instances support 16 EBS volumes and 8 ENI attachments, and EC2 M1 Mac instances support up to 10 EBS volumes and 8 ENI attachments. /ec2/faqs/;Do EC2 Mac instances support EBS?;EC2 Mac instances are EBS optimized by default and offer up to 8 Gbps of dedicated EBS bandwidth to both encrypted and unencrypted EBS volumes. /ec2/faqs/;Do EC2 Mac instances support booting from local storage?;EC2 Mac instances can only boot from EBS-backed macOS AMIs. The internal SSD of the Mac mini is present in Disk Utility, but is not bootable. /ec2/faqs/;Do EC2 Mac instances support Amazon FSx?;Yes. EC2 Mac instances support FSx using the SMB protocol. You will need to enroll the EC2 Mac instance into a supported directory service (such as Active Directory or the AWS Directory Service) to enable FSx on EC2 Mac instances. For more information on FSx, visit the product page. /ec2/faqs/;Do EC2 Mac instances support Amazon Elastic File System (Amazon EFS)?;Yes, EC2 Mac instances support EFS over the NFSv4 protocol. For more information on EFS, visit the product page. /ec2/faqs/;What is Nitro System Support for Older Generation instances?;The AWS Nitro System will now provide its modern hardware and software components to previous generation EC2 instances, extending their length of service beyond the typical lifetime of the underlying hardware. With Nitro System support, customers can continue running their workloads and applications on the instance families they were built on. /ec2/faqs/;Which previous generation instances will receive Nitro System support and within what time frame?;Starting in 2022, the following previous generation instances will receive Nitro System support: M1, M2, and M3. Customers of these instances will receive a maintenance notification of migration to the Nitro System. We will add support for additional instance types in 2023. /ec2/faqs/;What actions do I need to take to migrate my existing previous generation instances?;Customers do not need to take any action to migrate active previous generation instances running on older generation hardware. For instances that are on older generation hardware, each customer account ID mapped to instance(s) will receive an email notification 2 weeks prior to the scheduled maintenance. 
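To locate the officially vended macOS AMIs mentioned above programmatically, you can query EC2 for Amazon-owned images. The sketch below, in Python (boto3), assumes an "amzn-ec2-macos-*" AMI naming pattern; that pattern is an assumption, so verify it against the EC2 console or documentation before relying on it.

    # Minimal sketch: find the newest Amazon-published macOS AMI with boto3.
    # The name filter is an assumption; confirm the naming pattern in the console/docs.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": ["amzn-ec2-macos-12*"]}],  # e.g., Monterey AMIs
    )["Images"]

    if images:
        latest = max(images, key=lambda img: img["CreationDate"])
        print(latest["ImageId"], latest["Name"])
    else:
        print("No matching macOS AMIs found; check the name filter and region.")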
/ec2/faqs/;What will happen to my instance during this maintenance event?;We will work in conjunction with the customer as a part of our standard AWS maintenance process. Several AWS teams have already migrated and are running previous generation instances on Nitro hardware. During maintenance, the instance will be rebooted, which can take up to 30 minutes depending on the instance size and attributes. For example: Instances with local disk take longer to migrate than instances without local disk. After the reboot, your instance retains its IP address, DNS name, and any data on local instance-store volumes. /ec2/faqs/;Do I need to rebuild/recertify workloads to run on previous generation instances migrated to AWS Nitro System?;No, customers don’t need to rebuild/recertify workloads on previous generation instances migrated to AWS Nitro System. /ec2/faqs/;Will there be any changes to my instance specifications once migrated to AWS Nitro System?;There will be no change to the instance specifications of previous generation instances when they are migrated to AWS Nitro System. /ec2/faqs/;Will all features and AMIs on my previous generation instances be supported as a part of this migration?;Yes, all existing features and AMIs supported on previous generation instances will be supported as we migrate these instances to AWS Nitro System. However, please note that classic networking, which has been announced for retirement, will not be supported on previous generation instances running on the Nitro System. We will migrate previous generation instances running classic networking only after customers have moved to VPC. /ec2/faqs/;Will there be changes to pricing and billing when previous generation instances are migrated to AWS Nitro System?;There will be no change to billing and pricing. We will continue to support the same pricing models we support today for the previous generation instances (On-Demand, 1-year/3-year Reserved Instances, Savings Plans, Spot). /eks/faqs/;What is Amazon Elastic Kubernetes Service (Amazon EKS)?;Amazon EKS is a managed service that makes it easy for you to run Kubernetes on AWS without installing and operating your own Kubernetes control plane or worker nodes. /eks/faqs/;What is Kubernetes?;Kubernetes is an open-source container orchestration system allowing you to deploy and manage containerized applications at scale. Kubernetes arranges containers into logical groupings for management and discoverability, then launches them onto clusters of Amazon Elastic Compute Cloud (Amazon EC2) instances. Using Kubernetes, you can run containerized applications including microservices, batch processing workers, and platforms as a service (PaaS) using the same toolset on premises and in the cloud. /eks/faqs/;Why should I use Amazon EKS?;Amazon EKS provisions and scales the Kubernetes control plane, including the application programming interface (API) servers and backend persistence layer, across multiple AWS Availability Zones (AZs) for high availability and fault tolerance. Amazon EKS automatically detects and replaces unhealthy control plane nodes and patches the control plane. You can run EKS using AWS Fargate, which provides serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. /eks/faqs/;How does Amazon EKS work?;Amazon EKS works by provisioning (starting) and managing the Kubernetes control plane and worker nodes for you. 
At a high level, Kubernetes consists of two major components: a cluster of 'worker nodes' running your containers, and the control plane managing when and where containers are started on your cluster while monitoring their status. /eks/faqs/;Which operating systems does Amazon EKS support?;Amazon EKS supports Kubernetes-compatible Linux x86, ARM, and Windows Server operating system distributions. Amazon EKS provides optimized AMIs for Amazon Linux 2 and Windows Server 2019. At this time, there is no Amazon EKS optimized AMI for AL2023. EKS-optimized AMIs for other Linux distributions, such as Ubuntu, are available from their respective vendors. /eks/faqs/;I have a feature request, who do I tell?;Please let us know what we can add or do better by opening a feature request on the AWS Container Services Public Roadmap. /eks/faqs/;Does Amazon EKS work with my existing Kubernetes applications and tools?;Amazon EKS runs the open-source Kubernetes software, so you can use all the existing plug-ins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modifications. /eks/faqs/;Does Amazon EKS work with AWS Fargate?;Yes. You can run Kubernetes applications as serverless containers using AWS Fargate and Amazon EKS. /eks/faqs/;What are Amazon EKS add-ons?;EKS add-ons let you enable and manage Kubernetes operational software, which provides capabilities like observability, scaling, networking, and AWS cloud resource integrations for your EKS clusters. At launch, EKS add-ons support controlling the launch and version of the AWS VPC CNI plugin through the EKS API. /eks/faqs/;Why should I use Amazon EKS add-ons?;Amazon EKS add-ons provide one-click installation and management of Kubernetes operational software. Go from cluster creation to running applications in a single command, while easily keeping the operational software required for your cluster up to date. This ensures your Kubernetes clusters are secure and stable and reduces the amount of work needed to start and manage production-ready Kubernetes clusters on AWS. /eks/faqs/;Which Kubernetes versions does Amazon EKS support?;See the Amazon EKS documentation for currently supported Kubernetes versions. Amazon EKS will continue to add support for additional Kubernetes versions in the future. /eks/faqs/;Can I update my Kubernetes cluster to a new version?;Yes. Amazon EKS performs managed, in-place cluster upgrades for both Kubernetes and Amazon EKS platform versions. This simplifies cluster operations and lets you take advantage of the latest Kubernetes features, as well as the updates to Amazon EKS configuration and security patches. /eks/faqs/;What is an EKS platform version?;Amazon EKS platform versions represent the capabilities of the cluster control plane, such as which Kubernetes API server flags are enabled, as well as the current Kubernetes patch version. Each Kubernetes minor version has one or more associated Amazon EKS platform versions. The platform versions for different Kubernetes minor versions are independent. /eks/faqs/;Why would I want manual control over Kubernetes version updates?;New versions of Kubernetes introduce significant changes to the Kubernetes API, which can change application behavior. 
Manual control over Kubernetes cluster versioning lets you test applications against new versions of Kubernetes before upgrading production clusters. Amazon EKS offers the ability to choose when you introduce changes to your EKS cluster. /eks/faqs/;How do I update my worker nodes?;AWS publishes EKS-optimized Amazon Machine Images (AMIs) that include the necessary worker node binaries (Docker and Kubelet). This AMI is updated regularly and includes the most up-to-date version of these components. You can update your EKS managed nodes to the latest versions of the EKS-optimized AMIs with a single command in the EKS console, API, or CLI. /eks/faqs/;Where is Amazon EKS available?;Please visit the AWS global infrastructure region table for the most up-to-date information on Amazon EKS Regional availability. /eks/faqs/;What is the Amazon EKS Service Level Agreement (SLA)?;The Amazon EKS SLA can be found here. /batch/faqs/;What is AWS Batch?;" Batch computing is the execution of a series of programs (""jobs"") on one or more computers without manual intervention. Input parameters are pre-defined through scripts, command-line arguments, control files, or job control language. A given batch job may depend on the completion of preceding jobs, or on the availability of certain inputs, making the sequencing and scheduling of multiple jobs important, and incompatible with interactive processing." /batch/faqs/;What are the benefits of batch computing?;It can shift the time of job processing to periods when greater or less expensive capacity is available. It avoids idling compute resources with frequent manual intervention and supervision. It increases efficiency by driving higher utilization of compute resources. It enables the prioritization of jobs, aligning resource allocation with business objectives. /batch/faqs/;When should I run my jobs in Fargate vs. EC2?;You should run your jobs on Fargate when you want AWS Batch to handle provisioning of compute completely abstracted from EC2 infrastructure. You should run your jobs on EC2 if you need access to particular instance configurations (particular processors, GPUs, or architecture) or for very large-scale workloads. /batch/faqs/;Can I spill over from a Fargate CE to a Fargate Spot CE, or vice versa?;Yes. You can set Fargate CEs to have a max vCPU, which caps the total vCPUs of all the jobs running concurrently in that CE. When your vCPU count hits the max vCPU in a CE, Batch will begin scheduling jobs on the next Fargate CE attached to the queue, in order, if there is one. This is useful if, for example, you want to set a Fargate CE to some minimum business requirement, then run the rest of your workload on Fargate Spot. /batch/faqs/;Why should I use AWS Batch?; AWS Batch is optimized for batch computing and applications that scale through the execution of multiple jobs in parallel. Deep learning, genomics analysis, financial risk models, Monte Carlo simulations, animation rendering, media transcoding, image processing, and engineering simulations are all excellent examples of batch computing applications. /batch/faqs/;What are the key features of AWS Batch?;AWS Batch manages compute environments and job queues, allowing you to easily run thousands of jobs of any scale using Amazon ECS, Amazon EKS, and AWS Fargate with the option of Spot or On-Demand resources. You simply define and submit your batch jobs to a queue. In response, AWS Batch chooses where to run the jobs, launching additional AWS capacity if needed. 
AWS Batch carefully monitors the progress of your jobs. When capacity is no longer needed, AWS Batch will remove it. AWS Batch also provides the ability to submit jobs that are part of a pipeline or workflow, enabling you to express any interdependencies that exist between them as you submit jobs. /batch/faqs/;What types of batch jobs does AWS Batch support?; An AWS Batch Compute Resource is an EC2 instance or AWS Fargate compute resource. /batch/faqs/;What is a Compute Resource?;" An AWS Batch Compute Environment is a collection of compute resources on which jobs are executed. AWS Batch supports two types of Compute Environments: Managed Compute Environments, which are provisioned and managed by AWS, and Unmanaged Compute Environments, which are managed by customers. Unmanaged Compute Environments provide a mechanism to leverage specialized resources such as Dedicated Hosts, larger storage configurations, and Amazon EFS." /batch/faqs/;What is a Compute Environment?; A Job Definition describes the job to be executed, parameters, environment variables, compute requirements, and other information that is used to optimize the execution of a job. Job Definitions are defined in advance of submitting a job and can be shared with others. /batch/faqs/;What is a Job Definition?; AWS Batch uses Amazon ECS to execute containerized jobs and therefore requires the ECS Agent to be installed on compute resources within your AWS Batch Compute Environments. The ECS Agent is pre-installed in Managed Compute Environments. /batch/faqs/;What is the Amazon ECS Agent and how is it used by AWS Batch?; AWS Batch Compute Environments can be comprised of EC2 Spot Instances. When creating a Managed Compute Environment, simply specify that you would like to use EC2 Spot Instances, provide a percentage of On-Demand pricing that you are willing to pay, and AWS Batch will take care of the rest. Unmanaged Compute Environments can also include Spot Instances that you launch, including those launched by EC2 Spot Fleet. /batch/faqs/;Can I use accelerators with AWS Batch?; By using accelerators with Batch, you can dynamically schedule and provision your jobs according to their accelerator needs, and Batch will ensure that the appropriate number of accelerators are reserved against your jobs. Batch will scale up your EC2 Accelerated Instances when you need them, and scale them down when you’re done, allowing you to focus on your applications. Batch has native integration with EC2 Spot, meaning your accelerated jobs can take advantage of up to 90% savings when using accelerated instances. /batch/faqs/;What accelerators can I use with AWS Batch?;Currently, you can use GPUs on P and G accelerated instances. /batch/faqs/;How do I submit jobs requiring accelerated instances to Batch?;You can specify the number and type of accelerators in the Job Definition. You specify the accelerator by describing the accelerator type (e.g., GPU, currently the only supported accelerator) and the number of that type your job requires. Your specified accelerator type must be present on one of the instance types specified in your Compute Environments. For example, if your job needs 2 GPUs, also make sure that you have specified a P instance in your Compute Environment. /batch/faqs/;Can accelerator variables in the job definition be overwritten at job submission?;If you submit a job to a CE that only allows Batch to launch accelerated instances, Batch will run the jobs on those instances, regardless of their accelerator needs. 
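Following the accelerator guidance above, a GPU requirement is declared in the job definition's resource requirements, and the job is then submitted to a queue whose compute environment includes P or G instances. The sketch below, in Python (boto3), uses hypothetical names, an illustrative container image, and example resource values.

    # Minimal sketch: register a Batch job definition that requests 2 GPUs, then submit a job.
    # Names, image, queue, and sizes are illustrative placeholders.
    import boto3

    batch = boto3.client("batch", region_name="us-east-1")

    batch.register_job_definition(
        jobDefinitionName="gpu-training-job",            # hypothetical name
        type="container",
        containerProperties={
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",  # hypothetical image
            "command": ["python", "train.py"],
            "resourceRequirements": [
                {"type": "VCPU", "value": "8"},
                {"type": "MEMORY", "value": "32768"},    # MiB
                {"type": "GPU", "value": "2"},           # number of GPUs this job needs
            ],
        },
    )

    batch.submit_job(
        jobName="gpu-training-run",
        jobQueue="my-gpu-queue",                         # hypothetical queue backed by P/G instances
        jobDefinition="gpu-training-job",
    )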
/batch/faqs/;Can accelerated instances be used for jobs that don't need the accelerators?;If you submit a job to a CE that only allows Batch to launch accelerated instances, Batch will run the jobs on those instances, regardless of their accelerator needs. /batch/faqs/;How do I get started?; There is no need to manually launch your own compute resources in order to get started. The AWS Batch web console will guide you through the process of creating your first Compute Environment and Job Queue so that you can submit your first job. Resources within your compute environment will scale up as additional jobs are ready to run and scale down as the number of runnable jobs decreases. /data-exchange/faqs/;Who are the primary users of AWS Data Exchange?;AWS Data Exchange makes it easy for AWS customers to securely exchange and use third-party data on AWS. Data analysts, product managers, portfolio managers, data scientists, quants, clinical trial technicians, and developers in nearly every industry would like access to more data to drive analytics, train machine learning (ML) models, and make data-driven decisions. But there is no one place to find data from multiple providers and no consistency in how providers deliver data, leaving them to deal with a mix of shipped physical media, FTP credentials, and bespoke API calls. Conversely, many organizations would like to make their data available for either research or commercial purposes, but it’s too hard and expensive to build and maintain data delivery, entitlement, and billing technology, which further depresses the supply of valuable data. /data-exchange/faqs/;What is AWS Data Exchange for APIs?;AWS Data Exchange for APIs is a feature that enables customers to find, subscribe to, and use third-party API products from providers on AWS Data Exchange. With AWS Data Exchange for APIs, customers can use AWS-native authentication and governance, explore consistent API documentation, and utilize supported AWS SDKs to make API calls. Now, by adding their APIs to the AWS Data Exchange catalog, data providers can reach millions of AWS customers that consume API-based data and more easily manage subscriber authentication, entitlement, and billing. /data-exchange/faqs/;How do I know that data I subscribe to is free of any malware?;Security and Compliance is a shared responsibility (https://aws.amazon.com/compliance/shared-responsibility-model/) between AWS and the customer. To promote a safe, secure, and trustworthy service for everyone, AWS Data Exchange scans data files hosted in S3 buckets it manages before making them available to subscribers. If AWS detects malware, AWS will remove the affected file(s). AWS Data Exchange does not guarantee that the data you consume as a subscriber is free of any potential malware. Customers are encouraged to conduct their own additional due diligence to ensure compliance with their internal security controls. You can explore many third-party anti-malware and security products in AWS Marketplace. /data-exchange/faqs/;How do I find API products in the AWS Data Exchange catalog?;You can find products containing APIs in the AWS Marketplace catalog, or you can navigate to the AWS Data Exchange catalog and select API under the Data available through filter. /data-exchange/faqs/;How do I subscribe to an API product?;After you’ve found the API that you want to subscribe to, select the product to learn more on the product detail page. 
Next, choose Continue to subscribe, review the subscription terms, and then choose the Subscribe button at the bottom of the page. Note: You may be asked to submit information to the data provider before you can request to subscribe to their product. For more information on subscribing and using an API product, see the Subscribing to a product containing APIs topic in the AWS Data Exchange User Guide. /data-exchange/faqs/;How do I make an API call?;First, ensure you have successfully subscribed to a product containing an API data set. Then, navigate to the product’s asset detail page to view API schemas and code snippets that will help you structure your API call. You can also utilize the AWS SDK to automatically sign your API requests with your AWS credentials. For more information about how to structure API calls to AWS Data Exchange products containing API data sets, see the Making an API call (console) topic in the AWS Data Exchange User Guide. /data-exchange/faqs/;Does AWS Data Exchange for APIs have a Service Level Agreement (SLA)?;AWS Data Exchange for APIs does not currently offer an SLA. /data-exchange/faqs/;Are there any AWS Data Exchange for APIs-specific SDKs that I should be aware of?;We have added AWS Data Exchange for APIs-specific operations to the following SDKs: /data-exchange/faqs/;How do I publish a product containing APIs?;As a provider, you first need to set up an AWS account and register as an AWS Marketplace seller. You can then publish an API product by following the steps detailed in the Publishing a product containing APIs topic in the AWS Data Exchange User Guide. /data-exchange/faqs/;Do providers have to offer a Service Level Agreement (SLA) to AWS Data Exchange when offering an API product?;AWS Data Exchange for APIs does not require providers to offer an uptime or availability SLA. Providers and subscribers can negotiate custom terms as part of a DSA. See Publishing Products for further information. /data-exchange/faqs/;What guidelines do I need to follow as a provider on AWS Data Exchange for APIs?;In addition to following guidelines under the Terms and Conditions for AWS Marketplace Sellers and the AWS Customer Agreement, providers of products containing APIs must respond to subscriber support inquiries within 1 business day, as set forth in the AWS Data Exchange User Guide. Not following the guidelines may result in products being removed from AWS Data Exchange. For more information, see the Publishing guidelines topic in the AWS Data Exchange User Guide. /data-exchange/faqs/;Can products contain APIs, Amazon S3 objects, and Amazon Redshift data shares?;Yes. As a data provider, you can publish products containing multiple data set types. /elasticbeanstalk/faqs/;What is AWS Elastic Beanstalk?;AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. /elasticbeanstalk/faqs/;Who should use AWS Elastic Beanstalk?;Those who want to deploy and manage their applications within minutes in the AWS Cloud. You don’t need experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications. 
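For the API call workflow described in the AWS Data Exchange for APIs entries above, the SDK can sign and proxy the request for you. A minimal sketch in Python (boto3) follows; the data set, revision, and asset IDs, the request path, and the query parameters are illustrative placeholders that would come from your actual subscription.

    # Minimal sketch: call a subscribed AWS Data Exchange for APIs product with boto3.
    # All IDs and the request path below are hypothetical.
    import boto3

    dx = boto3.client("dataexchange", region_name="us-east-1")

    response = dx.send_api_asset(
        DataSetId="example-data-set-id",
        RevisionId="example-revision-id",
        AssetId="example-asset-id",
        Method="GET",
        Path="/v1/quotes",                               # hypothetical provider API path
        QueryStringParameters={"symbol": "AMZN"},
    )
    print(response["Body"])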
/elasticbeanstalk/faqs/;Which languages and development stacks does AWS Elastic Beanstalk support?;AWS Elastic Beanstalk supports the following languages and development stacks: /elasticbeanstalk/faqs/;Will AWS Elastic Beanstalk support other languages?;Yes. AWS Elastic Beanstalk is designed so that it can be extended to support multiple development stacks and programming languages in the future. AWS is working with solution providers on the APIs and capabilities needed to create additional Elastic Beanstalk offerings. /elasticbeanstalk/faqs/;What can developers now do with AWS Elastic Beanstalk that they could not before?;AWS Elastic Beanstalk automates the details of capacity provisioning, load balancing, auto scaling, and application deployment, creating an environment that runs a version of your application. You can simply upload your deployable code (e.g., WAR file), and AWS Elastic Beanstalk does the rest. The AWS Toolkit for Visual Studio and the AWS Toolkit for Eclipse allow you to deploy your application to AWS Elastic Beanstalk and manage it without leaving your IDE. Once your application is running, Elastic Beanstalk automates management tasks such as monitoring, application version deployment, and basic health checks, and facilitates log file access. By using Elastic Beanstalk, developers can focus on developing their application and are freed from deployment-oriented tasks, such as provisioning servers, setting up load balancing, or managing scaling. /elasticbeanstalk/faqs/;How is AWS Elastic Beanstalk different from existing application containers or platform-as-a-service solutions?;Most existing application containers or platform-as-a-service solutions, while reducing the amount of programming required, significantly diminish developers’ flexibility and control. Developers are forced to live with all the decisions predetermined by the vendor, with little to no opportunity to take back control over various parts of their application’s infrastructure. However, with AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application. If developers decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using Elastic Beanstalk’s management capabilities. /elasticbeanstalk/faqs/;What elements of my application can I control when using AWS Elastic Beanstalk?;With AWS Elastic Beanstalk, you can: /elasticbeanstalk/faqs/;What are the Cloud resources powering my AWS Elastic Beanstalk application?;AWS Elastic Beanstalk uses proven AWS features and services, such as Amazon EC2, Amazon RDS, Elastic Load Balancing, Auto Scaling, Amazon S3, and Amazon SNS, to create an environment that runs your application. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2019 AMI. /elasticbeanstalk/faqs/;What kinds of applications are supported by AWS Elastic Beanstalk?;AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker, and is ideal for web applications. However, due to Elastic Beanstalk’s open architecture, non-web applications can also be deployed using Elastic Beanstalk. We expect to support additional application types and programming languages in the future. See Supported Platforms to learn more. /elasticbeanstalk/faqs/;Which operating systems does AWS Elastic Beanstalk use?;AWS Elastic Beanstalk runs on the Amazon Linux AMI and the Windows Server AMI. 
Both AMIs are supported and maintained by Amazon Web Services and are designed to provide a stable, secure, and high-performance execution environment for Amazon EC2 cloud computing. /elasticbeanstalk/faqs/;How do I sign up for AWS Elastic Beanstalk?;"To sign up for AWS Elastic Beanstalk, choose the Sign Up Now button on the Elastic Beanstalk detail page. You must have an Amazon Web Services account to access this service; if you do not already have one, you will be prompted to create one when you begin the Elastic Beanstalk process. After signing up, please refer to the AWS Elastic Beanstalk Getting Started Guide." /elasticbeanstalk/faqs/;Why am I asked to verify my phone number when signing up for AWS Elastic Beanstalk?;AWS Elastic Beanstalk registration requires you to have a valid phone number and email address on file with AWS in case we ever need to contact you. Verifying your phone number takes only a few minutes and involves receiving an automated phone call during the registration process and entering a PIN using the phone keypad. /elasticbeanstalk/faqs/;How do I get started after I have signed up?;The best way to get started with AWS Elastic Beanstalk is to work through the AWS Elastic Beanstalk Getting Started Guide, part of our technical documentation. Within a few minutes, you will be able to deploy and use a sample application or upload your own application. /elasticbeanstalk/faqs/;Is there a sample application that I can use to check out AWS Elastic Beanstalk?;Yes. AWS Elastic Beanstalk includes a sample application that you can use to test drive the offering and explore its functionality. /elasticbeanstalk/faqs/;Does AWS Elastic Beanstalk store anything in Amazon S3?;Yes. AWS Elastic Beanstalk stores your application files and, optionally, server log files in Amazon S3. If you are using the AWS Management Console, the AWS Toolkit for Visual Studio, or AWS Toolkit for Eclipse, an Amazon S3 bucket will be created in your account for you and the files you upload will be automatically copied from your local client to Amazon S3. Optionally, you may configure Elastic Beanstalk to copy your server log files every hour to Amazon S3. You do this by editing the environment configuration settings. /elasticbeanstalk/faqs/;Can I use Amazon S3 to store application data, like images?;Yes. You can use Amazon S3 for application storage. The easiest way to do this is by including the AWS SDK as part of your application’s deployable file. For example, you can include the AWS SDK for Java as part of your application's WAR file. /elasticbeanstalk/faqs/;What database solutions can I use with AWS Elastic Beanstalk?;AWS Elastic Beanstalk does not restrict you to any specific data persistence technology. You can choose to use Amazon Relational Database Service (Amazon RDS) or Amazon DynamoDB, or use Microsoft SQL Server, Oracle, or other relational databases running on Amazon EC2. /elasticbeanstalk/faqs/;How do I set up a database for use with AWS Elastic Beanstalk?;Elastic Beanstalk can automatically provision an Amazon RDS DB instance. The information about connectivity to the DB instance is exposed to your application through environment variables. To learn more about how to configure RDS DB instances for your environment, see the Elastic Beanstalk Developer Guide. /elasticbeanstalk/faqs/;Does this mean I need to modify the application code when moving from test to production?;Not with AWS Elastic Beanstalk. 
With Elastic Beanstalk, you can specify the connection information in the environment configuration. By extracting the connection string from the application code, you can easily configure different Elastic Beanstalk environments to use different databases. /elasticbeanstalk/faqs/;How do I make my application private?;By default, your application is available publicly at myapp.elasticbeanstalk.com for anyone to access. You can use Amazon VPC to provision a private, isolated section of your application in a virtual network that you define. This virtual network can be made private through specific security group rules, network ACLs, and custom route tables. You can also easily control whether other incoming traffic, such as SSH, is delivered to your application servers by changing the EC2 security group settings. /elasticbeanstalk/faqs/;Can I run my application inside a Virtual Private Cloud (VPC)?;Yes, you can run your applications in a VPC. For more details, see the AWS Elastic Beanstalk Developer Guide. /elasticbeanstalk/faqs/;Where can I find more information about security and running applications on AWS?;For more information about security on AWS, please refer to our Amazon Web Services: Overview of Security Processes document and visit our Security Center. /elasticbeanstalk/faqs/;Is it possible to use Identity & Access Management (IAM) with AWS Elastic Beanstalk?;Yes. IAM users with the appropriate permissions can interact with AWS Elastic Beanstalk. /elasticbeanstalk/faqs/;Why should I use IAM with AWS Elastic Beanstalk?;IAM allows you to manage users and groups in a centralized manner. You can control which IAM users have access to AWS Elastic Beanstalk, and limit permissions to read-only access for operators who should not be able to perform actions against Elastic Beanstalk resources. All user activity within your account will be aggregated under a single AWS bill. /elasticbeanstalk/faqs/;How do I create IAM users?;You can use the IAM console, IAM command line interface (CLI), or IAM API to provision IAM users. By default, IAM users have no access to AWS services until permissions are granted. /elasticbeanstalk/faqs/;How do I grant an IAM user access to AWS Elastic Beanstalk?;You can grant IAM users access to services by using policies. To simplify the process of granting access to AWS Elastic Beanstalk, you can use one of the policy templates in the IAM console to help you get started. Elastic Beanstalk offers two templates: a read-only access template and a full-access template. The read-only template grants read access to Elastic Beanstalk resources. The full-access template grants full access to all Elastic Beanstalk operations, as well as permissions to manage dependent resources, such as Elastic Load Balancing, Auto Scaling, and Amazon S3. You can also use the AWS Policy Generator to create custom policies. For more details, see the AWS Elastic Beanstalk Developer Guide. /elasticbeanstalk/faqs/;Can I restrict access to specific AWS Elastic Beanstalk resources?;Yes. You can allow or deny permissions to specific AWS Elastic Beanstalk resources, such as applications, application versions, and environments. /elasticbeanstalk/faqs/;Who gets billed for the AWS resources that an IAM user creates?;All resources created by IAM users under a root account are owned and billed to the root account. 
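As a concrete illustration of the read-only access discussed above, an AWS managed policy can be attached to an IAM user from the SDK. The sketch below uses Python (boto3); the user name is hypothetical, and the managed policy name should be verified against the current IAM policy list, since the Elastic Beanstalk managed policies have been renamed over time.

    # Minimal sketch: attach a read-only Elastic Beanstalk managed policy to an IAM user.
    # The user name is hypothetical; verify the exact managed policy name in IAM first.
    import boto3

    iam = boto3.client("iam")

    iam.attach_user_policy(
        UserName="eb-operator",                                          # hypothetical IAM user
        PolicyArn="arn:aws:iam::aws:policy/AWSElasticBeanstalkReadOnly", # confirm exact policy name
    )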
/elasticbeanstalk/faqs/;Who has access to an AWS Elastic Beanstalk environment launched by an IAM user?;The root account has full access to all AWS Elastic Beanstalk environments launched by any IAM user under that account. If you use the Elastic Beanstalk template to grant read-only access to an IAM user, that user will be able to view all applications, application versions, environments, and any associated resources in that account. If you use the Elastic Beanstalk template to grant full access to an IAM user, that user will be able to create, modify, and terminate any Elastic Beanstalk resources under that account. /elasticbeanstalk/faqs/;Can an IAM user access the AWS Elastic Beanstalk console?;Yes. An IAM user can access the AWS Elastic Beanstalk console using their username and password. /elasticbeanstalk/faqs/;Can an IAM user call the AWS Elastic Beanstalk API?;Yes. An IAM user can use their access key and secret key to perform operations using the Elastic Beanstalk API. /elasticbeanstalk/faqs/;Can an IAM user use the AWS Elastic Beanstalk command line interface?;Yes. An IAM user can use their access key and secret key to perform operations using the AWS Elastic Beanstalk command line interface (CLI). /elasticbeanstalk/faqs/;How can I keep the underlying platform of the environment running my application automatically up-to-date?;You can opt in to having your AWS Elastic Beanstalk environments automatically updated to the latest version of the underlying platform running your application during a specified maintenance window. Elastic Beanstalk regularly releases new versions of supported platforms (Java, PHP, Ruby, Node.js, Python, .NET, Go, and Docker) with operating system, web and application server, and language and framework updates. /elasticbeanstalk/faqs/;How can I get started with managed platform updates?;To let Elastic Beanstalk automatically manage your platform updates, you must enable managed platform updates in the Configuration tab of the Elastic Beanstalk console or use the EB CLI or API. After you have enabled the feature, you can configure which types of updates to allow and when updates can occur. /elasticbeanstalk/faqs/;What kinds of platform version updates will managed platform updates apply?;AWS Elastic Beanstalk can automatically perform platform updates for new patch and minor platform versions. Elastic Beanstalk will not automatically perform major platform version updates (e.g., Java 7 Tomcat 7 to Java 8 Tomcat 8) because they include backward-incompatible changes and require additional testing. In these cases, you must manually initiate the update. /elasticbeanstalk/faqs/;How does AWS Elastic Beanstalk distinguish between “major,” “minor,” and “patch” version releases?;AWS Elastic Beanstalk platforms are versioned using this pattern: MAJOR.MINOR.PATCH (e.g., 2.0.0). Each portion is incremented as follows: /elasticbeanstalk/faqs/;When and how can I perform major version updates?;You can perform major version updates at any time using the AWS Elastic Beanstalk management console, API, or CLI. You have the following options to perform a major version update: /elasticbeanstalk/faqs/;How does Elastic Beanstalk apply managed platform updates?;The updates are applied using an immutable deployment mechanism that ensures that no changes are made to the existing environment until a parallel fleet of Amazon EC2 instances, with the updates installed, is ready to be swapped with the existing instances, which are then terminated. 
In addition, if the Elastic Beanstalk health system detects any issues during the update, traffic is redirected to the existing fleet of instances, ensuring minimal impact to end users of your application. /elasticbeanstalk/faqs/;Will my application be available during the maintenance windows?;Since managed platform updates use an immutable deployment mechanism to perform the updates, your application will be available during the maintenance window and consumers of your application will not be impacted by the update. /elasticbeanstalk/faqs/;What does it cost to use managed platform updates?;There is no additional charge for the managed platform updates feature. You simply pay for the additional EC2 instances necessary to perform the update for the duration of the update. /elasticbeanstalk/faqs/;What is a maintenance window?;A maintenance window is a weekly two-hour-long time slot during which AWS Elastic Beanstalk will initiate platform updates if managed platform updates are enabled and a new version of the platform is available. For example, if you select a maintenance window that begins every Sunday at 2 AM, AWS Elastic Beanstalk will initiate the platform update sometime between 2 and 4 AM every Sunday. It is important to note that, depending on the configuration of your applications, updates could complete outside of the maintenance window. /elasticbeanstalk/faqs/;How will I be notified of the availability of new platform versions?;You will be notified about the availability of new platform versions through the AWS Management Console, forum announcements, and release notes. /elasticbeanstalk/faqs/;Where can I find details of changes between platform versions?;Details on changes between platform versions can be found on the AWS Elastic Beanstalk Release Notes page. /elasticbeanstalk/faqs/;What operations can I perform on the environment while a managed update is in progress?;The only action available to you while a managed platform update is in progress is 'abort'. This will allow you to stop the update immediately and roll back to the previous version. /elasticbeanstalk/faqs/;Which platform version will my environment be updated to if there are multiple new versions released in between maintenance windows?;Your environment will always be updated to the latest version available based on the level (minor plus patch or patch only) you have selected. /elasticbeanstalk/faqs/;Where can I find details of all the managed platform updates that have been performed on my environment?;Details for every managed platform update are available on the events page and are tagged with an event type of “MAINTENANCE.” /elasticbeanstalk/faqs/;How often are platform version updates released?;The number of version releases in a given year varies based on the frequency and content of releases and patches from the language/framework’s vendor or core team, and the outcome of a thorough vetting of these releases and patches by our platform engineering team. /elasticbeanstalk/faqs/;How do I deploy a new workload with the Graviton processor from the Elastic Beanstalk console?;To deploy your application with arm64-based processors on the Elastic Beanstalk console, you can select the processor architecture and instance type from the Capacity tab in the Configure more options settings. 
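The managed platform update behavior described above (enable the feature, pick a weekly maintenance window, and choose the update level) maps to environment option settings. Below is a minimal sketch in Python (boto3); the environment name and the maintenance window are illustrative assumptions.

    # Minimal sketch: enable managed platform updates on an Elastic Beanstalk environment.
    # The environment name and maintenance window below are placeholders.
    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    eb.update_environment(
        EnvironmentName="my-app-prod",  # hypothetical environment
        OptionSettings=[
            {
                "Namespace": "aws:elasticbeanstalk:managedactions",
                "OptionName": "ManagedActionsEnabled",
                "Value": "true",
            },
            {
                "Namespace": "aws:elasticbeanstalk:managedactions",
                "OptionName": "PreferredStartTime",
                "Value": "Sun:02:00",   # weekly two-hour window starting Sunday 2 AM UTC
            },
            {
                "Namespace": "aws:elasticbeanstalk:managedactions:platformupdate",
                "OptionName": "UpdateLevel",
                "Value": "minor",       # minor plus patch updates; use "patch" for patch only
            },
        ],
    )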
/elasticbeanstalk/faqs/;How do I deploy a new workload with the Graviton processor from the AWS CLI, Elastic Beanstalk CLI, or infrastructure as code services?;To deploy your application using the Elastic Beanstalk CLI, AWS CLI, CFN, or AWS CDK, refer to the Elastic Beanstalk Developer Guide. /elasticbeanstalk/faqs/;Do I need to recompile my workload before migrating to Graviton?;If your workload is on an interpreted programming language such as Node.js, Python, Tomcat, PHP, or Ruby, you do not need to recompile your workload to use Graviton. If you are using Go or .NET Core for your workload, you need to update the build command for the arm64 instance type. You also need to recompile binary dependencies or use an arm64-compatible release of binary dependencies. If you are using Docker, your Docker image must be multi-architecture and support deploying to both x86 and arm64. /elasticbeanstalk/faqs/;Which platform branches are supported by Graviton on Elastic Beanstalk?;Elastic Beanstalk supports Graviton on 64-bit Amazon Linux 2 for a variety of platforms and branches. See the documentation for a full list. /elasticbeanstalk/faqs/;What are the use cases where I can use the Graviton processor?;"You can easily transition your workload to Graviton and take advantage of performance and cost benefits in the following use cases: Linux-based workloads built primarily on open-source technologies; containerized and microservices-based applications such as Docker and Multi-container Docker; applications written in portable programming languages such as Java, Python, .NET Core, Node.js, and PHP; compiled C/C++, Rust, or Go applications; .NET Core (v3.1+) workloads running on Linux; multi-threaded workloads; non-uniform memory access (NUMA) sensitive workloads; and arm64-native software development and testing." /elasticbeanstalk/faqs/;How much does AWS Elastic Beanstalk cost?;There is no additional charge for AWS Elastic Beanstalk; you pay only for the AWS resources actually used to store and run your application. /elasticbeanstalk/faqs/;How much do the AWS resources powering my application on AWS Elastic Beanstalk cost?;You pay only for what you use, and there is no minimum fee for the use of any AWS resources. For Amazon EC2 pricing information, please visit the pricing section on the EC2 detail page. For Amazon S3 pricing information, please visit the pricing section on the S3 detail page. You can use the AWS simple calculator to estimate your bill for different application sizes. /elasticbeanstalk/faqs/;How do I check how many AWS resources have been used by my application and access my bill?;You can view your charges for the current billing period at any time on the Amazon Web Services website by logging into your Amazon Web Services account and choosing Account Activity under Your Web Services Account. /elasticbeanstalk/faqs/;Does AWS Support cover AWS Elastic Beanstalk?;Yes. AWS Support covers issues related to your use of AWS Elastic Beanstalk. For further details and pricing, see the AWS Support page. /elasticbeanstalk/faqs/;What other support options are available?;You can tap into the breadth of existing AWS community knowledge to help you with your development through the AWS Elastic Beanstalk discussion forum. /fargate/faqs/;What is AWS Fargate?;AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). AWS Fargate makes it easy to focus on building your applications. 
Fargate eliminates the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. /fargate/faqs/;Why should I use AWS Fargate?;AWS Fargate enables you to focus on your applications. You define your application content, networking, storage, and scaling requirements. There is no provisioning, patching, cluster capacity management, or infrastructure management required. /fargate/faqs/;What use cases does AWS Fargate support?;AWS Fargate supports all of the common container use cases including microservices architecture applications, batch processing, machine learning applications, and migrating on-premises applications to the cloud. /fargate/faqs/;How does AWS Fargate work with Amazon ECS and Amazon EKS?;Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers, and Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service. Both ECS and EKS use containers provisioned by Fargate to automatically scale, load balance, and optimize container availability through managed scheduling, providing an easier way to build and operate containerized applications. /fargate/faqs/;Can I run my Arm-based applications on AWS Fargate?;Yes. AWS Fargate allows you to run your Arm-based applications by using Arm-compatible container images or multi-architecture container images in Amazon Elastic Container Registry (Amazon ECR). You can simply specify the CPU Architecture as ARM64 in your Amazon ECS Task Definition to target AWS Fargate powered by Arm-based AWS Graviton2 processors. /fargate/faqs/;Why should I use AWS Fargate powered by Graviton2 processors?;AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores to deliver the best price performance for your cloud workloads. AWS Fargate powered by AWS Graviton2 processors delivers up to 40% improved price/performance at 20% lower cost over comparable Intel x86-based Fargate for a variety of workloads such as application servers, web services, high-performance computing, and media processing. You get the same serverless benefits of AWS Fargate while optimizing performance and cost for running your containerized workloads. /fargate/faqs/;Can I run my Amazon ECS Windows containers on AWS Fargate?;Yes. AWS Fargate offers a serverless approach for running your Windows containers. It removes the need to provision and manage servers and lets you specify and pay for resources per application. Fargate provides task-level isolation and handles the necessary patching and updating to help provide a secure compute environment. /fargate/faqs/;Which Windows Server versions are supported with AWS Fargate?;Fargate supports the Windows Server 2019 Long-Term Servicing Channel (LTSC) release on Fargate Windows Platform Version 1.0.0 or later. /fargate/faqs/;What is changing?;AWS Fargate is transitioning service quotas from the current Amazon ECS task and Amazon EKS pod count-based concurrent quotas to vCPU-based quotas for On-Demand and Spot usage. The new vCPU-based quotas will replace the existing task and pod count-based quotas. With vCPU-based quotas, we are simplifying the service quotas experience as your accounts’ usage against these quotas is now measured using vCPUs, the primary resource provisioned by your applications. 
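Targeting Graviton2-powered Fargate, as described above, comes down to declaring the ARM64 CPU architecture in the task definition's runtime platform and using an arm64-compatible (or multi-architecture) image. A minimal sketch in Python (boto3) follows; the task family, image, and resource sizes are illustrative assumptions.

    # Minimal sketch: register an ECS task definition that runs on Arm-based (Graviton2) Fargate.
    # Family name, image, and resource sizes are placeholders; the image must support arm64.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.register_task_definition(
        family="web-arm64",                                    # hypothetical task family
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="1024",                                            # 1 vCPU
        memory="2048",                                         # 2 GB
        runtimePlatform={
            "cpuArchitecture": "ARM64",                        # target Graviton2-powered Fargate
            "operatingSystemFamily": "LINUX",
        },
        containerDefinitions=[
            {
                "name": "web",
                "image": "public.ecr.aws/nginx/nginx:latest",  # example multi-architecture image
                "essential": True,
                "portMappings": [{"containerPort": 80}],
            }
        ],
    )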
/fargate/faqs/;How do vCPU-based quotas benefit me?;With vCPU-based quotas, Fargate uses the number of vCPUs provisioned by a task or pod as the quota unit. You can now more easily forecast, manage, and request quotas based on the vCPUs provisioned by your applications. Currently, you manage quotas on Fargate using task and pod count, undifferentiated by the vCPUs your applications need. For example, an account with a service quota of 250 tasks can launch up to 250 tasks, whether each task uses 0.25 vCPU or 4 vCPUs. With the new vCPU-based service quotas, a quota of 1,000 vCPUs allows you to concurrently launch up to 4,000 0.25-vCPU tasks or up to 250 4-vCPU tasks. With vCPU-based quotas, On-Demand tasks or pods and Spot tasks usage against the vCPU quotas are measured in terms of the number of vCPUs configured for your running tasks or pods. /fargate/faqs/;When can I start using vCPU-based quotas?;Fargate provides you the option to opt in to vCPU quotas starting September 8, 2022. By opting in, you give yourself valuable time to make modifications to your limit management tools and minimize the risk of impact to your systems. Starting October 10, 2022, Fargate will automatically begin switching over accounts to use the new vCPU quotas in a phased manner. You will still have the option to opt out of vCPU quotas until the end of October 2022. Starting November 1, 2022, Fargate will switch all remaining accounts to vCPU quotas, regardless of opt-out status, and task and pod count-based quotas will no longer be supported. /fargate/faqs/;How do I opt in and out of vCPU-based quotas?;If you use Amazon ECS with Fargate, you can easily and quickly opt in and opt out of vCPU-based quotas by changing your ECS account setting using the CLI as documented here. If you use Amazon EKS with Fargate, you can file a request in the AWS Support Center console. You opt in or out of the vCPU-based quotas for each of your AWS accounts. Once your request to opt in to vCPU quotas is processed, your task and pod count quotas’ applied limit will be marked as zero on the Service Quotas console, and only your vCPU-based quotas will be displayed. You should then start managing your service quotas using vCPU-based quotas. /fargate/faqs/;Are vCPU-based quotas regional?;Yes. Like task and pod count-based quotas, vCPU-based quotas for an AWS account are on a per-region basis. /fargate/faqs/;How can I view my current task and pod count-based quotas and new vCPU-based quotas?;You can find your current task and pod count quotas on the Service Quotas console and by using the Service Quota API. Starting September 8, 2022, you will be able to view both current task and pod count-based quotas and new vCPU-based quotas on the Service Quotas console. /fargate/faqs/;Will I be able to view actual usage against these new quotas?;Yes. You can track and inspect your vCPU usage against these quotas in Service Quotas. Service Quotas also enables you to configure CloudWatch alarms to warn you when you approach your vCPU-based quotas. /fargate/faqs/;Will the migration to vCPU quotas affect running tasks and pods?;No, opting in to and out of vCPU-based quotas during this transition period will not affect any running tasks or pods. /fargate/faqs/;What if I run into issues with vCPU-based quotas?;If you run into issues with vCPU-based quotas, you can opt back out of vCPU quotas and remediate your systems. However, your account will automatically be transitioned back to vCPU quotas beginning November 2022. 
It is important for you to test your systems with vCPU quotas before November 2022. /fargate/faqs/;What are the changes I should be aware of with the migration to vCPU-based quotas?;If you integrate with the current quotas’ limit exceeded error, we recommend testing your systems before the transition period ends. For instance, with vCPU quotas, Fargate will return a new error message when exceeding your new vCPU quotas. This new error message for On-Demand quotas is: “You’ve reached the limit on the number of vCPUs you can run concurrently” and for Spot quotas is: ”You’ve reached the limit on the number of vCPUs you can run as spot tasks concurrently”. We recommend reviewing your system for changes if you have integration with Service Quotas, Service Quota APIs, or templates. With Amazon CloudWatch metrics integration in Service Quotas, you can monitor Fargate usage against the new vCPU-based quotas by configuring new alarms to warn about approaching quotas. /fargate/faqs/;How can I request a quota increase for vCPU-based quotas?;You continue to request limit increases using the Service Quotas console. To request a limit increase, select “Request Limit Increase” in Service Quota console and state your requirement in vCPUs. If you continue to use task and pod count-based quotas, you can request a limit increase against the existing task and pod count quotas. /fargate/faqs/;Can I still launch the same number of tasks and pods?;Yes, vCPU-based quotas allow you to launch at least the same number of task or pods as you do today with task and pod count-based quotas. If your account already has an approved quota increase, you will continue to be able to launch at least the same number of tasks or pods. Like today, new AWS accounts may start with lower quotas than the default, and these quotas can increase over time. Read our documentation for more details. /fargate/faqs/;What happens to my quotas if I opt out of vCPU quotas during the transition period?;If you decide to opt out during the transition period, your quotas will revert to task and pod count-based limit values you had before you opted in. Note that Fargate will however automatically switch your accounts to vCPU quotas beginning November 1, 2022. /fargate/faqs/;What will happen if I take no action?;Your accounts will automatically begin to use vCPU-based quotas starting October 10, 2022 as we migrate your accounts to vCPU-based quotas in a phased manner. By testing and opting in earlier, you give yourself valuable time to make modifications to your limit management tools and minimize the risk of impact to your systems. /fargate/faqs/;Will these new quotas have an impact on my monthly bill?;No. Fargate’s pricing remains the same regardless of task and pod count-based, or vCPU-based quotas. /fargate/faqs/;With which compliance programs does AWS Fargate conform?;AWS Fargate meets the standards for PCI DSS Level 1, ISO 9001, ISO 27001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, and HIPAA eligibility. /fargate/faqs/;Can I use AWS Fargate for Protected Health Information (PHI) and other HIPAA regulated workloads?;Yes. AWS Fargate is HIPAA-eligible. If you have an executed Business Associate Addendum (BAA) with AWS, you can process encrypted Protected Health Information (PHI) using Docker containers deployed onto Fargate. /fargate/faqs/;Can I use AWS Fargate for US Government-regulated workloads or processing sensitive Controlled Unclassified Information (CUI)?;Yes. Fargate is available in AWS GovCloud (US) Regions. 
AWS GovCloud (US) is Amazon's isolated cloud infrastructure and services designed to address the specific regulatory and compliance requirements of US Government agencies, as well as contractors, educational institutions, and other US customers that run sensitive workloads in the cloud. For a full list of AWS Regions where Fargate is available, please visit our Region table. /fargate/faqs/;What does the AWS Fargate SLA guarantee?;Our Compute SLA guarantees a Monthly Uptime Percentage of at least 99.99% for AWS Fargate. /fargate/faqs/;How do I know if I qualify for an SLA Service Credit?;You are eligible for an AWS Fargate SLA credit under the Compute SLA if more than one Availability Zone in which you are running a task, within the same Region, has a Monthly Uptime Percentage of less than 99.99% during any monthly billing cycle. /lambda/faqs/;What is AWS Lambda?;AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. /lambda/faqs/;What is serverless computing?;Serverless computing allows you to build and run applications and services without thinking about servers. With serverless computing, your application still runs on servers, but all the server management is done by AWS. At the core of serverless computing is AWS Lambda, which lets you run your code without provisioning or managing servers. /lambda/faqs/;What events can trigger an AWS Lambda function?;Please see our documentation for a complete list of event sources. /lambda/faqs/;When should I use AWS Lambda versus Amazon EC2?;Amazon Web Services offers a set of compute services to meet a range of needs. /lambda/faqs/;What kind of code can run on AWS Lambda?;AWS Lambda offers an easy way to accomplish many activities in the cloud. For example, you can use AWS Lambda to build mobile back-ends that retrieve and transform data from Amazon DynamoDB, handlers that compress or transform objects as they are uploaded to Amazon S3, auditing and reporting of API calls made to any Amazon Web Service, and serverless processing of streaming data using Amazon Kinesis. /lambda/faqs/;What languages does AWS Lambda support?;AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows you to use any additional programming languages to author your functions. Please read our documentation on using Node.js, Python, Java, Ruby, C#, Go, and PowerShell. /lambda/faqs/;Can I access the infrastructure that AWS Lambda runs on?;No. AWS Lambda operates the compute infrastructure on your behalf, allowing it to perform health checks, apply security patches, and do other routine maintenance. /lambda/faqs/;How does AWS Lambda isolate my code?;Each AWS Lambda function runs in its own isolated environment, with its own resources and file system view.
AWS Lambda uses the same techniques as Amazon EC2 to provide security and separation at the infrastructure and execution levels. /lambda/faqs/;How does AWS Lambda secure my code?;AWS Lambda stores code in Amazon S3 and encrypts it at rest. AWS Lambda performs additional integrity checks while your code is in use. /lambda/faqs/;What AWS regions are available for AWS Lambda?;Please refer to the AWS Global Infrastructure Region Table. /lambda/faqs/;What is an AWS Lambda function?;The code you run on AWS Lambda is uploaded as a “Lambda function”. Each function has associated configuration information, such as its name, description, entry point, and resource requirements. The code must be written in a “stateless” style, i.e., it should assume there is no affinity to the underlying compute infrastructure. Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request, and any persistent state should be stored in Amazon S3, Amazon DynamoDB, Amazon EFS, or another Internet-available storage service. Lambda functions can include libraries, even native ones. /lambda/faqs/;Will AWS Lambda reuse function instances?;To improve performance, AWS Lambda may choose to retain an instance of your function and reuse it to serve a subsequent request, rather than creating a new copy. To learn more about how Lambda reuses function instances, visit our documentation. Your code should not assume that this will always happen. /lambda/faqs/;What if I need scratch space on disk for my AWS Lambda function?;You can configure each Lambda function with its own ephemeral storage between 512MB and 10,240MB, in 1MB increments. The ephemeral storage is available in each function’s /tmp directory. /lambda/faqs/;How do I configure my application to use AWS Lambda ephemeral storage?;You can configure each Lambda function with its own ephemeral storage between 512MB and 10,240MB, in 1MB increments by using the AWS Lambda console, AWS Lambda API, or AWS CloudFormation template during function creation or update. /lambda/faqs/;Is AWS Lambda ephemeral storage encrypted?;Yes. All data stored in ephemeral storage is encrypted at rest with a key managed by AWS. /lambda/faqs/;What metrics can I use to monitor my AWS Lambda ephemeral storage usage?;You can use Amazon CloudWatch Lambda Insights metrics to monitor your ephemeral storage usage. To learn more, see the Amazon CloudWatch Lambda Insights documentation. /lambda/faqs/;When should I use Amazon S3, Amazon EFS, or AWS Lambda ephemeral storage for my serverless applications?;If your application needs durable, persistent storage, consider using Amazon S3 or Amazon EFS. If your application requires storing data needed by code in a single function invocation, consider using AWS Lambda ephemeral storage as a transient cache. To learn more, please see Choosing between AWS Lambda data storage options in web apps. /lambda/faqs/;Can I use ephemeral storage while Provisioned Concurrency is enabled for my function?;Yes. However, if your application needs persistent storage, consider using Amazon EFS or Amazon S3. When you enable Provisioned Concurrency for your function, your function's initialization code runs during allocation and every few hours, as running instances of your function are recycled. You can see the initialization time in logs and traces after an instance processes a request. However, initialization is billed even if the instance never processes a request.
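Returning to the ephemeral storage configuration mentioned above, a minimal sketch of resizing /tmp through the API might look like the following (boto3 assumed; the function name is a placeholder):

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise a function's /tmp (ephemeral storage) from the 512 MB default to 2,048 MB.
# Size is specified in MB and must be between 512 and 10,240.
lambda_client.update_function_configuration(
    FunctionName="my-function",        # placeholder name
    EphemeralStorage={"Size": 2048},
)
```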
This Provisioned Concurrency initialization behavior may affect how your function interacts with data you store in ephemeral storage, even when your function isn’t processing requests. To learn more about Provisioned Concurrency, please see the relevant documentation. /lambda/faqs/;Why must AWS Lambda functions be stateless?;Keeping functions stateless enables AWS Lambda to rapidly launch as many copies of the function as needed to scale to the rate of incoming events. While AWS Lambda’s programming model is stateless, your code can access stateful data by calling other web services, such as Amazon S3 or Amazon DynamoDB. /lambda/faqs/;Can I use threads and processes in my AWS Lambda function code?;Yes. AWS Lambda allows you to use normal language and operating system features, such as creating additional threads and processes. Resources allocated to the Lambda function, including memory, execution time, disk, and network use, must be shared among all the threads/processes it uses. You can launch processes using any language supported by Amazon Linux. /lambda/faqs/;What restrictions apply to AWS Lambda function code?;Lambda attempts to impose as few restrictions as possible on normal language and operating system activities, but there are a few activities that are disabled: inbound network connections are blocked by AWS Lambda; for outbound connections, only TCP/IP and UDP/IP sockets are supported; and ptrace (debugging) system calls are blocked. TCP port 25 traffic is also blocked as an anti-spam measure. /lambda/faqs/;How do I create an AWS Lambda function using the Lambda console?;If you are using Node.js or Python, you can author the code for your function using the code editor in the AWS Lambda console, which lets you author and test your functions, and view the results of function executions in a robust, IDE-like environment. Go to the console to get started. /lambda/faqs/;How do I create an AWS Lambda function using the Lambda CLI?;You can package the code (and any dependent libraries) as a ZIP and upload it using the AWS CLI from your local environment, or specify an Amazon S3 location where the ZIP file is located. Uploads must be no larger than 50MB (compressed). Visit the Lambda Getting Started guide to get started. /lambda/faqs/;Does AWS Lambda support environment variables?;Yes. You can easily create and modify environment variables from the AWS Lambda console, CLI, or SDKs. To learn more about environment variables, see the documentation. /lambda/faqs/;Can I store sensitive information in environment variables?;For sensitive information, such as database passwords, we recommend you use client-side encryption using AWS Key Management Service and store the resulting values as ciphertext in your environment variable.
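A hedged sketch of that recommendation, assuming boto3 and placeholder names for the KMS key alias, function, and secret value, encrypts the secret client-side and stores the base64-encoded ciphertext as an environment variable:

```python
import base64
import boto3

kms = boto3.client("kms")
lambda_client = boto3.client("lambda")

# Encrypt a database password client-side with a KMS key (alias is a placeholder).
ciphertext = kms.encrypt(
    KeyId="alias/my-app-secrets",
    Plaintext=b"example-password",
)["CiphertextBlob"]

# Store the base64-encoded ciphertext in the function's environment.
lambda_client.update_function_configuration(
    FunctionName="my-function",        # placeholder name
    Environment={
        "Variables": {
            "DB_PASSWORD_CIPHERTEXT": base64.b64encode(ciphertext).decode("utf-8"),
        }
    },
)
# At runtime, the function code must base64-decode this value and call
# kms.decrypt() on it before use.
```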
You will need to include logic in your AWS Lambda function code to decrypt these values. /lambda/faqs/;How can I manage my AWS Lambda functions?;You can easily list, delete, update, and monitor your Lambda functions using the dashboard in the AWS Lambda console. You can also use the AWS CLI and AWS SDK to manage your Lambda functions. Visit the Lambda Developer Guide to learn more. /lambda/faqs/;Can I share code across functions?;Yes, you can package any code (frameworks, SDKs, libraries, and more) as a Lambda Layer and manage and share them easily across multiple functions. /lambda/faqs/;How do I monitor an AWS Lambda function?;AWS Lambda automatically monitors Lambda functions on your behalf, reporting real-time metrics through Amazon CloudWatch, including total requests, account-level and function-level concurrency usage, latency, error rates, and throttled requests. You can view statistics for each of your Lambda functions via the Amazon CloudWatch console or through the AWS Lambda console. You can also call third-party monitoring APIs in your Lambda function. /lambda/faqs/;How do I troubleshoot failures in an AWS Lambda function?;AWS Lambda automatically integrates with Amazon CloudWatch logs, creating a log group for each Lambda function and providing basic application lifecycle event log entries, including logging the resources consumed for each use of that function. You can easily insert additional logging statements into your code. You can also call third-party logging APIs in your Lambda function. Visit Troubleshooting Lambda functions to learn more. Amazon CloudWatch Logs rates will apply. /lambda/faqs/;How do I scale an AWS Lambda function?;You do not have to scale your Lambda functions – AWS Lambda scales them automatically on your behalf. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code. Since your code is stateless, AWS Lambda can start as many copies of your function as needed without lengthy deployment and configuration delays. There are no fundamental limits to scaling a function. AWS Lambda will dynamically allocate capacity to match the rate of incoming events. /lambda/faqs/;How are compute resources assigned to an AWS Lambda function?;In the AWS Lambda resource model, you choose the amount of memory you want for your function, and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128MB of memory and half as much CPU power as choosing 512MB of memory. To learn more, see our Function Configuration documentation. You can set your memory from 128MB to 10,240MB. /lambda/faqs/;How long can an AWS Lambda function execute?;AWS Lambda functions can be configured to run up to 15 minutes per execution. You can set the timeout to any value between 1 second and 15 minutes. /lambda/faqs/;How will I be charged for using AWS Lambda functions?;AWS Lambda is priced on a pay-per-use basis. Please see the AWS Lambda pricing page for details. /lambda/faqs/;Can I save money on AWS Lambda with a Compute Savings Plan?;Yes. In addition to saving money on Amazon EC2 and AWS Fargate, you can also use Compute Savings Plans to save money on AWS Lambda. Compute Savings Plans offer up to 17% discount on Duration, Provisioned Concurrency, and Duration (Provisioned Concurrency). Compute Savings Plans do not offer a discount on Requests in your Lambda bill. 
However, your Compute Savings Plans commitment can apply to Requests at regular rates. /lambda/faqs/;Does AWS Lambda support versioning?;Yes. By default, each AWS Lambda function has a single, current version of the code. Clients of your Lambda function can call a specific version or get the latest implementation. Please read our documentation on versioning Lambda functions. /lambda/faqs/;How long after uploading my code will my AWS Lambda function be ready to call?;Deployment times may vary with the size of your code, but AWS Lambda functions are typically ready to call within seconds of upload. /lambda/faqs/;Can I use my own version of a supported library?;Yes. You can include your own copy of a library (including the AWS SDK) in order to use a different version than the default one provided by AWS Lambda. /lambda/faqs/;How does tiered pricing work?;AWS Lambda offers discounted pricing tiers for monthly on-demand function duration above certain thresholds. Tiered pricing is available for functions running on both x86 and Arm architectures. Lambda pricing tiers are applied to aggregate monthly on-demand duration of your functions running on the same architecture (x86 or Arm, respectively), in the same region, within the account. If you’re using consolidated billing in AWS Organizations, pricing tiers are applied to the aggregate monthly duration of your functions running on the same architecture, in the same region, across the accounts in the organization. For example, if you are running x86 Lambda functions in the US East (Ohio) region, you will pay $0.0000166667 for every GB-second for the first 6 billion GB-seconds per month, $0.0000150000 for every GB-second for the next 9 billion GB-seconds per month, and $0.0000133334 for every GB-second over 15 billion GB-seconds per month, in that region. Pricing for Requests, Provisioned Concurrency, and Provisioned Concurrency Duration remains unchanged. For more information, please see AWS Lambda Pricing. /lambda/faqs/;Can I take advantage of both tiered pricing, and Compute Savings Plans?;Yes. Lambda usage that is covered by your hourly savings plan commitment is billed at the applicable CSP rate and discount. The remaining usage that is not covered by this commitment will be billed at the rate corresponding to the tier your monthly aggregate function duration falls in. /lambda/faqs/;What is an event source?;An event source is an AWS service or developer-created application that produces events that trigger an AWS Lambda function to run. Some services publish these events to Lambda by invoking the cloud function directly (for example, Amazon S3). Lambda can also poll resources in other services that do not publish events to Lambda. For example, Lambda can pull records from an Amazon Kinesis stream or an Amazon SQS queue and execute a Lambda function for each fetched message. /lambda/faqs/;What event sources can be used with AWS Lambda?;Please see our documentation for a complete list of event sources. /lambda/faqs/;How are events represented in AWS Lambda?;Events are passed to a Lambda function as an event input parameter. For event sources where events arrive in batches, such as Amazon SQS, Amazon Kinesis, and Amazon DynamoDB Streams, the event parameter may contain multiple events in a single call, based on the batch size you request. To learn more about Amazon S3 event notifications, visit Configuring Notifications for Amazon S3 Events. To learn more about Amazon DynamoDB Streams, visit the DynamoDB Stream Developers Guide. 
To learn more about invoking Lambda functions using Amazon SNS, visit the Amazon SNS Developer Guide. For more information on Amazon Cognito events, visit Amazon Cognito. For more information on AWS CloudTrail logs and auditing API calls across AWS services, see AWS CloudTrail. /lambda/faqs/;How do I make an AWS Lambda function respond to changes in an Amazon S3 bucket?;From the AWS Lambda console, you can select a function and associate it with notifications from an Amazon S3 bucket. Alternatively, you can use the Amazon S3 console and configure the bucket’s notifications to send to your AWS Lambda function. This same functionality is also available through the AWS SDK and CLI. /lambda/faqs/;How do I make an AWS Lambda function respond to updates in an Amazon DynamoDB table?;You can trigger a Lambda function on DynamoDB table updates by subscribing your Lambda function to the DynamoDB Stream associated with the table. You can associate a DynamoDB Stream with a Lambda function using the Amazon DynamoDB console, the AWS Lambda console, or Lambda’s registerEventSource API. /lambda/faqs/;How do I use an AWS Lambda function to process records in an Amazon Kinesis stream?;From the AWS Lambda console, you can select a Lambda function and associate it with an Amazon Kinesis stream owned by the same account. This same functionality is also available through the AWS SDK and CLI. /lambda/faqs/;How does AWS Lambda process data from Amazon Kinesis streams and Amazon DynamoDB Streams?;The Amazon Kinesis and DynamoDB Streams records sent to your AWS Lambda function are strictly serialized, per shard. This means that if you put two records in the same shard, Lambda guarantees that your Lambda function will be successfully invoked with the first record before it is invoked with the second record. If the invocation for one record times out, is throttled, or encounters any other error, Lambda will retry until it succeeds (or the record reaches its 24-hour expiration) before moving on to the next record. The ordering of records across different shards is not guaranteed, and processing of each shard happens in parallel. /lambda/faqs/;How should I choose between AWS Lambda and Amazon Kinesis Data Analytics for my analytics needs?;AWS Lambda allows you to perform time-based aggregations (such as count, max, sum, average, etc.) over a short window of up to 15 minutes for your data in Amazon Kinesis or Amazon DynamoDB Streams over a single logical partition such as a shard. This gives you the option to easily set up simple analytics for your event-based application without adding architectural complexity, as your business and analytics logic can be located in the same function. Lambda allows aggregations over a maximum of a 15-minute tumbling window, based on the event timestamp. Amazon Kinesis Data Analytics allows you to build more complex analytics applications that support flexible processing choices and robust fault-tolerance with exactly-once processing without duplicates, and analytics that can be performed over an entire data stream across multiple logical partitions. With KDA, you can analyze data over multiple types of aggregation windows (tumbling window, stagger window, sliding window, session window) using either the event time or the processing time. /lambda/faqs/;How do I use an AWS Lambda function to respond to notifications sent by Amazon Simple Notification Service (SNS)?;From the AWS Lambda console, you can select a Lambda function and associate it with an Amazon SNS topic.
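As an illustration of wiring this up through the SDK rather than the console, the sketch below (boto3 assumed; both ARNs are placeholders) grants the SNS topic permission to invoke the function and then subscribes the function to the topic:

```python
import boto3

lambda_client = boto3.client("lambda")
sns = boto3.client("sns")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:process-alerts"  # placeholder
topic_arn = "arn:aws:sns:us-east-1:123456789012:alerts"                         # placeholder

# Allow the SNS topic to invoke the function...
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="sns-invoke",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)

# ...then subscribe the function to the topic so each published message triggers it.
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)
```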
This same functionality is also available through the AWS SDK and CLI. /lambda/faqs/;How do I use an AWS Lambda function to respond to emails sent by Amazon Simple Email Service (SES)?;From the Amazon SES Console, you can set up your receipt rule to have Amazon SES deliver your messages to an AWS Lambda function. The same functionality is available through the AWS SDK and CLI. /lambda/faqs/;How do I use an AWS Lambda function to respond to Amazon CloudWatch alarms?;First, configure the alarm to send Amazon SNS notifications. Then from the AWS Lambda console, select a Lambda function and associate it with that Amazon SNS topic. See the Amazon CloudWatch Developer Guide for more on setting up Amazon CloudWatch alarms. /lambda/faqs/;How do I use an AWS Lambda function to respond to changes in user or device data managed by Amazon Cognito?;From the AWS Lambda console, you can select a function to trigger when any datasets associated with an Amazon Cognito identity pool are synchronized. This same functionality is also available through the AWS SDK and CLI. Visit Amazon Cognito for more information on using Amazon Cognito to share and synchronize data across a user’s devices. /lambda/faqs/;How can my application trigger an AWS Lambda function directly?;You can invoke a Lambda function using a custom event through AWS Lambda’s invoke API. Only the function owner or another AWS account that the owner has granted permission can invoke the function. Visit the Lambda Developers Guide to learn more. /lambda/faqs/;What is the latency of invoking an AWS Lambda function in response to an event?;AWS Lambda is designed to process events within milliseconds. Latency will be higher immediately after a Lambda function is created, updated, or if it has not been used recently. /lambda/faqs/;How do I create a mobile backend using AWS Lambda?;You upload the code you want AWS Lambda to execute and then invoke it from your mobile app using the AWS Lambda SDK included in the AWS Mobile SDK. You can make both direct (synchronous) calls to retrieve or check data in real time, as well as asynchronous calls. You can also define a custom API using Amazon API Gateway and invoke your Lambda functions through any REST compatible client. To learn more about the AWS Mobile SDK, visit the AWS Mobile SDK page. To learn more about Amazon API Gateway, visit the Amazon API Gateway page. /lambda/faqs/;How do I invoke an AWS Lambda function over HTTPS?;You can invoke a Lambda function over HTTPS by defining a custom RESTful API using Amazon API Gateway. This gives you an endpoint for your function which can respond to REST calls like GET, PUT, and POST. Read more about using AWS Lambda with Amazon API Gateway. /lambda/faqs/;How can my AWS Lambda function customize its behavior to the device and app making the request?;When called through the AWS Mobile SDK, AWS Lambda functions automatically gain insight into the device and application that made the call through the ‘context’ object. /lambda/faqs/;How can my AWS Lambda function personalize its behavior based on the identity of the end-user of an application?;When your app uses the Amazon Cognito identity, end users can authenticate themselves using a variety of public login providers such as Amazon, Facebook, Google, and other OpenID Connect-compatible services.
User identity is then automatically and securely presented to your Lambda function in the form of an Amazon Cognito ID, allowing it to access user data from Amazon Cognito, or as a key to store and retrieve data in Amazon DynamoDB or other web services. /lambda/faqs/;How do I create an Alexa skill using AWS Lambda?;AWS Lambda is integrated with the Alexa Skills Kit, a collection of self-service APIs, tools, documentation, and code samples that make it easy for you to create voice-driven capabilities (or “skills”) for Alexa. You simply upload the Lambda function code for the new Alexa skill you are creating, and AWS Lambda does the rest, executing the code in response to Alexa voice interactions and automatically managing the compute resources on your behalf. Read the Alexa Skills Kit documentation for more details. /lambda/faqs/;What happens if my function fails while processing an event?;For Amazon S3 bucket notifications and custom events, AWS Lambda will attempt execution of your function three times in the event of an error condition in your code or if you exceed a service or resource limit. For ordered event sources that AWS Lambda polls on your behalf, such as Amazon DynamoDB Streams and Amazon Kinesis streams, Lambda will continue attempting execution in the event of a developer code error until the data expires. You can monitor progress through the Amazon Kinesis and Amazon DynamoDB consoles and through the Amazon CloudWatch metrics that AWS Lambda generates for your function. You can also set Amazon CloudWatch alarms based on error or execution throttling rates. /lambda/faqs/;What is a serverless application?;Lambda-based applications (also referred to as serverless applications) are composed of functions triggered by events. A typical serverless application consists of one or more functions triggered by events such as object uploads to Amazon S3, Amazon SNS notifications, or API actions. These functions can stand alone or leverage other resources such as DynamoDB tables or Amazon S3 buckets. The most basic serverless application is simply a function. /lambda/faqs/;How do I deploy and manage a serverless application?;"You can deploy and manage your serverless applications using the AWS Serverless Application Model (AWS SAM). AWS SAM is a specification that prescribes the rules for expressing serverless applications on AWS. This specification aligns with the syntax used by AWS CloudFormation today and is supported natively within AWS CloudFormation as a set of resource types (referred to as ""serverless resources""). These resources make it easier for AWS customers to use CloudFormation to configure and deploy serverless applications using existing CloudFormation APIs." /lambda/faqs/;How can I discover existing serverless applications developed by the AWS community?;You can choose from a collection of serverless applications published by developers, companies, and partners in the AWS community with the AWS Serverless Application Repository. After finding an application, you can configure and deploy it straight from the Lambda console. /lambda/faqs/;How do I automate deployment for a serverless application?;You can automate your serverless application release process using AWS CodePipeline and AWS CodeDeploy. CodePipeline is a continuous delivery service that enables you to model, visualize and automate the steps required to release your serverless application. CodeDeploy provides a deployment automation engine for your Lambda-based applications.
CodeDeploy lets you orchestrate deployments according to established best-practice methodologies such as canary and linear deployments, and helps you establish the necessary guardrails to verify that newly-deployed code is safe, stable, and ready to be fully released to production. /lambda/faqs/;How do I get started on building a serverless application?;To get started, visit the AWS Lambda console and download one of our blueprints. The file you download will contain an AWS SAM file (which defines the AWS resources in your application) and a .ZIP file (which includes your function code). You can then use AWS CloudFormation commands to package and deploy the serverless application that you just downloaded. For more details, visit our documentation. /lambda/faqs/;How do I coordinate calls between multiple AWS Lambda functions?;You can use AWS Step Functions to coordinate a series of AWS Lambda functions in a specific order. You can invoke multiple Lambda functions sequentially, passing the output of one to the other, and/or in parallel, and Step Functions will maintain state during executions for you. /lambda/faqs/;How do I troubleshoot a serverless application?;You can enable your Lambda function for tracing with AWS X-Ray by adding X-Ray permissions to your Lambda function execution role and changing your function “tracing mode” to “active.” When X-Ray is enabled for your Lambda function, AWS Lambda will emit tracing information to X-Ray regarding the Lambda service overhead incurred when invoking your function. This will provide you with insights such as Lambda service overhead, function init time, and function execution time. In addition, you can include the X-Ray SDK in your Lambda deployment package to create your own trace segments, annotate your traces, or view trace segments for downstream calls made from your Lambda function. X-Ray SDKs are currently available for Node.js and Java. Visit Troubleshooting Lambda-based applications to learn more. AWS X-Ray rates will apply. /lambda/faqs/;Can I build serverless applications that connect to relational databases?;Yes. You can build highly scalable, secure, Lambda-based serverless applications that connect to relational databases using Amazon RDS Proxy, a highly available database proxy that manages thousands of concurrent connections to relational databases. Currently, RDS Proxy supports MySQL and Aurora databases. You can begin using RDS Proxy through the Amazon RDS console or the AWS Lambda console. Serverless applications that use fully managed connection pools from RDS Proxy will be billed according to RDS Proxy Pricing. /lambda/faqs/;How is AWS SAM licensed?;The specification is open sourced under Apache 2.0, which allows you and others to adopt and incorporate AWS SAM into build, deployment, monitoring, and management tools with a commercial-friendly license. You can access the AWS SAM repository on GitHub here. /lambda/faqs/;What is Container Image Support for AWS Lambda?;AWS Lambda now enables you to package and deploy functions as container images. Customers can leverage the flexibility and familiarity of container tooling, and the agility and operational simplicity of AWS Lambda to build applications. /lambda/faqs/;How can I use Container Image Support for AWS Lambda?;You can start with either an AWS-provided base image for Lambda or one of your preferred community or private enterprise images.
Then, simply use Docker CLI to build the image, upload it to Amazon ECR, and then create the function by using all familiar Lambda interfaces and tools, such as the AWS Management Console, the AWS CLI, the AWS SDK, AWS SAM, and AWS CloudFormation. /lambda/faqs/;Which container image types are supported?;You can deploy third-party Linux base images (e.g. Alpine or Debian) to Lambda in addition to the Lambda provided images. AWS Lambda will support all images based on the following image manifest formats: Docker Image Manifest V2 Schema 2 (used with Docker version 1.10 and newer) or Open Container Initiative (OCI) Spec (v1.0 and up). Lambda supports images with a size of up to 10GB. /lambda/faqs/;What base images can I use?;AWS Lambda provides a variety of base images customers can extend, and customers can also use their preferred Linux-based images with a size of up to 10GB. /lambda/faqs/;What container tools can I use to package and deploy functions as container images?;You can use any container tooling as long as it supports one of the following container image manifest formats: Docker Image Manifest V2 Schema 2 (used with Docker version 1.10 and newer) or Open Container Initiative (OCI) Specifications (v1.0 and up). For example, you can use native container tools (i.e. docker run, docker compose, Buildah and Packer) to define your functions as a container image and deploy to Lambda. /lambda/faqs/;What AWS Lambda features are available to functions deployed as container images?;All existing AWS Lambda features, with the exception of Lambda layers and Code Signing, can be used with functions deployed as container images. Once deployed, AWS Lambda will treat an image as immutable. Customers can use container layers during their build process to include dependencies. /lambda/faqs/;Will AWS Lambda patch and update my deployed container image?;Not at this time. Your image, once deployed to AWS Lambda, will be immutable. The service will not patch or update the image. However, AWS Lambda will publish curated base images for all supported runtimes that are based on the Lambda managed environment. These published images will be patched and updated along with updates to the AWS Lambda managed runtimes. You can pull and use the latest base image from DockerHub or Amazon ECR Public, re-build your container image and deploy to AWS Lambda via Amazon ECR. This allows you to build and test the updated images and runtimes, prior to deploying the image to production. /lambda/faqs/;What are the differences between functions created using ZIP archives vs. container images?;There are three main differences between functions created using ZIP archives vs. container images: /lambda/faqs/;Is there a performance difference between functions defined as zip and container images?;No - AWS Lambda ensures that the performance profiles for functions packaged as container images are the same as for those packaged as ZIP archives, including typically sub-second start up times. /lambda/faqs/;How will I be charged for deploying Lambda functions as container images?;There is no additional charge for packaging and deploying functions as container images to AWS Lambda. When you invoke your function deployed as a container image, you pay the regular price for requests and execution duration. To learn more, visit AWS Lambda pricing. You will be charged for storing your container images in Amazon ECR at the standard ECR prices. To learn more, visit Amazon ECR pricing. 
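Assuming the image has already been built and pushed to Amazon ECR as described above, a minimal boto3 sketch of the final step (function name, image URI, and role ARN are placeholders) looks like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Create a function from a container image stored in Amazon ECR.
lambda_client.create_function(
    FunctionName="image-based-function",   # placeholder name
    PackageType="Image",
    Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda-image:latest"},
    Role="arn:aws:iam::123456789012:role/lambda-execution-role",
    Timeout=30,
    MemorySize=512,
)
```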
/lambda/faqs/;What is the Lambda Runtime Interface Emulator (RIE)?;The Lambda Runtime Interface Emulator is a proxy for the Lambda Runtime API,which allows customers to locally test their Lambda function packaged as a container image. It is a lightweight web server that converts HTTP requests to JSON events and emulates the Lambda Runtime API. It allows you to locally test your functions using familiar tools such as cURL and the Docker CLI (when testing functions packaged as container images). It also simplifies running your application on additional compute services. You can include the Lambda Runtime Interface Emulator in your container image to have it accept HTTP requests natively instead of the JSON events required for deployment to Lambda. This component does not emulate the Lambda orchestrator, or security and authentication configurations. The Runtime Interface Emulator is open sourced on GitHub. You can get started by downloading and installing it on your local machine. /lambda/faqs/;What function behaviors can I test locally with the emulator?;You can use the emulator to test if your function code is compatible with the Lambda environment, runs successfully, and provides the expected output. For example, you can mock test events from different event sources. You can also use it to test extensions and agents built into the container image against the Lambda Extensions API. /lambda/faqs/;How does the Runtime Interface Emulator (RIE) help me run my Lambda compatible image on additional compute services?;Customers can add the Runtime Interface Emulator as the entry point to the container image or package it as a sidecar to ensure the container image now accepts HTTP requests instead of JSON events. This simplifies the changes required to run their container image on additional compute services. Customers will be responsible for ensuring they follow all security, performance, and concurrency best practices for their chosen environment. RIE is pre-packaged into the AWS Lambda provided images, and is available by default in AWS SAM CLI. Base image providers can use the documentation to provide the same experience for their base images. /lambda/faqs/;How can I deploy my existing containerized application to AWS Lambda?;You can deploy a containerized application to AWS Lambda if it meets the below requirements: /lambda/faqs/;How do I choose between Lambda SnapStart and Provisioned Concurrency (PC)?;Lambda SnapStart is a performance optimization that helps your Java functions to achieve up to 10x faster start-up times by reducing the variable latency incurred during execution of one-time initialization code. Lambda SnapStart works broadly across all functions in your application or account at no additional cost. When a customer publishes a function version with Lambda SnapStart, the function’s code is initialized ahead of time, instead of being initialized on the first invoke. Lambda then takes a snapshot of the initialized execution environment and persists it in a tiered cache for low-latency access. When the function is first invoked and then scaled, Lambda resumes the function from the cached snapshot instead of initializing from scratch, driving a lower startup latency. While Lambda SnapStart reduces startup latency, it works as a best-effort optimization, and does not guarantee elimination of cold starts. If your application has strict latency requirements and requires double-digit millisecond startup times, we recommend you use PC. 
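A hedged sketch of enabling SnapStart on a (Java) function and publishing a version so Lambda can snapshot the initialized environment, assuming boto3 and a placeholder function name:

```python
import boto3

lambda_client = boto3.client("lambda")

# Enable SnapStart for published versions of the function.
lambda_client.update_function_configuration(
    FunctionName="my-java-function",               # placeholder name
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Wait for the configuration update to finish, then publish a version;
# Lambda initializes it ahead of time and caches the snapshot.
lambda_client.get_waiter("function_updated_v2").wait(FunctionName="my-java-function")
version = lambda_client.publish_version(FunctionName="my-java-function")
print("SnapStart-enabled version:", version["Version"])
```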
/lambda/faqs/;Can I enable both Lambda SnapStart and PC on the same function?;No. Lambda SnapStart and PC cannot be enabled at the same time, on the same function. /lambda/faqs/;Does the process of caching and resuming from snapshots introduce software compatibility considerations?; No. There's no additional cost for enabling Lambda SnapStart. You are charged based on the number of requests for your functions and the duration your code executes based on current Lambda Pricing. Duration charges apply to code that runs in the handler of a function and runtime hooks, as well as initialization code that is declared outside of the handler. Please note that AWS Lambda may periodically recycle execution environments with security patches and rerun your initialization code. For more details, see the Lambda Programming Model documentation. /lambda/faqs/;Can I execute my own code before a snapshot is created or when the function is resumed from snapshot?; With Lambda SnapStart, Lambda keeps a snapshot of the initialized execution environment for the last three published function versions, as long as the published versions continue to receive invokes. The snapshot associated with a published function version expires if it remains inactive for more than 14 days. /lambda/faqs/;Will I be charged for Lambda SnapStart?; With Lambda SnapStart, Lambda keeps a snapshot of the initialized execution environment for the last three published function versions, as long as the published versions continue to receive invokes. The snapshot associated with a published function version expires if it remains inactive for more than 14 days. /lambda/faqs/;How can I encrypt the snapshots of initialized execution environment created by Lambda SnapStart?; The maximum allowed initialization duration for Lambda SnapStart will match the execution timeout duration you have configured for your function. The maximum configurable execution timeout limit for a function is 15 minutes. /lambda/faqs/;What is AWS Lambda Provisioned Concurrency?;Provisioned Concurrency gives you greater control over the performance of your serverless applications. When enabled, Provisioned Concurrency keeps functions initialized and hyper-ready to respond in double-digit milliseconds. /lambda/faqs/;How do I set up and manage Provisioned Concurrency?;You can configure concurrency on your function through the AWS Management Console, the Lambda API, the AWS CLI, and AWS CloudFormation. The simplest way to benefit from Provisioned Concurrency is by using AWS Auto Scaling. You can use Application Auto Scaling to configure schedules, or have Auto Scaling automatically adjust the level of Provisioned Concurrency in real time as demand changes. To learn more about Provisioned Concurrency, see the documentation. /lambda/faqs/;Do I need to change my code if I want to use Provisioned Concurrency?;You don’t need to make any changes to your code to use Provisioned Concurrency. It works seamlessly with all existing functions and runtimes. There is no change to the invocation and execution model of Lambda when using Provisioned Concurrency. /lambda/faqs/;How will I be charged for Provisioned Concurrency?;Provisioned Concurrency adds a pricing dimension, of ‘Provisioned Concurrency’, for keeping functions initialized. When enabled, you pay for the amount of concurrency that you configure and for the period of time that you configure it. When your function executes while Provisioned Concurrency is configured on it, you also pay for Requests and execution Duration. 
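For the Provisioned Concurrency setup described above, a minimal sketch (boto3 assumed; the function name and alias are placeholders, and the target must be a version or alias rather than $LATEST) is:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 50 execution environments initialized for the "live" alias of a function.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-api",   # placeholder name
    Qualifier="live",              # placeholder alias
    ProvisionedConcurrentExecutions=50,
)

# Check allocation progress.
status = lambda_client.get_provisioned_concurrency_config(
    FunctionName="checkout-api",
    Qualifier="live",
)
print(status["Status"], status.get("AllocatedProvisionedConcurrentExecutions"))
```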
To learn more about the pricing of Provisioned Concurrency, see AWS Lambda Pricing. /lambda/faqs/;When should I use Provisioned Concurrency?;Provisioned Concurrency is ideal for building latency-sensitive applications, such as web or mobile backends, synchronously invoked APIs, and interactive microservices. You can easily configure the appropriate amount of concurrency based on your application's unique demand. You can increase the amount of concurrency during times of high demand and lower it, or turn it off completely, when demand decreases. /lambda/faqs/;What happens if a function receives invocations above the configured level of Provisioned Concurrency?;If the concurrency of a function reaches the configured level, subsequent invocations of the function have the latency and scale characteristics of regular Lambda functions. You can restrict your function to only scale up to the configured level. Doing so prevents the function from exceeding the configured level of Provisioned Concurrency. This is a mechanism to prevent undesired variability in your application when demand exceeds the anticipated amount. /lambda/faqs/;What are AWS Lambda functions powered by Graviton2 processors?;AWS Lambda allows you to run your functions on either x86-based or Arm-based processors. AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores to deliver increased price performance for your cloud workloads. Customers get the same advantages of AWS Lambda, running code without provisioning or managing servers, automatic scaling, high availability, and only paying for the resources you consume. /lambda/faqs/;Why should I use AWS Lambda functions powered by Graviton2 processors?;AWS Lambda functions powered by Graviton2, using an Arm-based processor architecture designed by AWS, are designed to deliver up to 34% better price performance compared to functions running on x86 processors, for a variety of serverless workloads, such as web and mobile backends, data, and stream processing. With lower latency, up to 19% better performance, a 20% lower cost, and the highest power-efficiency currently available at AWS, Graviton2 functions can power mission critical serverless applications. Customers can configure both existing and new functions to target the Graviton2 processor. They can deploy functions running on Graviton2 as either zip files or container images. /lambda/faqs/;How do I configure my functions to run on Graviton2 processors?;You can configure functions to run on Graviton2 through the AWS Management Console, the AWS Lambda API, the AWS CLI, and AWS CloudFormation by setting the architecture flag to ‘arm64’ for your function. /lambda/faqs/;How do I deploy my application built using functions powered by Graviton2 processors?;There is no change between x86-based and Arm-based functions. Simply upload your code via the AWS Management Console, zip file, or container image, and AWS Lambda automatically runs your code when triggered, without requiring you to provision or manage infrastructure. /lambda/faqs/;Can an application use both functions powered by Graviton2 processors and x86 processors?;An application can contain functions running on both architectures. AWS Lambda allows you to change the architecture (‘x86_64’ or ‘arm64’) of your function’s current version. Once you create a specific version of your function, the architecture cannot be changed. /lambda/faqs/;Does AWS Lambda support multi-architecture container images?;No. 
Each function version can only use a single container image. /lambda/faqs/;Can I create AWS Lambda Layers that target functions powered by AWS Graviton2 processors?;Yes. Layers and extensions can be targeted to ‘x86_64’ or ‘arm64’ compatible architectures. The default architecture for functions and layers is ‘x86_64’. /lambda/faqs/;What languages and runtimes are supported by Lambda functions running on Graviton2 processors?;At launch, customers can use Python, Node.js, Java, Ruby, .Net Core, Custom Runtime (provided.al2), and OCI Base images. To learn more, please see the AWS Lambda Runtimes. /lambda/faqs/;What is the pricing of AWS Lambda functions powered by AWS Graviton2 processors? Does the AWS Lambda free tier apply to functions powered by Graviton2?;AWS Lambda functions powered by AWS Graviton2 processors are 20% cheaper compared to x86-based Lambda functions. The Lambda free tier applies to AWS Lambda functions powered by x86 and Arm-based architectures. /lambda/faqs/;How do I choose between running my functions on Graviton2 processors or x86 processors?;Each workload is unique and we recommend customers test their functions to determine the price performance improvement they might see. To do that, we recommend using the AWS Lambda Power Tuning tool. We recommend starting with web and mobile backends, data, and stream processing when testing your workloads for potential price performance improvements. /lambda/faqs/;Do I need an Arm-based development machine to create, build, and test functions powered by Graviton2 processors locally?;Interpreted languages like Python, Java, and Node generally do not require recompilation unless your code references libraries that use architecture specific components. In those cases, you would need to provide the libraries targeted to arm64. For more details, please see the Getting started with AWS Graviton page. Non-interpreted languages will require compiling your code to target arm64. While more modern compilers will produce compiled code for arm64, you will need to deploy it into an arm-based environment to test. To learn more about using Lambda functions with Graviton2, please see the documentation. /lambda/faqs/;What is Amazon EFS for AWS Lambda?;With Amazon Elastic File System (Amazon EFS) for AWS Lambda, customers can securely read, write and persist large volumes of data at virtually any scale using a fully managed elastic NFS file system that can scale on demand without the need for provisioning or capacity management. Previously, developers added code to their functions to download data from S3 or databases to local temporary storage, limited to 512MB. With EFS for Lambda, developers don't need to write code to download data to temporary storage in order to process it. /lambda/faqs/;How do I set up Amazon EFS for Lambda?;Developers can easily connect an existing EFS file system to a Lambda function via an EFS Access Point by using the console, CLI, or SDK. When the function is first invoked, the file system is automatically mounted and made available to function code. You can learn more in the documentation. /lambda/faqs/;Do I need to configure my function with VPC settings before I can use my Amazon EFS file system?;Yes. Mount targets for Amazon EFS are associated with a subnet in a VPC. The AWS Lambda function needs to be configured to access that VPC. 
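A hedged sketch of that VPC and EFS wiring, assuming boto3 and placeholder subnet, security group, and access point identifiers (the mount path must begin with /mnt/):

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach a VPC configuration and an EFS access point to an existing function.
lambda_client.update_function_configuration(
    FunctionName="ml-inference",   # placeholder name
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234"],
        "SecurityGroupIds": ["sg-0def5678"],
    },
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/models",
        }
    ],
)
```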
/lambda/faqs/;Who should use Amazon EFS for Lambda?;Using EFS for Lambda is ideal for building machine learning applications or loading large reference files or models, processing or backing up large amounts of data, hosting web content, or developing internal build systems. Customers can also use EFS for Lambda to keep state between invocations within a stateful microservice architecture, in a Step Functions workflow, or sharing files between serverless applications and instance or container-based applications. /lambda/faqs/;Will my data be encrypted in transit?;Yes. Data encryption in transit uses industry-standard Transport Layer Security (TLS) 1.2 to encrypt data sent between AWS Lambda functions and the Amazon EFS file systems. /lambda/faqs/;Is my data encrypted at rest?;Customers can provision Amazon EFS to encrypt data at rest. Data encrypted at rest is transparently encrypted while being written, and transparently decrypted while being read, so you don’t have to modify your applications. Encryption keys are managed by the AWS Key Management Service (KMS), eliminating the need to build and maintain a secure key management infrastructure. /lambda/faqs/;How will I be charged for Amazon EFS for AWS Lambda?;There is no additional charge for using Amazon EFS for AWS Lambda. Customers pay the standard price for AWS Lambda and for Amazon EFS. When using Lambda and EFS in the same availability zone, customers are not charged for data transfer. However, if they use VPC peering for Cross-Account access, they will incur data transfer charges. To learn more, please see Pricing. /lambda/faqs/;Can I associate more than one Amazon EFS file system with my AWS Lambda function?;No. Each Lambda function will be able to access one EFS file system. /lambda/faqs/;Can I use the same Amazon EFS file system across multiple functions, containers, and instances?;Yes. Amazon EFS supports Lambda functions, ECS and Fargate containers, and EC2 instances. You can share the same file system and use IAM policy and Access Points to control what each function, container, or instance has access to. /lambda/faqs/;What is AWS Lambda Extensions?;AWS Lambda Extensions lets you integrate Lambda with your favorite tools for monitoring, observability, security, and governance. Extensions enable you and your preferred tooling vendors to plug into Lambda’s lifecycle and integrate more deeply into the Lambda execution environment. /lambda/faqs/;How do Lambda extensions work?;Extensions are companion processes that run within Lambda’s execution environment which is where your function code is executed. In addition, they can run outside of the function invocation - i.e. they start before the function is initialized, run in parallel with the function, can run after the function execution is complete, and can also run before the Lambda service shuts down the execution environment. /lambda/faqs/;What can I use Lambda extensions for?;You can use extensions for your favorite tools for monitoring, observability, security, and governance from AWS as well as the following partners: AppDynamics, Coralogix, Datadog, Dynatrace, Epsagon, HashiCorp, Honeycomb, Imperva, Lumigo, Check Point CloudGuard, New Relic, Thundra, Splunk, Sentry, Site24x7, Sumo Logic, AWS AppConfig, Amazon CodeGuru Profiler, Amazon CloudWatch Lambda Insights, AWS Distro for OpenTelemetry. To learn more about these extensions, visit the launch blog post. 
/lambda/faqs/;How do I set up and manage Lambda extensions?;You can deploy extensions, using Layers, on one or more Lambda functions using the Console, CLI, or Infrastructure as Code tools such as CloudFormation, the AWS Serverless Application Model, and Terraform. To get started, visit the documentation. /lambda/faqs/;What runtimes can I use AWS Lambda extensions with?;You can view the list of runtimes that support extensions here. /lambda/faqs/;Do Extensions count towards the deployment package limit?;Yes, the total unzipped size of the function and all Extensions cannot exceed the unzipped deployment package size limit of 250 MB. /lambda/faqs/;Is there a performance impact of using an extension?;Extensions may impact the performance of your function because they share resources such as CPU, memory, and storage with the function, and because extensions are initialized before function code. For example, if an extension performs compute-intensive operations, you may see your function execution duration increase because the extension and your function code share the same CPU resources. Because Lambda allocates proportional CPU based on the memory setting you choose, you may see increased execution and initialization duration at lower memory settings as more processes compete for the same CPU resources. /lambda/faqs/;How will I be charged for using Lambda extensions?;Extensions share the same billing model as Lambda functions. When using Lambda functions with extensions, you pay for requests served and the combined compute time used to run your code and all extensions, in 1ms increments. You will be charged for compute time as per existing Lambda duration pricing. To learn more, see AWS Lambda pricing. /lambda/faqs/;Can I create my own custom Lambda extensions?;Yes, by using the AWS Lambda Runtime Extensions API. Visit the documentation to learn more. /lambda/faqs/;How do extensions work while Provisioned Concurrency is enabled?;Provisioned Concurrency keeps functions initialized and ready to respond in double-digit milliseconds. When enabled, Provisioned Concurrency will also initialize extensions and keep them ready to execute alongside function code. /lambda/faqs/;What permissions do extensions have?;Because Extensions are executed within the same environment as a Lambda function, they have access to the same resources as the function, and permissions are shared between the function and the extension. Therefore they share credentials, role, and environment variables. Extensions have read-only access to function code, and can read and write in /tmp. /lambda/faqs/;What is the AWS Lambda Telemetry API?;The AWS Lambda Telemetry API enables you to use extensions to capture enhanced monitoring and observability data directly from Lambda and send it to a destination of your choice. /lambda/faqs/;How does the Telemetry API work?;The Lambda service automatically captures and streams telemetry data to Amazon CloudWatch and AWS X-Ray. The Telemetry API provides a simple HTTP or TCP interface for extensions to receive the same telemetry data along with Lambda execution environment lifecycle events, and function invocation-level metrics. Extensions can use the Telemetry API to consume these telemetry streams directly from Lambda, and then process, filter, and send them to any preferred destination. 
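Since extensions are distributed and attached as layers, a minimal boto3 sketch of adding one to a function (the layer ARN and function name are placeholders; note that this call replaces the function's existing layer list) is:

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach an extension, delivered as a layer, to a function.
# update_function_configuration replaces the full Layers list, so include any
# layers the function already uses.
lambda_client.update_function_configuration(
    FunctionName="orders-service",   # placeholder name
    Layers=[
        "arn:aws:lambda:us-east-1:123456789012:layer:my-observability-extension:3",
    ],
)
```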
/lambda/faqs/;How do I get started with using the Telemetry API?;You can deploy Telemetry API enabled extensions for your Lambda functions using AWS Lambda Console, AWS CLI, or Infrastructure as Code tools such as AWS CloudFormation, AWS Serverless Application Model (SAM), and Terraform. You do not have to make code changes to use a Telemetry API enabled extension with your Lambda function. Simply add an extension from the tooling provider of your choice to your Lambda function. To get started with extensions from APN Partners, follow the links provided in the launch blog post. You can also build your own extension that uses Telemetry API. To learn how, visit the AWS Lambda Developer Guide. /lambda/faqs/;Is there a performance impact of using the Telemetry API?;You can only use the Telemetry API from within AWS Lambda Extensions. Extensions may impact the performance of your function because they share resources such as CPU, memory, and storage with the function. Memory usage increases linearly as the number of Telemetry API subscriptions increase because each subscription opens a new memory buffer to store the telemetry data. However, you can optimize memory usage by adjusting the buffering configuration in the Telemetry API subscription request. We recommend extension vendors to publish expected resource consumption to make it easier for function developers to choose a suitable extension. Please refer to your extension vendor’s documentation to understand the potential performance overhead of using their extension. /lambda/faqs/;How will I be charged for using the Telemetry API?;There is no additional charge for using the AWS Lambda Telemetry API. Extensions that use the Telemetry API share the same billing model as other extensions and Lambda functions. To learn more about Extensions pricing, please see Lambda pricing page. /lambda/faqs/;Does using the Telemetry API disable sending logs to Amazon CloudWatch Logs?;No. By default, the Lambda service sends all telemetry data to CloudWatch Logs, and using the Telemetry API does not disable egress to CloudWatch Logs. /lambda/faqs/;Do AWS Lambda functions support HTTP(S) endpoints?;Yes. Lambda functions can be configured with a function URL, a built-in HTTPS endpoint that can be invoked using the browser, curl, and any HTTP client. Function URLs are an easy way to get started building HTTPS accessible functions. /lambda/faqs/;How do I configure a Lambda function URL for my function?;You can configure a function URL for your function through the AWS Management Console, the AWS Lambda API, the AWS CLI, AWS CloudFormation, and the AWS Serverless Application Model. Function URLs can be enabled on the $LATEST unqualified version of your function, or on any function alias. To learn more about configuring a function URL, see the documentation. /lambda/faqs/;How do I secure my Lambda function URL?;Lambda function URLs are secured with IAM authorization by default. You can choose to disable IAM authorization to create a public endpoint or if you plan to implement custom authorization as part of the function’s business logic. /lambda/faqs/;How do I invoke my function with a Lambda function URL?;You can easily invoke your function from your web browser by navigating to the Lambda URL, from your client application’s code using an HTTP library, or from the command line using curl. /lambda/faqs/;Do Lambda function URLs work with function versions and aliases?;Yes. Lambda function URLs can be enabled on a function or function alias. 
If no alias is specified, the URL will point to $LATEST by default. Function URLs cannot target an individual function version. /lambda/faqs/;Can I enable custom domains for my Lambda function URL?;Custom domain names are not currently supported with function URLs. You can use a custom domain with your function URL by creating an Amazon CloudFront distribution and a CNAME to map your custom domain to your CloudFront distribution name. Then, map your CloudFront distribution domain name to be routed to your function URL as an origin. /lambda/faqs/;Can Lambda function URLs be used to invoke a function in a VPC?;Yes, function URLs can be used to invoke a Lambda function in a VPC. /lambda/faqs/;What is the pricing for using Lambda function URLs?;There is no additional charge for using function URLs. You pay the standard price for AWS Lambda. To learn more, please see AWS Lambda Pricing. /lambda/faqs/;What is Lambda@Edge?;Lambda@Edge allows you to run code across AWS locations globally without provisioning or managing servers, responding to end-users at the lowest network latency. You just upload your Node.js or Python code to AWS Lambda and configure your function to be triggered in response to Amazon CloudFront requests (i.e., when a viewer request lands, when a request is forwarded to or received back from the origin, and right before responding back to the end-user). The code is then ready to execute across AWS locations globally when a request for content is received, and scales with the volume of CloudFront requests globally. Learn more in our documentation. /lambda/faqs/;How do I use Lambda@Edge?;To use Lambda@Edge, you just upload your code to AWS Lambda and associate a function version to be triggered in response to Amazon CloudFront requests. Your code must satisfy the Lambda@Edge service limits. Lambda@Edge supports Node.js and Python for global invocation by CloudFront events at this time. Learn more in our documentation. /lambda/faqs/;When should I use Lambda@Edge?;Lambda@Edge is optimized for latency-sensitive use cases where your end viewers are distributed globally. All the information you need to make a decision should be available at the CloudFront edge, within the function and the request. This means that use cases where you are looking to make decisions on how to serve content based on user characteristics (e.g., location, client device, etc.) can now be executed and served close to your users without having to be routed back to a centralized server. /lambda/faqs/;Can I deploy my existing Lambda functions for global invocation?;You can associate existing Lambda functions with CloudFront events for global invocation if the function satisfies the Lambda@Edge service requirements and limits. Read more here on how to update your function properties. /lambda/faqs/;What Amazon CloudFront events can be used to trigger my functions?;Your functions will automatically trigger in response to the following Amazon CloudFront events: /lambda/faqs/;How is AWS Lambda@Edge different from using AWS Lambda behind Amazon API Gateway?;The difference is that API Gateway and Lambda are regional services. Using Lambda@Edge and Amazon CloudFront allows you to execute logic across multiple AWS locations based on where your end viewers are located. /lambda/faqs/;How available are AWS Lambda functions?;AWS Lambda is designed to use replication and redundancy to provide high availability for both the service itself and for the Lambda functions it operates. 
There are no maintenance windows or scheduled downtimes for either. /lambda/faqs/;Do my AWS Lambda functions remain available when I change my code or its configuration?;Yes. When you update a Lambda function, there will be a brief window of time, typically less than a minute, when requests could be served by either the old or the new version of your function. /lambda/faqs/;Is there a limit to the number of AWS Lambda functions I can execute at once?;"No. AWS Lambda is designed to run many instances of your functions in parallel. However, AWS Lambda has a default safety throttle for the number of concurrent executions per account per region (visit here for info on default safety throttle limits). You can also control the maximum concurrent executions for individual AWS Lambda functions, which you can use to reserve a subset of your account concurrency limit for critical functions, or cap traffic rates to downstream resources. If you wish to submit a request to increase the throttle limit, you can visit our Support Center, click ""Open a new case,"" and file a service limit increase request." /lambda/faqs/;What happens if my account exceeds the default throttle limit on concurrent executions?;On exceeding the throttle limit, AWS Lambda functions being invoked synchronously will return a throttling error (429 error code). Lambda functions being invoked asynchronously can absorb reasonable bursts of traffic for approximately 15-30 minutes, after which incoming events will be rejected as throttled. If the Lambda function is being invoked in response to Amazon S3 events, events rejected by AWS Lambda may be retained and retried by S3 for 24 hours. Events from Amazon Kinesis streams and Amazon DynamoDB streams are retried until the Lambda function succeeds or the data expires. Amazon Kinesis and Amazon DynamoDB Streams retain data for 24 hours. /lambda/faqs/;Is the default limit applied on a per-function level?;No, the default limit only applies at an account level. /lambda/faqs/;What happens if my Lambda function fails while processing an event?;On failure, Lambda functions being invoked synchronously will respond with an exception. Lambda functions being invoked asynchronously are retried at least 3 times. Events from Amazon Kinesis streams and Amazon DynamoDB streams are retried until the Lambda function succeeds or the data expires. Kinesis and DynamoDB Streams retain data for a minimum of 24 hours. /lambda/faqs/;What happens if my Lambda function invocations exhaust the available retry policy?;"On exceeding the retry policy for asynchronous invocations, you can configure a “dead letter queue” (DLQ) into which the event will be placed; in the absence of a configured DLQ the event may be rejected. On exceeding the retry policy for stream-based invocations, the data will have already expired and is therefore rejected." /lambda/faqs/;What resources can I configure as a dead letter queue for a Lambda function?;You can configure an Amazon SQS queue or an Amazon SNS topic as your dead letter queue. /lambda/faqs/;How do I allow my AWS Lambda function access to other AWS resources?;You grant permissions to your Lambda function to access other resources using an IAM role. AWS Lambda assumes the role while executing your Lambda function, so you always retain full, secure control of exactly which AWS resources it can use. Visit Setting up AWS Lambda to learn more about roles.
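The concurrency and dead letter queue answers above can be applied with two API calls; here is a minimal boto3 sketch, assuming a function and an SQS queue that already exist (the names and ARN are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "payments-worker"  # placeholder
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:payments-worker-dlq"  # placeholder

# Reserve part of the account concurrency limit for this function
# (this also caps how many copies of the function can run at once).
lambda_client.put_function_concurrency(
    FunctionName=FUNCTION_NAME,
    ReservedConcurrentExecutions=100,
)

# Send events that exhaust the asynchronous retry policy to an SQS dead letter
# queue; an SNS topic ARN can be used here instead. The function's execution
# role must allow sqs:SendMessage on the queue.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    DeadLetterConfig={"TargetArn": DLQ_ARN},
)
```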
/lambda/faqs/;How do I control which Amazon S3 buckets can call which AWS Lambda functions?;When you configure an Amazon S3 bucket to send messages to an AWS Lambda function, a resource policy rule will be created that grants access. Visit the Lambda Developer Guide to learn more about resource policies and access controls for Lambda functions. /lambda/faqs/;How do I control which Amazon DynamoDB table or Amazon Kinesis stream an AWS Lambda function can poll?;Access controls are managed through the Lambda function role. The role you assign to your Lambda function also determines which resource(s) AWS Lambda can poll on its behalf. Visit the Lambda Developer Guide to learn more. /lambda/faqs/;How do I control which Amazon SQS queue an AWS Lambda function can poll?;Access controls can be managed by the Lambda function role or a resource policy setting on the queue itself. If both policies are present, the more restrictive of the two permissions will be applied. /lambda/faqs/;Can I access resources behind Amazon VPC with my AWS Lambda function?;Yes. You can access resources behind Amazon VPC. /lambda/faqs/;How do I enable and disable the VPC support for my Lambda function?;To enable VPC support, you need to specify one or more subnets in a single VPC and a security group as part of your function configuration. To disable VPC support, you need to update the function configuration and specify an empty list for the subnet and security group. You can change these settings using the AWS APIs, CLI, or AWS Lambda Management Console. /lambda/faqs/;Can a single Lambda function have access to multiple VPCs?;No. Lambda functions provide access only to a single VPC. If multiple subnets are specified, they must all be in the same VPC. You can connect to other VPCs by peering your VPCs. /lambda/faqs/;Can Lambda functions in a VPC also access the internet and AWS service endpoints?;Lambda functions configured to access resources in a particular VPC will not have access to the internet by default. If you need access to external endpoints, you will need to create a NAT in your VPC to forward this traffic and configure your security group to allow this outbound traffic. /lambda/faqs/;What is Code Signing for AWS Lambda?;Code Signing for AWS Lambda offers trust and integrity controls that enable you to verify that only unaltered code from approved developers is deployed in your Lambda functions. You can use AWS Signer, a fully managed code signing service, to digitally sign code artifacts and configure your Lambda functions to verify the signatures at deployment. Code Signing for AWS Lambda is currently only available for functions packaged as ZIP archives. /lambda/faqs/;How do I create digitally signed code artifacts?;You can create digitally signed code artifacts using a Signing Profile through the AWS Signer console, the Signer API, SAM CLI or AWS CLI. To learn more, please see the documentation for AWS Signer. /lambda/faqs/;How do I configure my Lambda functions to enable code signing?;You can enable code signing by creating a Code Signing Configuration through the AWS Management Console, the Lambda API, the AWS CLI, AWS CloudFormation, and AWS SAM. A Code Signing Configuration helps you specify the approved signing profiles and configure whether to warn or reject deployments if signature checks fail. Code Signing Configurations can be attached to individual Lambda functions to enable the code signing feature. Such functions then verify signatures at deployment.
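Following the code signing answers above, here is a sketch of creating a Code Signing Configuration and attaching it to an existing function with boto3; the signing profile version ARN and function name are placeholders you would replace with resources created in AWS Signer and Lambda.

```python
import boto3

lambda_client = boto3.client("lambda")

SIGNING_PROFILE_VERSION_ARN = (
    # Placeholder ARN of a signing profile version created in AWS Signer.
    "arn:aws:signer:us-east-1:123456789012:/signing-profiles/ReleaseProfile/abcd1234"
)
FUNCTION_NAME = "billing-handler"  # placeholder

# Create a code signing configuration that rejects unsigned or invalid deployments.
csc = lambda_client.create_code_signing_config(
    Description="Only allow artifacts signed by the release profile",
    AllowedPublishers={"SigningProfileVersionArns": [SIGNING_PROFILE_VERSION_ARN]},
    CodeSigningPolicies={"UntrustedArtifactOnDeployment": "Enforce"},  # or "Warn"
)

# Attach the configuration to the function; subsequent ZIP deployments are verified.
lambda_client.put_function_code_signing_config(
    CodeSigningConfigArn=csc["CodeSigningConfig"]["CodeSigningConfigArn"],
    FunctionName=FUNCTION_NAME,
)
```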
/lambda/faqs/;What signature checks does AWS Lambda perform on deployment?;AWS Lambda can perform the following signature checks at deployment: /lambda/faqs/;Can I enable code signing for existing functions?;Yes, you can enable code signing for existing functions by attaching a code signing configuration to the function. You can do this using the AWS Lambda console, the Lambda API, the AWS CLI, AWS CloudFormation, and AWS SAM. /lambda/faqs/;Is there any additional cost for using Code Signing for AWS Lambda?;There is no additional cost when using Code Signing for AWS Lambda. You pay the standard price for AWS Lambda. To learn more, please see Pricing. /lambda/faqs/;How do I compile my AWS Lambda function Java code?;You can use standard tools like Maven or Gradle to compile your Lambda function. Your build process should mimic the same build process you would use to compile any Java code that depends on the AWS SDK. Run your Java compiler tool on your source files and include the AWS SDK 1.9 or later with transitive dependencies on your classpath. For more details, see our documentation. /lambda/faqs/;What is the JVM environment that Lambda uses for executing my function?;Lambda provides the Amazon Linux build of openjdk 1.8. /lambda/faqs/;Can I use packages with AWS Lambda?;Yes. You can use NPM packages as well as custom packages. Learn more here. /lambda/faqs/;Can I execute other programs from within my AWS Lambda function written in Node.js?;Yes. Lambda’s built-in sandbox lets you run batch (“shell”) scripts, other language runtimes, utility routines, and executables. Learn more here. /lambda/faqs/;Is it possible to use native modules with AWS Lambda functions written in Node.js?;Yes. Any statically linked native module can be included in the ZIP file you upload, as well as dynamically linked modules compiled with an rpath pointing to your Lambda function root directory. Learn more here. /lambda/faqs/;Can I execute binaries with AWS Lambda written in Node.js?;Yes. You can use Node.js' child_process command to execute a binary that you included in your function or any executable from Amazon Linux that is visible to your function. Alternatively several NPM packages exist that wrap command line binaries such as node-ffmpeg. Learn more here. /lambda/faqs/;How do I deploy AWS Lambda function code written in Node.js?;To deploy a Lambda function written in Node.js, simply package your Javascript code and dependent libraries as a ZIP. You can upload the ZIP from your local environment, or specify an Amazon S3 location where the ZIP file is located. For more details, see our documentation. /lambda/faqs/;Can I use Python packages with AWS Lambda?;Yes. You can use pip to install any Python packages needed. /lambda/faqs/;How do I package and deploy an AWS Lambda function in C#?;"You can create a C# Lambda function using the Visual Studio IDE by selecting ""Publish to AWS Lambda"" in the Solution Explorer. 
Alternatively, you can directly run the ""dotnet lambda publish"" command from the dotnet CLI (with the Lambda CLI tools installed), which creates a ZIP of your C# source code along with all NuGet dependencies as well as your own published DLL assemblies, and automatically uploads it to AWS Lambda using the runtime parameter “dotnetcore1.0”." /lambda/faqs/;How do I deploy AWS Lambda function code written in PowerShell?;A PowerShell Lambda deployment package is a ZIP file that contains your PowerShell script, PowerShell modules that are required for your PowerShell script, and the assemblies needed to host PowerShell Core. You then use the AWSLambdaPSCore PowerShell module that you can install from the PowerShell Gallery to create your PowerShell Lambda deployment package. /lambda/faqs/;How do I coordinate calls between multiple Lambda functions?;You can use AWS Step Functions to coordinate multiple Lambda functions. You can invoke multiple Lambda functions serially, passing the output of one to the other, or in parallel. See our documentation for more details, and the sketch below for a minimal example. /lambda/faqs/;Does AWS Lambda support Advanced Vector Extensions 2 (AVX2)?;Yes, AWS Lambda supports the Advanced Vector Extensions 2 (AVX2) instruction set. To learn more about how to compile your application code to target this instruction set for improved performance, visit the AWS Lambda developer documentation. /outposts/rack/faqs/;Why would I use AWS Outposts rack instead of operating in an AWS Region?;You can use Outposts rack to support your applications that have low latency or local data processing requirements. These applications may need to generate near real-time responses to end user applications or need to communicate with other on-premises systems or control on-site equipment. These can include workloads running on factory floors for automated operations in manufacturing, real-time patient diagnosis or medical imaging, and content and media streaming. You can use Outposts rack to securely store and process customer data that needs to remain on premises or in countries where there is no AWS Region. You can run data-intensive workloads on Outposts rack and process data locally when transmitting data to AWS Regions is expensive and wasteful, and when you want better control over data analysis, backup, and restore. /outposts/rack/faqs/;In which AWS Region is Outposts rack available?;Outposts rack is supported in the following AWS Regions and customers can connect their Outposts to the following AWS Regions: /outposts/rack/faqs/;In which countries and territories is Outposts rack available?;Outposts rack can be shipped to and installed in the following countries and territories. /outposts/rack/faqs/;Can I order an Outpost to a country or territory where Outposts rack has not launched and link it back to a supported Region?;No, we can deliver and install Outposts rack only in countries and territories where Outposts rack can be delivered and supported.
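As an illustration of the Step Functions answer above ("How do I coordinate calls between multiple Lambda functions?"), here is a sketch that defines a two-step state machine in which the output of the first function is passed as input to the second. All ARNs and names are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARNs.
VALIDATE_FN_ARN = "arn:aws:lambda:us-east-1:123456789012:function:validate-order"
CHARGE_FN_ARN = "arn:aws:lambda:us-east-1:123456789012:function:charge-order"
EXECUTION_ROLE_ARN = "arn:aws:iam::123456789012:role/StepFunctionsLambdaRole"

# Two Task states run in sequence; the output of ValidateOrder becomes the
# input of ChargeOrder.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {"Type": "Task", "Resource": VALIDATE_FN_ARN, "Next": "ChargeOrder"},
        "ChargeOrder": {"Type": "Task", "Resource": CHARGE_FN_ARN, "End": True},
    },
}

sfn.create_state_machine(
    name="order-pipeline",
    definition=json.dumps(definition),
    roleArn=EXECUTION_ROLE_ARN,
)
```

Parallel fan-out works the same way with a Parallel state instead of chained Task states.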
/outposts/rack/faqs/;Can I use Outposts rack when it is not connected to the AWS Region or in a disconnected environment?;An Outpost relies on connectivity to the parent AWS Region. Outposts rack is not designed for disconnected operations or environments with limited to no connectivity. We recommend that customers have highly available networking connections back to their AWS Region. If interested in leveraging AWS services in disconnected environments such as cruise ships or remote mining locations, learn more about AWS services such as Snowball Edge, which is optimized to operate in environments with limited to no connectivity. /outposts/rack/faqs/;Can I reuse my existing servers in an Outpost?;No, AWS Outposts rack leverages AWS designed infrastructure, and is only supported on AWS-designed hardware that is optimized for secure, high-performance, and reliable operations. /outposts/rack/faqs/;Is there a software-only version of AWS Outposts rack?;No, AWS Outposts rack is a fully managed service that provides you with native access to AWS services. /outposts/rack/faqs/;Can I order my own hardware that can be installed as part of my Outposts rack?;No, AWS Outposts rack provides fully integrated AWS designed configurations with built in top-of-rack switches and redundant power supply to ensure an ideal AWS experience. You can order as much compute and storage infrastructure as you need by selecting from the range of available Outposts rack options, or work with us to create a custom combination with your desired Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), and Amazon Simple Storage Service (S3) capacity. These are pre-validated and tested to ensure that you can get started quickly with no additional effort or configuration required on-site. /outposts/rack/faqs/;Can I create EC2 instances using an EBS backed AMI on my Outposts?;Yes, you can launch EC2 instances using the AMIs backed with EBS gp2 volume types. /outposts/rack/faqs/;Where are EBS snapshots stored?;EBS snapshots of EBS Volumes on Outposts rack are stored by default on Amazon S3 in the Region. If the Outpost is provisioned with Amazon S3 on Outposts you have the option to store your snapshots locally on your Outpost. EBS snapshots are incremental, which means that only the blocks on your Outpost that have changed after your most recent snapshot are saved. You can at any time restore (hydrate) EBS Volume on Outposts from the stored snapshots. To learn more, visit the EBS Snapshots documentation. /outposts/rack/faqs/;What use cases are best suited to run on S3 on Outposts?;S3 on Outposts is ideal for customers with data residency requirements or those in regulated industries that need to securely store and process customer data that needs to remain on premises or in locations where there is no AWS region. Additionally, customers can use S3 on Outposts to run data intensive workloads to process data locally and store on-premises. S3 on Outposts will also help if you have applications that need to communicate with other on-premises systems or control on-site equipment, such as within a factory, hospital, or research facility. /outposts/rack/faqs/;How can I establish network connectivity between my Outpost and the AWS Region?;You can choose to establish Outposts rack service link VPN connection to the parent AWS Region via an AWS Direct Connect private connection, a public virtual interface, or the public Internet. 
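To tie together the Outposts answers above about EBS volumes and where snapshots are stored, here is a boto3 sketch that creates a gp2 volume on an Outpost and then snapshots it either to the Region (the default) or locally to S3 on Outposts; the Outpost ARN and Availability Zone are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

OUTPOST_ARN = "arn:aws:outposts:us-east-1:123456789012:outpost/op-0123456789abcdef0"  # placeholder
OUTPOST_AZ = "us-east-1a"  # AZ the Outpost is anchored to (placeholder)

# Create a gp2 volume that lives on the Outpost.
volume = ec2.create_volume(
    AvailabilityZone=OUTPOST_AZ,
    Size=100,              # GiB
    VolumeType="gp2",
    OutpostArn=OUTPOST_ARN,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Snapshot to Amazon S3 in the Region (the default behaviour)...
ec2.create_snapshot(VolumeId=volume["VolumeId"], Description="regional snapshot")

# ...or keep the snapshot on the Outpost if it is provisioned with S3 on Outposts.
ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="local snapshot",
    OutpostArn=OUTPOST_ARN,
)
```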
/outposts/rack/faqs/;Is Application Load Balancer available on Outposts rack?;Yes, Application Load Balancer is available on Outposts rack in all AWS Regions where Outposts rack is available, except the AWS GovCloud (US) Regions. /outposts/rack/faqs/;Can Outposts rack support real-time applications with low-latency requirements?;Yes, with Amazon RDS on Outposts you can run managed Microsoft SQL Server, MySQL, and PostgreSQL databases on premises for low latency workloads that need to be run in close proximity to on premises data and applications. You can manage RDS databases both in the cloud and on premises using the same AWS Management Console, APIs, and CLI. For ultra low-latency applications, ElastiCache on Outposts enables sub-millisecond responses for real-time applications, including workloads running on factory floors for automated operations in manufacturing, real-time patient diagnosis, and media streaming. /outposts/rack/faqs/;Can Outposts rack be used to meet data residency requirements?;Yes. Customer data can be configured to remain on Outposts rack using Amazon Elastic Block Store (EBS) and Amazon Simple Storage Service (S3) on Outposts, in the customer’s on-premises location or specified co-location facility. Well-architected applications using Outposts rack and AWS services and tools address the data residency requirements we most commonly hear from our customers. AWS Identity and Access Management (IAM) lets you control access to AWS resources. You can use IAM and granular data control rules to specify which types of data must remain on Outposts rack and cannot be replicated to the AWS Region. S3 on Outposts stores data on your Outpost by default, and you may choose to replicate some or all of your data to AWS Regions based on your specific residency requirements. ElastiCache on Outposts allows you to securely process customer data locally on the Outposts rack. Some limited meta-data (e.g. instance IDs, monitoring metrics, metering records, tags, bucket names, etc.) will flow back to the AWS Region. To ensure your unique data residency requirements are met, we recommend you confirm and work closely with your compliance and security teams. /outposts/rack/faqs/;Is Resource Sharing available on AWS Outposts rack?;Yes. AWS Resource Access Manager (RAM) is a service that enables you to share your AWS resources with any AWS account or within your AWS Organization. RAM support lets you, the Outpost owner, create and manage Outpost resources — EC2 instances, EBS volumes, subnets, and local gateways (LGWs) centrally — and share the resources across multiple AWS accounts within the same AWS organization. This allows others to configure VPCs, launch and run instances, and create EBS volumes on the shared Outpost. /outposts/rack/faqs/;Which EC2 instances are available on Outposts rack?;EC2 instances built on the AWS Nitro System, for general purpose, compute optimized, memory optimized, storage optimized, and GPU optimized with Intel Xeon Scalable processors are supported on AWS Outposts rack, and Graviton processors based EC2 instances are coming soon. /outposts/rack/faqs/;Are there any prerequisites for deploying Outposts 42U racks at my location?;Your site must support the basic power, networking and space requirements to host an Outpost. An Outposts rack needs 5-15 kVA, can support 1/10/40/100 Gbps uplinks, and space for a 42U rack (80” X 24” X 48” dimensions). 
As Outposts rack requires reliable network connectivity to the AWS Region, you should plan for a public internet connection. Customers must have Enterprise Support or Enterprise On-Ramp Support, which provides 24x7 remote support within 15 minutes or within 30 minutes depending on the Support plan selected. /outposts/rack/faqs/;Do the same compliance certifications for AWS Services today apply for services on Outposts rack?;AWS Outposts rack itself is HIPAA eligible, PCI, SOC, ISMAP, IRAP and FINMA compliant, ISO, CSA STAR, and HITRUST certified, and we expect to add more compliance certifications in coming months. You can see the latest certification status for AWS Services on Outposts rack on our Services in Scope page. AWS Services on Outposts rack like RDS or ElastiCache for Redis that have achieved certifications like PCI are also considered certified on Outposts rack. As AWS Outposts rack runs at the customer’s data center, under the AWS Shared Responsibility model customers own the responsibility for physical security and access controls around the Outpost for compliance certification. /outposts/rack/faqs/;Is AWS Outposts rack GxP Compatible?;Yes, AWS Outposts rack is GxP compatible. AWS Outposts rack extends AWS services to AWS-managed infrastructure that is physically located at a customer site. Outposts rack capacity can be accessed locally over a local gateway that is mapped to the customer’s local network, in addition to having a connection path back to the AWS Region. You can learn more about using the AWS Cloud, including Outposts rack, for GxP systems here. GxP-regulated life sciences organizations using AWS services are responsible for designing and verifying their GxP compliance. /outposts/rack/faqs/;Who is responsible for the physical security of the Outposts rack at my datacenter?;AWS provides services that allow data to be encrypted at rest and in transit and other granular security controls and auditing mechanisms. In addition, customer data is wrapped to a physical Nitro Security Key. Destroying the device is equivalent to destroying the data. As part of the shared responsibility model, customers are responsible for attesting to physical security and access controls around the Outpost, as well as environmental requirements for facility, networking, and power as published here. Prior to returning the Outposts rack hardware, the Nitro Security Key will be removed to ensure customer content is crypto-shredded. /outposts/rack/faqs/;How does AWS maintain AWS Outposts rack infrastructure?;When your Outpost is installed and is visible in the AWS Management Console, AWS will monitor it as part of the public Region and will automatically execute software upgrades and patches. /outposts/rack/faqs/;What happens when my facility's network connection goes down?;EC2 instances and EBS volumes on the Outpost will continue to operate normally and can be accessed locally via the local gateway. Similarly, AWS service resources such as ECS worker nodes continue to run locally. However, API availability will be degraded; for instance, run/start/stop/terminate APIs may not work. Instance metrics and logs will continue to be cached locally for a few hours, and will be pushed to the AWS Region when connectivity returns. Disconnection beyond a few hours, however, may result in loss of metrics and logs. At this time, DNS queries on the Outpost to the Route 53 Resolver (aka AmazonProvidedDNS) also rely on the network link to the AWS Region, so default DNS resolution will stop working.
If you expect to lose network connectivity, we strongly recommend regularly testing your workload to ensure it behaves properly when the Outpost is disconnected. For S3 on Outposts, if the network connection to your Outpost is lost, you will not be able to access your objects. Requests to store and retrieve objects are authenticated using the regional AWS Identity and Access Management (IAM) service, and if the Outpost has no connectivity to the home AWS Region, you are not able to access your data. Your data remains safely stored on your Outpost during periods of disconnect, and once connectivity is restored, authentication and requests can resume. /outposts/rack/faqs/;What type of control plane information flows back to the parent AWS Region?;As an example, information about instance health, instance activity (launched, stopped), and the underlying hypervisor system may be sent back to the parent AWS Region. This information enables AWS to provide alerting on instance health and capacity, and apply patches and updates to the Outpost. Your team does not need to implement its own tooling to manage these elements, or to actively push security updates and patches for your Outpost. For S3 on Outposts, certain data management and telemetry data, such as bucket names and metrics, may be stored in the AWS Region for reporting and management. When disconnected, this information cannot be sent back to the parent Region. /outposts/rack/faqs/;How does AWS support adding capacity to existing Outposts?;There are two mechanisms to increase the compute and storage capacity of your AWS Outposts rack. First, you can increase capacity by adding additional Outposts racks from the Outposts rack catalog. Second, if your existing Outposts racks have available power and positions within the rack, you can increase from a “small” to a “medium” or “large” configuration, or from a “medium” to a “large” configuration. You will be able to add compute and storage capacity a maximum of twice within a rack that supports 10–15 kVA power consumption. Note: The 1U and 2U Outposts servers cannot be installed in the 42U Outposts form factor. /serverless/serverlessrepo/faqs/;How do I package a nested app that I used from the Serverless Application Repository?;Nested applications from the Serverless Application Repository are already packaged and ready for you to use. You can use the existing SAM CLI sam package command to ensure that nested applications are still available to you before you deploy the application in your account. /vmware/faqs/;What is VMware Cloud on AWS?;VMware Cloud on AWS is the AWS-preferred service for all vSphere-based workloads. VMware Cloud on AWS brings VMware’s enterprise-class SDDC software to the AWS Cloud with optimized access to native AWS services. Powered by VMware Cloud Foundation, VMware Cloud on AWS integrates VMware's compute, storage, and network virtualization products (VMware vSphere, VMware vSAN, and VMware NSX) along with VMware vCenter Server management, optimized to run on dedicated, elastic, bare-metal AWS infrastructure. /vmware/faqs/;Why should I use VMware Cloud on AWS?;AWS is VMware's preferred public cloud partner for all vSphere-based workloads. VMware Cloud on AWS provides you with consistent and interoperable infrastructure and services between VMware-based datacenters and the AWS cloud, which minimizes the complexity and associated risks of managing diverse environments.
VMware Cloud on AWS offers native access to AWS services and innovation that extends the value of enterprise applications over their lifecycle. /vmware/faqs/;Where is VMware Cloud on AWS available today?;The service is available in 23 regions: AWS US East (N. Virginia), AWS US East (Ohio), AWS US West (N. California), AWS US West (Oregon), AWS Canada (Central), AWS Europe (Frankfurt), AWS Europe (Ireland), AWS Europe (London), AWS Europe (Paris), AWS Asia Pacific (Singapore), AWS Asia Pacific (Sydney), AWS Asia Pacific (Tokyo), AWS Asia Pacific (Mumbai), AWS South America (Sao Paulo), AWS Asia Pacific (Seoul), AWS Europe (Stockholm), AWS Europe (Milan), AWS Asia Pacific (Osaka), AWS GovCloud (US West), AWS GovCloud (US East), AWS Asia Pacific (Hong Kong), AWS Africa (Cape Town), and AWS Middle East (Bahrain). /vmware/faqs/;Can workloads running in a VMware Cloud on AWS instance integrate with AWS services?;Yes. The VMware Cloud on AWS SDDC is directly connected to the customer’s VPC using an Elastic Network Interface (ENI) and therefore has access to AWS services. Virtual machine workloads can access public API endpoints for AWS services such as AWS Lambda, Amazon Simple Queue Service (SQS), Amazon S3 and Elastic Load Balancing, as well as private resources in the customer's Amazon VPC such as Amazon EC2, and data and analytics services such as Amazon RDS, Amazon DynamoDB, Amazon Kinesis and Amazon Redshift. Customers can also now use Amazon Elastic File System (EFS), a fully managed file service that scales file-based storage automatically to petabyte scale with high availability and durability across multiple Availability Zones (AZs), and the newest generation of VPC Endpoints designed to access AWS services while keeping all the traffic within the AWS network. /vmware/faqs/;How do I get started with VMware Cloud on AWS?;With a new purchase agreement in place, customers can now buy VMware Cloud on AWS directly through AWS and AWS Partner Network (APN) Partners in the AWS Solution Provider Program. This allows customers the flexibility to purchase VMware Cloud on AWS either through AWS or VMware, or the AWS Solution Provider or VMware Partner Network Solution Provider of their choice. Through our partnership, customers can use additional services like Amazon RDS for VMware. /vmware/faqs/;Can I use my existing Windows Server licenses in VMware Cloud on AWS?;Yes. Please consult your Microsoft Product Terms for more details and any restrictions. /vmware/faqs/;What is single host SDDC starter configuration?;Single host SDDC starter configuration is a time-bound offering for customers to kickstart their VMware Cloud on AWS on-demand hybrid experience at a low, predictable price. Service life for the single host SDDC is limited to 30-day intervals only. This new consumption option is designed for customers who want to prove the value of VMware Cloud on AWS in their environment before scaling to 3+ host configurations for production environments. /vmware/faqs/;What compliance certifications has VMware Cloud on AWS achieved?;VMware Cloud on AWS has been independently verified to comply with many leading compliance programs, including but not limited to ISO 27001, ISO 27017, ISO 27018, SOC 2, HIPAA, PCI-DSS, OSPAR, IRAP. Check VMware Cloud Trust Center for more information. /vmware/faqs/;How is VMware Cloud on AWS deployed?;VMware Cloud on AWS infrastructure runs on dedicated, single-tenant hosts provided by AWS in a single account.
Each host is equivalent to an Amazon EC2 I3.metal instance (2 sockets with 18 cores per socket, 512 GiB RAM, and 15.2 TB raw SSD storage). Each host is capable of running many VMware Virtual Machines (tens to hundreds depending on their compute, memory and storage requirements). Clusters can range from a minimum of 2 hosts up to a maximum of 16 hosts per cluster. A single VMware vCenter Server is deployed per SDDC environment. /vmware/faqs/;What version of VMware vSphere do I need in my on-premises environment?;With vSphere 6.0 or later running in your on-premises environment, you can move workloads to and from VMware Cloud on AWS by performing a cold migration of VMs. No conversion or modification is necessary. In order to take advantage of “Hybrid Linked Mode” for single pane of glass management between your on-premises environment and VMware Cloud on AWS, you must have VMware vSphere 6.5 or later. /vmware/faqs/;How do I manage resources on VMware Cloud on AWS?;You can use the same management tools you use today. A vCenter Server instance is deployed as part of every VMware Cloud on AWS SDDC. You may connect to this vCenter Server instance to manage your VMware Cloud on AWS clusters. A VMware Cloud Web Console is provided which allows for common tasks such as adding or removing hosts, configuring firewalls, and changing other basic networking settings. It is important to note that tools that require plug-ins or extensive vSphere permissions may not function properly in VMware Cloud on AWS. VMware Cloud on AWS uses a least privilege security model in which you (and therefore your tools) do not have full administrative access. /vmware/faqs/;Can I manage both my existing data center VMware vSphere VMs and my VMware Cloud on AWS instances in a single view?;You will need vSphere version 6.5 and vCenter Server 6.5 or later running in your data center to use vCenter Hybrid Linked Mode for single pane of glass management of resources on-premises and in the cloud. If you do not have VMware vSphere 6.5 or later running in your on-premises environment, you will need to run multiple vCenter instances to manage your environment: one vCenter instance on-premises and one vCenter instance in VMware Cloud on AWS. /vmware/faqs/;Can I migrate existing vSphere VMs to my VMware Cloud on AWS deployment?;Yes. There are multiple ways to migrate existing vSphere VMs to VMware Cloud on AWS. You can perform a live migration of vSphere VMs via vMotion or by leveraging VMware Hybrid Cloud Extension (HCX). /vmware/faqs/;What does VMware mean when it says AWS is its preferred partner?;The relationship we have with AWS is a mutual and strategic partnership that runs both ways. AWS is VMware’s preferred public cloud partner for all VMware vSphere-based workloads. Conversely, VMware Cloud on AWS is the preferred public cloud service recommended by AWS for all VMware vSphere-based workloads. For over 4 years, VMware and AWS have been jointly engineering, selling, operating and supporting hybrid cloud solutions. This preferred status is a confirmation of the maturity of the partnership, and customer traction of the VMware Cloud on AWS service. /vmware/faqs/;What is the distinction in what AWS offers as a preferred partner for VMware with the VMware Cloud on AWS service?;There are two clear areas of distinction in the AWS relationship. The first is that VMware Cloud on AWS is the only public cloud service delivered, operated and supported by VMware.
Additionally, as strategic and preferred partners, there is a deeper level of engineering and joint go to market investment that we have with AWS. /vmware/faqs/;How can I get more information?;Please contact us. /ebs/faqs/;Are Amazon EBS volume and snapshot ID lengths changing in 2018?;Yes, please visit the EC2 FAQs page for more details. /ebs/faqs/;What happens to my data when an Amazon EC2 instance terminates?;"Unlike the data stored on a local instance store (which persists only as long as that instance is alive), data stored on an Amazon EBS volume can persist independently of the life of the instance. Therefore, we recommend that you use the local instance store only for temporary data. For data requiring a higher level of durability, we recommend using Amazon EBS volumes or backing up the data to Amazon S3. If you are using an Amazon EBS volume as a root partition, set the Delete on termination flag to ""No"" if you want your Amazon EBS volume to persist outside the life of the instance." /ebs/faqs/;What kind of performance can I expect from Amazon EBS volumes?;Amazon EBS provides seven volume types: Provisioned IOPS SSD (io2 Block Express, io2, and io1), General Purpose SSD (gp3 and gp2), Throughput Optimized HDD (st1) and Cold HDD (sc1). These volume types differ in performance characteristics and price, allowing you to tailor your storage performance and cost to the needs of your applications. The average latency between EC2 instances and EBS is single digit milliseconds. For more performance information see the EBS product details page. For more information about Amazon EBS performance guidelines, see Increasing EBS Performance. /ebs/faqs/;Which volume should I choose?;Amazon EBS includes two major categories of storage: SSD-backed storage for transactional workloads (performance depends primarily on IOPS, latency, and durability) and HDD-backed storage for throughput workloads (performance depends primarily on throughput, measured in MB/s). SSD-backed volumes are designed for transactional, IOPS-intensive database workloads, boot volumes, and workloads that require high IOPS. SSD-backed volumes include Provisioned IOPS SSD (io1 and io2) and General Purpose SSD (gp3 and gp2). Both io2 and io2 Block Express of the Provisioned IOPS SSD volumes are designed to provide 100X durability of 99.999% making it ideal for business-critical applications that need higher uptime. gp3 is the latest generation of General Purpose SSD volumes that provides the right balance of price and performance for most applications that don’t require the highest IOPS performance or 99.999% durability. HDD-backed volumes are designed for throughput-intensive and big-data workloads, large I/O sizes, and sequential I/O patterns. HDD-backed volumes include Throughput Optimized HDD (st1) and Cold HDD (sc1). /ebs/faqs/;Since io2 provides higher volume durability, should I still take snapshots and plan to replicate io2 volumes across Availability Zones (AZs) for high durability?;High volume durability, snapshots, and replicating volumes across AZs protect against different types of failures, and customers can choose to use one, two, or all of these approaches based on their data durability requirements. Higher volume durability reduces the probability of losing the primary copy of your data. Snapshots protect against the unlikely event of a volume failure. Replicating volumes across AZs protects against an AZ level failure and also provides faster recovery in case of failure. 
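As a small illustration of the "Which volume should I choose?" guidance above, here is a boto3 sketch that creates a General Purpose SSD (gp3) volume at its included 3,000 IOPS / 125 MB/s baseline and a Provisioned IOPS SSD (io2) volume; the sizes, IOPS values, and Availability Zone are placeholder choices.

```python
import boto3

ec2 = boto3.client("ec2")
AZ = "us-east-1a"  # placeholder

# General Purpose SSD (gp3): 3,000 IOPS and 125 MB/s are included at no extra cost.
gp3 = ec2.create_volume(
    AvailabilityZone=AZ,
    Size=200,           # GiB
    VolumeType="gp3",
    Iops=3000,
    Throughput=125,     # MB/s
)

# Provisioned IOPS SSD (io2) for an IOPS-intensive, business-critical database.
io2 = ec2.create_volume(
    AvailabilityZone=AZ,
    Size=500,           # GiB
    VolumeType="io2",
    Iops=16000,
)

print(gp3["VolumeId"], io2["VolumeId"])
```

If requirements change later, the Elastic Volumes feature described below lets you modify the size, type, or provisioned performance of an existing volume without detaching it.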
/ebs/faqs/;What are best practices for high availability on Amazon EBS?;Amazon EBS volumes are designed to be highly available, reliable, and durable. At no additional charge to you, Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. Depending on the degree of high availability (HA) that your application requires, we recommend these guidelines to achieve a robust degree of high availability: 1) Design the system to have no single point of failure. For more details, see High Availability and Scaling on AWS. 2) Use automated monitoring, failure detection, and failover mechanisms. See Monitoring the Status of your EBS volumes and Monitoring EBS Volumes using CloudWatch for more details on monitoring your EBS Volume’s performance. 3) Prepare operating procedures for manual mechanisms to respond to, mitigate, and recover from any failures. This includes detaching unavailable volumes and attaching a backup recovery volume in cases of failure. For more details, see the documentation on Replacing an EBS volume. /ebs/faqs/;How do I modify the capacity, performance, or type of an existing EBS volume?;Changing a volume configuration is easy. The Elastic Volumes feature allows you to increase capacity, tune performance, or change your volume type with a single CLI call, API call or a few console clicks. For more information about Elastic Volumes, see the Elastic Volumes documentation. /ebs/faqs/;Are EBS Standard Volumes still available?;EBS Standard Volumes have been renamed to EBS Magnetic volumes. Any existing volumes will not have been changed as a result of this and there are no functional differences in the EBS Magnetic offering compared to EBS Standard. The name of this offering was changed to avoid confusion with our General Purpose SSD (gp2) volume type which is our recommended default volume type. /ebs/faqs/;Are Provisioned IOPS SSD (io2 Block Express, io2, and io1) volumes available for all Amazon EC2 instance types?;Provisioned IOPS SSD io2 volumes are available on all EC2 Instances Types, with the exception of the EC2 instances which support io2 Block Express. io2 Block Express volumes are currently available on these Amazon EC2 instances. Use EBS optimized EC2 instances to deliver consistent and predictable IOPS on io2 and io1 volumes. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 62.5 MB/s and 7,500 MB/s depending on the instance type used. To achieve the limit of 64,000 IOPS and 1,000 MB/s throughput, the volume must be attached to a Nitro System-based EC2 instance. /ebs/faqs/;What is the difference between io2 and io2 Block Express?;io2 volumes offer high performance block storage for all EC2 instances. For applications that require even higher performance, you can attach io2 volumes to these Amazon EC2 instances which run on Block Express and provide 4x higher performance than io2. This will enable you to achieve up to 64 TiB capacity, 256,000 IOPS and 4,000 MB/s of throughput from a single io2 volume along with sub-millisecond average IO latency. /ebs/faqs/;What level of performance consistency can I expect to see from my Provisioned IOPS SSD (io2 and io1) volumes?;When attached to EBS-optimized instances, Provisioned IOPS SSD (io2 and io1) volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time in a given year. 
Your exact performance depends on your application’s I/O requirements. /ebs/faqs/;What level of performance latency can I expect to see from my Provisioned IOPS SSD (io2 and io1) volumes?;When attached to EBS-optimized instances, Provisioned IOPS (io1 and io2) volumes can achieve single digit millisecond latencies and io2 Block Express volumes can achieve sub-millisecond latency. Your exact performance depends on your application’s I/O requirements. /ebs/faqs/;Does the I/O size of my application reads and writes affect the rate of IOPS I get from my Provisioned IOPS SSD (io2 and io1) volumes?;Yes, it does. When you provision IOPS for io2 or io1 volumes, the IOPS rate you get depends on the I/O size of your application reads and writes. Provisioned IOPS volumes have a base I/O size of 16KB. So, if you have provisioned a volume with 40,000 IOPS for an I/O size of 16KB, it will achieve up to 40,000 IOPS at that size. If the I/O size is increased to 32 KB, then you will achieve up to 20,000 IOPS, and so on. For more details, please visit technical documentation on Provisioned IOPS volumes. You can use Amazon CloudWatch to monitor your throughput and I/O sizes. /ebs/faqs/;What factors can affect the performance consistency I see with Provisioned IOPS SSD (io2 and io1) volumes?;Provisioned IOPS SSD (io2 and io1) volumes attached to EBS-optimized instances are designed to offer consistent performance, delivering within 10% of the provisioned IOPS performance 99.9% of the time over a given year. For maximum performance consistency with new volumes created from a snapshot, we recommend enabling Fast Snapshot Restore (FSR) on your snapshots. EBS volumes restored from FSR-enabled snapshots instantly receive their full performance. /ebs/faqs/;What level of performance consistency can I expect to see from my HDD-backed volumes?;When attached to EBS-optimized instances, Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes are designed to deliver within 10% of the expected throughput performance 99% of the time in a given year. Your exact performance depends on your application’s I/O requirements and the performance of your EC2 instance. /ebs/faqs/;Does the I/O size of my application reads and writes affect the rate of throughput I get from my HDD-backed volumes?;Yes. The throughput rate you get depends on the I/O size of your application reads and writes. HDD-backed volumes process reads and writes in I/O sizes of 1MB. Sequential I/Os are merged and processed as 1 MB units while each non-sequential I/O is processed as 1MB even if the actual I/O size is smaller. Thus, while a transactional workload with small, random IOs, such as a database, won't perform well on HDD-backed volumes, sequential I/Os and large I/O sizes will achieve the advertised performance of st1 and sc1 for a longer period of time. /ebs/faqs/;What factors can affect the performance consistency of my HDD-backed volumes?;Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes attached to EBS-optimized instances are designed to offer consistent performance, delivering within 10% of the expected throughput performance 99% of the time in a given year. There are several factors that could affect the level of consistency you see. For example, the relative balance between random and sequential I/O operations on the volume can impact your performance. Too many random small I/O operations will quickly deplete your I/O credits and lower your performance down to the baseline rate. 
Your throughput rate may also be lower depending on the instance selected. Although st1 can drive throughput up to 500 MB/s, performance will be limited by the separate instance-level limit for EBS traffic. Another factor is taking a snapshot which will decrease expected write performance down to the baseline rate, until the snapshot completes. This is specific to st1 and sc1. /ebs/faqs/;Can I stripe multiple volumes together to get better performance?;Yes. You can stripe multiple volumes together to achieve up to 260,000 IOPS or 60,000 Mbps (or 7500 MB/s) when attached to larger EC2 instances. However, performance for st1 and sc1 scales linearly with volume size so there may not be as much of a benefit to stripe these volumes together. /ebs/faqs/;How does Amazon EBS handle issues like storage contention?;EBS is a multi-tenant block storage service. We employ rate limiting as a mechanism to avoid resource contention. This starts with having defined performance criteria for the volumes – our volume types (gp2, PIOPS, st1, and sc1) all have defined performance characteristics in terms of IOPS and throughput. The next step is defining performance at the instance level. Each EBS Optimized instance has defined performance (both throughput and IOPS) for the set of EBS volumes attached to the instance. A customer can, therefore, size instances and volumes to get the desired level of performance. In addition, customers can use our reported metrics to observe instance level and volume level performance. They can set alarms to determine if what they are seeing does not match the expected performance – the metrics can also help determine if customers are configured at the right type of instance with the right amount of performance at the volume level or not. On the EBS end, we use the configured performance to inform how we allocate the appropriate instance and EBS infrastructure to support the volumes. By appropriately allocating infrastructure, we avoid resource contention. Additionally, we constantly monitor our infrastructure. This monitoring allows us to detect infrastructure failure (or imminent infrastructure failure) and therefore, move the volumes pro-actively to functioning hardware while the underlying infrastructure is either repaired or replaced (as appropriate). /ebs/faqs/;What level of performance consistency can I expect to see from my General Purpose SSD (gp3 and gp2) volumes?;When attached to EBS-optimized instances, General Purpose SSD (gp3 and gp2) volumes are designed to deliver within 10% of the provisioned IOPS performance 99% of the time in a given year. Your exact performance depends on your application’s I/O requirements. /ebs/faqs/;What level of performance latency can I expect to see from my General Purpose SSD (gp3 and gp2) volumes?;When attached to EBS-optimized instances, General Purpose SSD (gp3 and gp2) volumes can achieve single digit millisecond latencies. Your exact performance depends on your application’s I/O requirements. /ebs/faqs/;Do General Purpose SSD (gp3) volumes have burst?;No. All General Purpose SSD (gp3) volumes include 3,000 IOPS and 125 MB/s of consistent performance at no additional cost. Volumes can sustain the full 3,000 IOPS and 125 MB/s indefinitely. /ebs/faqs/;How does burst work on General Purpose SSD (gp2) volumes?;General Purpose SSD (gp2) volumes that are under 1,000 GB receive burst IOPS performance up to 3,000 IOPS for at least 30 min of sustained performance. 
Additionally, gp2 volumes deliver consistent performance of 3 IOPS per provisioned GB. For example, a 500 GB volume delivers a baseline of 1,500 IOPS and can burst to 3,000 IOPS for 60 minutes: the volume starts with an I/O credit bucket of 5.4 million credits (3,000 IOPS × 60 seconds × 30 minutes); while bursting, credits are consumed at 3,000 IOPS but replenished at the 1,500 IOPS baseline, so the net drain of 1,500 IOPS empties the bucket in 3,600 seconds, or 60 minutes. /ebs/faqs/;What is EBS Block Express?;EBS Block Express is the next generation of Amazon EBS storage server architecture purpose-built to deliver the highest levels of performance with sub-millisecond latency for block storage at cloud scale. Block Express does this by using Scalable Reliable Datagrams (SRD), a high-performance lower-latency network protocol, to communicate with Nitro System-based EC2 instances. This is the same high performance and low latency network interface that is used for inter-instance communication in Elastic Fabric Adapter (EFA) for High Performance Computing (HPC) and Machine Learning (ML) workloads. Additionally, Block Express offers modular software and hardware building blocks that can be assembled in many different ways, giving us the flexibility to design and deliver improved performance and new features at a faster rate. /ebs/faqs/;What workloads are suited for io2 Block Express?;io2 Block Express is suited for performance- and capacity-intensive workloads that benefit from lower latency, higher IOPS, higher throughput, or larger capacity in a single volume. These workloads include relational and NoSQL databases such as SAP HANA, Oracle, MS SQL, PostgreSQL, MySQL, MongoDB, Cassandra, and critical business operation workloads such as SAP Business Suite, NetWeaver, Oracle eBusiness, PeopleSoft, Siebel, and ERP workloads such as Infor LN and Infor M3. /ebs/faqs/;How do I know if an io2 volume is running on Block Express?;If an io2 volume is attached to these Amazon EC2 instances then it runs on Block Express, which offers sub-millisecond latency and capability to drive up to 256,000 IOPS and 4,000 MB/s throughput, and up to 64 TiB in size for a single volume. io2 volumes attached to all other instances do not run on Block Express and offer single-digit millisecond latency and capability to drive up to 64K IOPS and 1 GB/s throughput, and up to 16 TiB in size for a single volume. /ebs/faqs/;How can I use EBS direct APIs for Snapshots?;This feature can be used via the following APIs that can be called using AWS CLI or via AWS SDK. /ebs/faqs/;What block sizes are supported by GetSnapshotBlock and PutSnapshotBlock APIs?;GetSnapshotBlock and PutSnapshotBlock APIs support a 512 KiB block size. /ebs/faqs/;Will I be able to access my snapshots using the regular Amazon S3 API?;No, snapshots are only available through the Amazon EC2 API. /ebs/faqs/;Do volumes need to be unmounted to take a snapshot?;No, snapshots can be done in real time while the volume is attached and in use. However, snapshots only capture data that has been written to your Amazon EBS volume, which might exclude any data that has been locally cached by your application or OS.
To ensure consistent snapshots on volumes attached to an instance, we recommend detaching the volume cleanly, issuing the snapshot command, and then reattaching the volume. For Amazon EBS volumes that serve as root devices, we recommend shutting down the machine to take a clean snapshot. /ebs/faqs/;Does it take longer to snapshot an entire 16 TB volume as compared to an entire 1 TB volume?;By design, an EBS Snapshot of an entire 16 TB volume should take no longer than the time it takes to snapshot an entire 1 TB volume. However, the actual time taken to create a snapshot depends on several factors including the amount of data that has changed since the last snapshot of the EBS volume. /ebs/faqs/;How can I discover Amazon EBS snapshots that are shared with me?;You can find snapshots that are shared with you by selecting Private Snapshots from the list in the Snapshots section of the AWS Management Console. This section lists both snapshots that you own and snapshots that are shared with you. /ebs/faqs/;How can I find which Amazon EBS snapshots are shared globally?;You can find snapshots that are shared globally by selecting Public Snapshots from the list in the Snapshots section of the AWS Management Console. /ebs/faqs/;How can I find a list of Amazon public datasets stored in Amazon EBS Snapshots?;You can use the AWS Management Console to find public datasets stored as Amazon Snapshots. Log into the console, select the Amazon EC2 Service, select Snapshots and then filter on Public Snapshots. All information on public datasets is available in our AWS Public Datasets resource center. /ebs/faqs/;When would I use Fast Snapshot Restore (FSR)?;You should enable FSR on snapshots if you are concerned about latency of data access when you restore data from a snapshot to a volume and want to avoid the initial performance hit during initialization. FSR is intended to help with use cases such as virtual desktop infrastructure (VDI), backup & restore, test/dev volume copies, and booting from custom AMIs. By enabling FSR on your snapshot, you will see improved and predictable performance whenever you need to restore data from that snapshot. /ebs/faqs/;Does enabling FSR for my snapshot speed up snapshot creation?;No. FSR-enabled snapshots improve restoring backup data from your snapshot to your volumes. FSR-enabled snapshots do not speed up snapshot creation time. /ebs/faqs/;How do I enable Fast Snapshot Restore (FSR)?;To use the feature, invoke the new enable-fast-snapshot-restores API on a snapshot within the availability zone (AZ) where initialized volumes are to be restored. /ebs/faqs/;How do I use Fast Snapshot Restore (FSR)?;Volumes created from an FSR-enabled snapshot are fully initialized. However, there are limits on the number of volumes that can be created with immediate full performance. These limits are expressed in the form of a credit bucket that is associated with an FSR-enabled snapshot in a given AZ. The important things to know regarding credits: /ebs/faqs/;How many concurrent volumes can I create and what happens when I surpass this limit?;The size of the create credit bucket represents the maximum number and the balance of the credit bucket represents the number of creates available. When filled, up to 10 initialized volumes can be created from an FSR-enabled snapshot at once. Both the maximum size of the credit bucket and the credit bucket balance are published as CloudWatch metrics. Volume creations beyond the limit will proceed as if FSR is not enabled on the snapshot. 
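The FSR entries above describe the enable-fast-snapshot-restores API, the per-AZ credit bucket, and the fastRestored attribute on created volumes. Below is a minimal sketch of that workflow using the AWS SDK for Python (boto3); the Region, Availability Zone, and snapshot ID are placeholder values, not real resources.

```python
# Hedged sketch: enabling Fast Snapshot Restore (FSR) on a snapshot, checking its
# state, and then verifying whether restored volumes were created fully initialized.
import boto3

REGION = "us-east-1"
AZ = "us-east-1a"
SNAPSHOT_ID = "snap-0123456789abcdef0"  # placeholder

ec2 = boto3.client("ec2", region_name=REGION)

# Enable FSR for the snapshot in the AZ where initialized volumes will be restored.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=[AZ],
    SourceSnapshotIds=[SNAPSHOT_ID],
)

# Check the FSR state (enabling -> optimizing -> enabled).
states = ec2.describe_fast_snapshot_restores(
    Filters=[{"Name": "snapshot-id", "Values": [SNAPSHOT_ID]}]
)
for fsr in states["FastSnapshotRestores"]:
    print(fsr["AvailabilityZone"], fsr["State"])

# After creating volumes from the snapshot, the FastRestored flag returned by
# DescribeVolumes indicates whether a given volume was created fully initialized.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "snapshot-id", "Values": [SNAPSHOT_ID]}]
)
for vol in volumes["Volumes"]:
    print(vol["VolumeId"], vol.get("FastRestored"))
```

Consistent with the credit-bucket behavior described above, volumes created after the bucket is exhausted still succeed but report the flag as false.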
/ebs/faqs/;How do I know when a volume was created from an FSR-enabled snapshot?;When using FSR, a new EBS-specific attribute (fastRestored) is added in the DescribeVolumes API to denote the status at create time. When a volume is created from an FSR-enabled snapshot without sufficient volume-create credits, the create will succeed but the volume will not be initialized. /ebs/faqs/;What happens to FSR when I delete a snapshot?;When you delete a snapshot, the FSR for your snapshot is automatically disabled and FSR billing for the snapshot will be terminated. /ebs/faqs/;Can I enable FSR for public and private snapshots shared with me?;Yes, you can enable FSR for public snapshots as well as all private snapshots shared with your account. To enable FSR for shared snapshots, you can use the same set of API calls that you use for enabling FSR on snapshots you own. /ebs/faqs/;How am I billed for enabling FSR on a snapshot shared with me?;When you enable FSR on your shared snapshot, you will be billed at standard FSR rates (see pricing pages). Note that only your account will be billed for the FSR of the shared snapshot. The owner of the snapshot will not get billed when you enable FSR on the shared snapshot. /ebs/faqs/;What happens to the FSR for a shared snapshot when the owner of the snapshot stops sharing the snapshot or deletes it?;When the owner of your shared snapshot deletes the snapshot, or stops sharing the snapshot with you by revoking your permissions to create volumes from this snapshot, the FSR for your shared snapshot is automatically disabled and FSR billing for the snapshot will be terminated. /ebs/faqs/;What is Amazon EBS encryption?;Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data at rest security by encrypting your data using Amazon-managed keys, or keys you create and manage using the AWS Key Management Service (KMS). The encryption occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage. For more details, see Amazon EBS encryption in the Amazon EC2 User Guide. /ebs/faqs/;What is the AWS Key Management Service (KMS)?;AWS KMS is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS Key Management Service is integrated with other AWS services including Amazon EBS, Amazon S3, and Amazon Redshift, to make it simple to encrypt your data with encryption keys that you manage. AWS Key Management Service is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. To learn more about KMS, visit the AWS Key Management Service product page. /ebs/faqs/;Why should I use EBS encryption?;You can use Amazon EBS encryption to meet security and encryption compliance requirements for data at rest encryption in the cloud. Pairing encryption with existing IAM access control policies improves your company’s defense-in-depth strategy. /ebs/faqs/;How are my Amazon EBS encryption keys managed?;"Amazon EBS encryption handles key management for you. Each newly created volume gets a unique 256-bit AES key; Volumes created from the encrypted snapshots share the key. These keys are protected by our own key management infrastructure, which implements strong logical and physical security controls to prevent unauthorized access. 
Your data and associated keys are encrypted using the industry-standard AES-256 algorithm." /ebs/faqs/;Does EBS encryption support boot volumes?;Yes. /ebs/faqs/;Can I create an encrypted data volume at the time of instance launch?;Yes, using customer master keys (CMKs) that are either AWS-managed or customer-managed. You can specify the volume details and encryption through a RunInstances API call with the BlockDeviceMapping parameter or through the Launch Wizard in the EC2 Console. /ebs/faqs/;Can I create additional encrypted data volumes at the time of instance launch that are not part of the AMI?;Yes, you can create encrypted data volumes with either default or custom CMK encryption at the time of instance launch. You can specify the volume details and encryption through the BlockDeviceMapping object in the RunInstances API call or through the Launch Wizard in the EC2 Console. /ebs/faqs/;Can I launch an encrypted EBS instance from an unencrypted AMI?;Yes. See technical documentation for details. /ebs/faqs/;Can I share encrypted snapshots and AMIs with other accounts?;Yes. You can share encrypted snapshots and AMIs using a customer-managed customer master key (CMK) with other AWS accounts. See technical documentation for details. /ebs/faqs/;Can I ensure that all new volumes created are always encrypted?;Yes, you can enable EBS encryption by default with a single setting per region. This ensures that all new volumes are always encrypted. Refer to technical documentation for more details. /ebs/faqs/;Will I be billed for the IOPS provisioned on a Provisioned IOPS volume when it is disconnected from an instance?;"Yes, you will be billed for the IOPS provisioned when it is disconnected from an instance. When a volume is detached, we recommend you consider creating a snapshot and deleting the volume to reduce costs. For more information, see the ""Underutilized Amazon EBS Volumes"" cost optimization check in Trusted Advisor. This item checks your Amazon Elastic Block Store (Amazon EBS) volume configurations and warns when volumes appear to be underused." /ebs/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /ebs/faqs/;Is there an additional fee to enable Multi-Attach?;No. Multi-Attach can be enabled on an EBS Provisioned IOPS io1 volume and there will be charges for the storage (GB-Mo) and IOPS (IOPS-Mo) provisioned. /ebs/faqs/;Can I boot an EC2 instance using a Multi-Attach enabled volume?;No. /ebs/faqs/;What happens if all of my attached instances do not have the ‘deleteOnTermination’ flag set?;The volume's deleteOnTermination behavior is determined by the configuration of the last attached instance that is terminated. To ensure predictable delete on termination behavior, enable or disable 'deleteOnTermination' for all of the instances to which the volume is attached. /ebs/faqs/;Can my application use Multi-Attach?;If your application does not require storage-layer coordination of write operations (for example, a read-only application), or if it enforces application-level IO fencing, then your application can use Multi-Attach. /fsx/lustre/faqs/;What is Amazon FSx for Lustre?;Amazon FSx for Lustre makes it easy and cost-effective to launch, run, and scale the world’s most popular high-performance file system. 
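Returning to the EBS encryption questions above (encrypted data volumes at instance launch, and encryption by default), here is a minimal boto3 sketch. The AMI ID, instance type, device name, and KMS key ARN are placeholders, and the volume parameters are illustrative assumptions.

```python
# Hedged sketch: turning on EBS encryption by default for the Region, and launching
# an instance with an additional encrypted gp3 data volume via BlockDeviceMappings.
# AMI ID, instance type, device name, and KMS key ARN are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Per-Region setting so that all newly created EBS volumes are encrypted.
ec2.enable_ebs_encryption_by_default()
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])

# Launch an instance with an extra encrypted data volume that is not part of the AMI.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sdf",
            "Ebs": {
                "VolumeSize": 100,
                "VolumeType": "gp3",
                "Encrypted": True,
                # Omit KmsKeyId to use the AWS-managed key; supply a CMK ARN to use your own.
                "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
                "DeleteOnTermination": True,
            },
        }
    ],
)
```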
/fsx/lustre/faqs/;What use cases does Amazon FSx for Lustre support?;Use Amazon FSx for Lustre for workloads where speed matters, such as machine learning, high performance computing (HPC), video processing, financial modeling, genome sequencing, and electronic design automation (EDA). /fsx/lustre/faqs/;How do I get started with Amazon FSx for Lustre?;To use Amazon FSx for Lustre, you must have an AWS account. If you do not have one, sign up on Sign up for AWS. /fsx/lustre/faqs/;What is the difference between scratch and persistent deployment options?;Amazon FSx for Lustre provides two deployment options: scratch and persistent. /fsx/lustre/faqs/;How do I choose the right storage type for my application?;Choose SSD storage for latency-sensitive workloads or workloads requiring the highest levels of IOPS/throughput. Choose HDD storage for throughput-focused workloads that aren’t latency-sensitive. For HDD-based file systems, the optional SSD cache improves performance by automatically placing your most frequently read data on SSD (the cache size is 20% of your file system size). /fsx/lustre/faqs/;What instance types and AMIs work with Amazon FSx for Lustre?;FSx for Lustre is compatible with the most popular Linux-based AMIs, including Amazon Linux, Amazon Linux 2, Red Hat Enterprise Linux (RHEL), CentOS, SUSE Linux and Ubuntu. FSx for Lustre is also compatible with both x86-based EC2 instances and Arm-based EC2 instances powered by the AWS Graviton2 processor. With FSx for Lustre, you can mix and match the instance types and Linux AMIs that are connected to a single file system. /fsx/lustre/faqs/;How do I access an FSx for Lustre file system from a compute instance?;To access your file system from a Linux instance, you first install the open-source Lustre client on that instance. Once it’s installed, you can mount your file system using standard Linux commands. Once mounted, you can work with the files and directories in your file system just like you would with a local file system. /fsx/lustre/faqs/;How do I access an Amazon FSx for Lustre file system from an Amazon Elastic Kubernetes Service (EKS) cluster?;You can use persistent storage volumes backed by FSx for Lustre using the FSx for Lustre CSI driver from Amazon EKS or your self-managed Kubernetes on AWS. See the Amazon EKS documentation for details. /fsx/lustre/faqs/;How do I manage an FSx for Lustre file system?;Amazon FSx is a fully managed service, so all of the file storage infrastructure is managed for you. When you use Amazon FSx, you avoid the complexity of deploying and maintaining complex file system infrastructure. /fsx/lustre/faqs/;How can I set and enforce storage limits for file system users?;You can set and enforce storage limits based on the number of files or storage capacity consumed by a specific user or group. You can choose to set a hard limit that denies users and groups from consuming additional storage after exceeding their quota, or set a soft limit that provides users with a grace period to complete their workloads before converting into a hard limit. To simplify file system administration, you can also monitor user-and group-level storage usage on FSx for Lustre file systems. To learn more, visit the FSx for Lustre Storage Quotas documentation. /fsx/lustre/faqs/;If I have data in S3, how do I access it from Amazon FSx for Lustre?;You can link your Amazon FSx for Lustre file system to your Amazon S3 buckets, and FSx for Lustre makes your S3 data transparently accessible in your file system. 
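As a rough illustration of the S3-linking behavior described above, the sketch below creates an FSx for Lustre file system whose data repository is an existing S3 bucket, using boto3. The bucket name, subnet ID, storage capacity, and deployment type are assumptions for the example, not recommendations.

```python
# Hedged sketch: creating an FSx for Lustre file system linked to an S3 bucket.
# Bucket, subnet, capacity, and deployment type below are placeholder assumptions.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

resp = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB; smallest SSD-based size
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-dataset-bucket",         # S3 objects appear as lazy-loaded files
        "ExportPath": "s3://example-dataset-bucket/export",  # where changed files are exported
        "AutoImportPolicy": "NEW_CHANGED",                   # pick up new and changed S3 objects
    },
)

fs = resp["FileSystem"]
print(fs["FileSystemId"], fs["DNSName"], fs["LustreConfiguration"]["MountName"])

# On a Linux client with the open-source Lustre client installed, the file system is
# then mounted with a command along the lines of (see the FSx for Lustre user guide):
#   sudo mount -t lustre <DNSName>@tcp:/<MountName> /mnt/fsx
```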
/fsx/lustre/faqs/;How does Amazon FSx for Lustre stay synchronized with my S3 buckets?;You can configure FSx for Lustre to keep content synchronized in both directions between the file system and the linked S3 buckets. As you make changes to objects in your S3 bucket, FSx for Lustre automatically detects and imports the changes to your file system. As you make changes to files in your file system, FSx for Lustre automatically exports the changes to your S3 bucket. /fsx/lustre/faqs/;How are directories, symbolic links, POSIX metadata, and POSIX permissions imported from and exported to Amazon S3?;"FSx for Lustre stores directories and symbolic links (symlinks) as separate objects in your S3 bucket. For example, a directory is stored as an S3 object with a key name that ends with a slash (""/"")." /fsx/lustre/faqs/;Can I link my S3 bucket to multiple FSx for Lustre file systems?;Yes, you can create multiple FSx for Lustre file systems linked to the same S3 bucket. Doing so allows you to maintain a common data set in S3 that is reflected in multiple FSx for Lustre file systems, or to share results of computation on one FSx file system with users processing data on other file systems. You can modify files in any of the linked file systems. S3 and each file system will persist updates in the order they receive them. If you modify the same file in multiple file systems, you should coordinate updates at the application level to prevent conflicts. FSx for Lustre will not prevent conflicting writes in multiple locations. /fsx/lustre/faqs/;How do I monitor my file system’s activity?;Amazon FSx for Lustre provides native CloudWatch integration, allowing you to monitor file system health and performance metrics in real time. Example metrics include storage consumed, number of compute instance connections, throughput, and number of file operations per second. You can log all Amazon FSx API calls using AWS CloudTrail. /fsx/lustre/faqs/;How do I use Amazon FSx for Lustre to speed up my Amazon SageMaker machine learning training jobs?;Amazon FSx for Lustre can be an input data source for Amazon SageMaker. When you use FSx for Lustre as an input data source, Amazon SageMaker ML training jobs can be accelerated by eliminating the initial S3 download step. SageMaker jobs are started as soon as the FSx for Lustre file system is linked with the S3 bucket without needing to download the full machine learning training dataset from S3. Data is lazy loaded as needed from Amazon S3 for processing jobs. FSx for Lustre can also help you reduce total cost of ownership (TCO) by avoiding the repeated download of common objects (so you can save S3 request costs) for iterative jobs on the same dataset. /fsx/lustre/faqs/;How do I use Amazon FSx for Lustre with AWS Batch?;Amazon FSx for Lustre integrates with AWS Batch through EC2 Launch Templates. AWS Batch is a cloud-native batch scheduler for HPC, ML, and other asynchronous workloads. AWS Batch will automatically and dynamically size instances to job resource requirements, and use existing FSx for Lustre file systems when launching instances and running jobs. /fsx/lustre/faqs/;How do I use Amazon FSx for Lustre with AWS ParallelCluster?;AWS ParallelCluster is an AWS-supported open-source cluster management tool that helps you to deploy and manage High Performance Computing (HPC) clusters on AWS. 
AWS ParallelCluster supports automatic creation of a new Amazon FSx for Lustre file system or the ability to use an existing Amazon FSx for Lustre file system as part of the cluster creation process. /fsx/lustre/faqs/;What regions is Amazon FSx for Lustre available in?;Please refer to Regional Products and Services for details of Amazon FSx for Lustre service availability by region. /fsx/lustre/faqs/;If I have data on-premises how do I make it available to Amazon FSx for Lustre to process it?;If you have high-performance or data processing workloads running on-premises and demand for computing capacity spikes, you can cloud burst your workloads to Amazon FSx for Lustre by using AWS Direct Connect or AWS VPN. /fsx/lustre/faqs/;What performance can I expect from Amazon FSx for Lustre?;Amazon FSx for Lustre file systems scale to TB/s of throughput and millions of IOPS. FSx for Lustre also supports concurrent access to the same file or directory from thousands of compute instances. FSx for Lustre provides consistent, sub-millisecond latencies for file operations. See Amazon FSx Performance documentation for more details. /fsx/lustre/faqs/;How does throughput scale with storage capacity?;FSx for Lustre file systems automatically provision throughput for each TiB of storage provisioned. SSD-based file systems can be provisioned with 125, 250, 500, or 1,000 MB/s of throughput per TiB of storage provisioned. HDD-based file systems can be provisioned with 12 or 40 MB/s of throughput per TiB of storage provisioned. /fsx/lustre/faqs/;How do I change my file system's storage capacity?;"You can increase the storage capacity of your file system by clicking “Update” in the Amazon FSx Console, or by calling “UpdateFileSystem” in the AWS CLI/API and specifying the desired storage capacity." /fsx/lustre/faqs/;How does Amazon FSx grow the storage capacity of my file system? How long does it take?;Amazon FSx for Lustre stores data across multiple network file servers and stores file metadata on a dedicated metadata server with its own storage. When you request an update to your file system’s storage capacity, Amazon FSx automatically adds new network file servers and scales your metadata server. While scaling storage capacity, the file system may be unavailable for a few minutes. Client requests sent while the file system is unavailable will transparently retry and eventually succeed after scaling is complete. /fsx/lustre/faqs/;How frequently and in what increments can I increase my file system’s storage capacity?;You can increase your file system’s storage capacity every six hours, and in the same increments that you can provision new file systems. Note that your previous scaling request, including optimization, must be complete when you issue a new scaling request. /fsx/lustre/faqs/;How many instances can connect to a file system?;An FSx for Lustre file system can be concurrently accessed by thousands of compute instances. /fsx/lustre/faqs/;What file system sizes are supported by FSx for Lustre and what is the increment granularity?;Scratch and persistent SSD-based file systems can be created in sizes of 1.2 TiB or in increments of 2.4 TiB. Persistent HDD-based file systems with 12 MB/s and 40 MB/s of throughput per TiB can be created in increments of 6.0 TiB and 1.8 TiB, respectively. /fsx/lustre/faqs/;How many file systems can I create?;There is a 100-file system limit per account, which can be increased upon request. /fsx/lustre/faqs/;Does Amazon FSx for Lustre support data encryption?;Yes. 
Amazon FSx for Lustre always encrypts your file system data and your backups at-rest using keys you manage through AWS Key Management Service (KMS). Amazon FSx encrypts data-in-transit when accessed from supported EC2 instances. See the Amazon FSx documentation for details on regions where in-transit encryption is supported. /fsx/lustre/faqs/;What access control capabilities does Amazon FSx provide?;Every FSx for Lustre resource is owned by an AWS account, and permissions to create or access a resource are governed by permissions policies. You specify the Amazon Virtual Private Cloud (VPC) in which your file system is made accessible, and you control which resources within the VPC have access to your file system using VPC Security Groups. You control who can administer your file system and backup resources (create, delete, etc.) using AWS IAM. /fsx/lustre/faqs/;Does Amazon FSx support shared VPCs?;Yes, with Amazon FSx, you can create and use file systems in shared Amazon Virtual Private Clouds (VPCs) from both owner accounts and participant accounts with which the VPC has been shared. VPC sharing enables you to reduce the number of VPCs that you need to create and manage, while you still benefit from using separate accounts for billing and access control. /fsx/lustre/faqs/;What compliance programs does Amazon FSx support?;AWS has the longest-running compliance program in the cloud and is committed to helping customers navigate their requirements. Amazon FSx has been assessed to meet global and industry security standards. It complies with PCI DSS, ISO 9001, 27001, 27017, and 27018, and SOC 1, 2, and 3, in addition to being HIPAA eligible. That makes it easier for you to verify our security and meet your own obligations. For more information and resources, visit our compliance pages. You can also go to the Services in Scope by Compliance Program page to see a full list of services and certifications. /fsx/lustre/faqs/;When and why should I use the persistent FSx for Lustre versus the scratch FSx for Lustre deployment option?;Use scratch file systems when you need cost-optimized storage for short-term, processing-heavy workloads. /fsx/lustre/faqs/;Does Amazon FSx offer a Service Level Agreement (SLA)?;Yes. The Amazon FSx SLA provides for a service credit if a customer's monthly uptime percentage is below our service commitment in any billing cycle. /fsx/lustre/faqs/;What are the availability and durability characteristics of FSx for Lustre file systems?;Amazon FSx for Lustre provides a parallel file system. In parallel file systems, data is stored across multiple network file servers to maximize performance and reduce bottlenecks, and each server has multiple disks. Larger file systems have more file servers and disks than smaller file systems. /fsx/lustre/faqs/;How do I take backups on Amazon FSx for Lustre?;Amazon FSx takes daily automatic backups of your file systems, and allows you to take additional backups at any point. Amazon FSx backups are incremental, which means that only the changes after your most recent backup are saved, thus saving on backup storage costs by not duplicating data. /fsx/lustre/faqs/;What durability and consistency does Amazon FSx provide for backups?;Backups are highly durable and file-system-consistent. To ensure high durability, Amazon FSx stores backups with 99.999999999% (11 9's) of durability on Amazon S3. 
Backups also present a consistent view of your file system, meaning that if metadata exists for a file in the backup, then the file’s associated data is also included in the backup. /fsx/lustre/faqs/;What is the daily backup window?;The daily backup window is a 30-minute window that you specify when creating a file system. Amazon FSx takes the daily automatic backup of your file system during this window. At some point during the daily backup window, storage I/O will be briefly suspended while the backup process initializes (typically a few seconds). /fsx/lustre/faqs/;What is the daily backup retention period?;The daily backup retention period specified for your file system (7 days by default) determines the number of days your daily automatic backups are kept. /fsx/lustre/faqs/;What happens to my backups if I delete my file system?;When you delete your file system, all automatic daily backups associated with the file system are deleted. Any user-initiated backups you created will remain. /fsx/lustre/faqs/;Can I take a backup of any FSx for Lustre file system?;You can take a backup of any FSx for Lustre file system that has persistent storage and is a standalone file system (i.e., not linked to an Amazon S3 bucket). /fsx/lustre/faqs/;How do I protect my FSx for Lustre resources using AWS Backup?;You first enable Amazon FSx as a protected service in AWS Backup. You can then configure backups of your Amazon FSx resources via the AWS Backup console, API, or CLI. You can create both scheduled and on-demand backups of Amazon FSx resources via AWS Backup and restore these backups as new Amazon FSx file systems. Amazon FSx file systems can be added to backup plans in the same way as other AWS resources, either by specifying the ARN or by tagging the Amazon FSx file system for protection in the backup plan. Learn more in the AWS Backup documentation. /fsx/lustre/faqs/;How do I copy backups of my Amazon FSx file systems across AWS Regions and AWS accounts?;You can configure your backup plans on AWS Backup to periodically create and copy backups of your Amazon FSx file systems to other AWS Regions, other AWS accounts, or both, with your desired frequency and retention policy. For cross-account backup copies, you use your AWS Organizations management account to designate source and destination accounts. /fsx/lustre/faqs/;How will I be charged and billed for my use of Amazon FSx for Lustre?;You pay only for the resources you use. See the Amazon FSx for Lustre pricing page for details. /fsx/lustre/faqs/;How am I billed when scaling the storage capacity of my file system?;Storage capacity scaling requests are processed by adding new storage capacity to your file system. You will be billed for new storage capacity once the new file servers have been added to your file system, and the file system status changes from UPDATING to AVAILABLE. /fsx/lustre/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /fsx/windows/faqs/;What is an Amazon FSx for Windows File Server file system, and what is a file share?;A file system is the primary resource in Amazon FSx. It’s where you store and access your files and folders. It is associated with a storage amount and a throughput capacity, as well as a DNS name for accessing it. 
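The backup questions above map to the same Amazon FSx backup APIs, whether the file system is FSx for Lustre (persistent, non-S3-linked) or FSx for Windows File Server. A minimal boto3 sketch follows; the file system ID and tag value are placeholders.

```python
# Hedged sketch: taking a user-initiated backup of an FSx file system and listing
# existing backups. The file system ID and tag value are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")
FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # placeholder

# Create an on-demand backup (for FSx for Lustre this requires a persistent,
# non-S3-linked file system, as noted above).
backup = fsx.create_backup(
    FileSystemId=FILE_SYSTEM_ID,
    Tags=[{"Key": "Name", "Value": "pre-maintenance-backup"}],
)
print(backup["Backup"]["BackupId"], backup["Backup"]["Lifecycle"])

# List backups for that file system; automatic daily backups and user-initiated
# backups both appear here.
backups = fsx.describe_backups(
    Filters=[{"Name": "file-system-id", "Values": [FILE_SYSTEM_ID]}]
)
for b in backups["Backups"]:
    print(b["BackupId"], b["Type"], b["Lifecycle"])
```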
/fsx/windows/faqs/;How do I get started with FSx for Windows File Server?;To use Amazon FSx, you must have an AWS account. If you do not already have an AWS account, you can sign up for an AWS account. /fsx/windows/faqs/;What instance types and OS versions can I access my file system from?;By supporting the SMB protocol, Amazon FSx can connect your file system to Amazon EC2, Amazon ECS, VMware Cloud on AWS, Amazon WorkSpaces, and Amazon AppStream 2.0 instances. To ensure compatibility with your applications, Amazon FSx supports all Windows versions starting from Windows Server 2008 and Windows 7, and current versions of Linux (using the cifs-utils tool). /fsx/windows/faqs/;How do I access Amazon FSx for Windows File Server from Amazon Elastic Container Service (Amazon ECS) containers?;You can use Amazon FSx to enable persistent, shared storage for your containerized applications running on Amazon ECS. You can easily access Amazon FSx for Windows Server file systems on Amazon ECS by referencing your file systems in your task definition. Getting started instructions can be found in the Amazon ECS documentation. /fsx/windows/faqs/;How do I access data on my Amazon FSx file system?;From within Windows, use the “Map Network Drive” feature to map a drive letter (e.g., Z:) to a file share on your Amazon FSx file system. You can also access your file system from Linux using the cifs-utils tool to mount your file share. Once you've done this, you can work with the files and folders in your Amazon FSx file system just like you would with a local file system. /fsx/windows/faqs/;How do I manage a file system?;Amazon FSx is a fully-managed service, so all of the file storage infrastructure is managed for you. When you use Amazon FSx, you avoid the complexity of deploying and maintaining complex file system infrastructure. /fsx/windows/faqs/;How do I migrate my existing file data into an Amazon FSx file system?;If you’d like to migrate your existing files to Amazon FSx for Windows File Server file systems, we recommend the use of AWS DataSync, an online data transfer service designed to simplify, automate, and accelerate copying large amounts of data to and from AWS storage services. DataSync copies data over the internet or AWS Direct Connect. As a fully managed service, DataSync removes much of the need to modify applications, develop scripts, or manage infrastructure. For more information, see Migrating Existing Files to Amazon FSx for Windows File Server Using AWS DataSync guide. /fsx/windows/faqs/;How do I monitor my file system’s activity?;You can monitor storage capacity and file system activity using Amazon CloudWatch, monitor all Amazon FSx API calls using AWS CloudTrail, and monitor end user actions with file access auditing using Amazon CloudWatch Logs and Amazon Kinesis Data Firehose. /fsx/windows/faqs/;What workloads is Amazon FSx for Windows File Server designed for?;Amazon FSx was designed for a broad set of use cases that require Windows shared file storage, like CRM, ERP, custom or .NET applications, home directories, data analytics, media and entertainment workflows, web serving and content management, software build environments, and Microsoft SQL Server. /fsx/windows/faqs/;When should I use Amazon FSx for Windows File Server vs. other Amazon FSx file system types?;Please refer to the Choosing an Amazon FSx file system page for a guide on how to choose between the different Amazon FSx file storage offerings. 
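The monitoring answer above mentions Amazon CloudWatch metrics for FSx for Windows File Server. A minimal sketch of pulling one such metric with boto3 follows; the file system ID is a placeholder, and FreeStorageCapacity is used as an example metric name from the AWS/FSx namespace.

```python
# Hedged sketch: reading an FSx for Windows File Server metric from CloudWatch.
# The file system ID is a placeholder; FreeStorageCapacity is an example metric.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/FSx",
    MetricName="FreeStorageCapacity",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,              # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], "bytes free")
```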
/fsx/windows/faqs/;How does Amazon FSx support access from my on-premises environment?;You can access Amazon FSx file systems from your on-premises environment using an AWS Direct Connect or AWS VPN connection between your on-premises datacenter and your Amazon VPC. With support for AWS Direct Connect, Amazon FSx allows you to access your file system over a dedicated network connection from your on-premises environment. With support for AWS VPN, Amazon FSx allows you to access your file system from your on-premises devices over a secure and private tunnel. /fsx/windows/faqs/;Does Amazon FSx support access from multiple VPCs, accounts, and regions?;Yes, you can access your Amazon FSx file systems from multiple Amazon VPCs, AWS accounts, and AWS Regions using VPC Peering connections or AWS Transit Gateway. A VPC Peering connection is a networking connection between two VPCs that enables you to route traffic between them. A transit gateway is a network transit hub that you can use to interconnect your VPCs. With VPC Peering and AWS Transit Gateway, you can even interconnect VPCs across AWS accounts and AWS Regions. /fsx/windows/faqs/;Does Amazon FSx support shared VPCs?;Yes, with Amazon FSx, you can create and use file systems in shared Amazon Virtual Private Clouds (VPCs) from both owner accounts and participant accounts with which the VPC has been shared. VPC sharing enables you to reduce the number of VPCs that you need to create and manage, while you still benefit from using separate accounts for billing and access control. /fsx/windows/faqs/;What regions is Amazon FSx for Windows File Server available in?;Please refer to Regional Products and Services for details of Amazon FSx for Windows File Server service availability by region. /fsx/windows/faqs/;Does Amazon FSx offer a Service Level Agreement (SLA)?;Yes. The Amazon FSx SLA provides for a service credit if a customer's monthly uptime percentage is below our service commitment in any billing cycle. /fsx/windows/faqs/;What performance does FSx for Windows File Server provide?;Amazon FSx provides consistent sub-millisecond latencies with SSD storage, and single-digit millisecond latencies with HDD storage for file operations. For all file systems, including those with HDD storage, Amazon FSx provides a fast (in-memory) cache on the file server, so you can get high performance and sub-millisecond latencies for actively accessed data irrespective of storage type. /fsx/windows/faqs/;How much data can I store on Amazon FSx for Windows File Server?;You can run up to thousands of Amazon FSx for Windows File Server file systems in your account, with each file system having up to 64 TB of data. To unify your data from multiple file systems into one common folder structure, Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) to organize shares into a single folder structure up to hundreds of PB in size. /fsx/windows/faqs/;Can I change my file system’s storage capacity and throughput capacity?;"Yes, you can increase the storage capacity, and increase or decrease the throughput capacity of your file system – while continuing to use it – at any time by clicking “Update storage” or “Update throughput” in the Amazon FSx Console, or by calling “update-file-system” in the AWS CLI/API and specifying the desired level." /fsx/windows/faqs/;How does Amazon FSx grow the storage capacity of my file system? 
How long does it take?;Amazon FSx grows the storage capacity of your existing file system without any downtime impact to your applications and users by adding larger disks, transparently migrating your data in the background from the original disks to the new ones, and then removing the original disks from your file system – the standard process for growing storage on a Windows File Server. /fsx/windows/faqs/;How does Amazon FSx change the throughput capacity of my file system? How long does it take?;Amazon FSx updates the throughput capacity of your file system by switching out the file servers powering your file system to meet the new throughput capacity configuration. This update process typically takes a few minutes to complete. Multi-AZ file systems will experience an automatic failover and failback during this process, and single-AZ file systems will be offline for a brief period of time. /fsx/windows/faqs/;How do I scale out performance across multiple file systems?;Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) Namespaces to scale out performance across multiple file systems in the same namespace up to tens of GBps and millions of IOPS. /fsx/windows/faqs/;How does Amazon FSx integrate with Microsoft Active Directory (AD)?;Amazon FSx works with Microsoft Active Directory (AD) to integrate with your existing Windows environments. When creating a file system with Amazon FSx, you join it to your Microsoft AD -- either an AWS Managed Microsoft AD or your self-managed Microsoft AD. Your users can then use their existing AD-based user identities to authenticate themselves and access the Amazon FSx file system, and to control access to individual files and folders. /fsx/windows/faqs/;What access control capabilities does Amazon FSx provide?;Amazon FSx provides standard Windows permissions (full support for Windows access control lists (ACLs)) for files and folders. /fsx/windows/faqs/;Does Amazon FSx for Windows File Server support data encryption?;Yes. Amazon FSx for Windows File Server always encrypts your file system data and your backups at-rest using keys you manage through AWS Key Management Service (KMS). Amazon FSx encrypts data-in-transit using SMB Kerberos session keys, when you access your file system from clients that support SMB 3.0 (and higher). You can also choose to enforce in-transit encryption on all connections to your file system by limiting access to only those clients that support SMB 3.0 and higher to help meet compliance needs. /fsx/windows/faqs/;What compliance programs does Amazon FSx support?;AWS has the longest-running compliance program in the cloud and is committed to helping customers navigate their requirements. Amazon FSx has been assessed to meet global and industry security standards. It complies with PCI DSS, ISO 9001, 27001, 27017, and 27018, and SOC 1, 2, and 3, in addition to being HIPAA eligible. That makes it easier for you to verify our security and meet your own obligations. For more information and resources, visit our compliance pages. You can also go to the Services in Scope by Compliance Program page to see a full list of services and certifications. 
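The storage and throughput scaling answers above correspond to the UpdateFileSystem API. A hedged boto3 sketch follows; the file system ID and target values are placeholders, and the new storage capacity is assumed to be larger than the current one (FSx requires an increase).

```python
# Hedged sketch: increasing storage capacity and changing throughput capacity of an
# FSx for Windows File Server file system. IDs and target values are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")
FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # placeholder

# Increase storage capacity (GiB); the new value must exceed the current capacity.
fsx.update_file_system(
    FileSystemId=FILE_SYSTEM_ID,
    StorageCapacity=2048,
)

# Raise or lower throughput capacity (MB/s); FSx swaps the file servers behind the
# scenes, as described above. In practice you may need to wait for the previous
# update to finish before issuing this one.
fsx.update_file_system(
    FileSystemId=FILE_SYSTEM_ID,
    WindowsConfiguration={"ThroughputCapacity": 64},
)

# Track progress of the scaling and background optimization via administrative actions.
resp = fsx.describe_file_systems(FileSystemIds=[FILE_SYSTEM_ID])
for action in resp["FileSystems"][0].get("AdministrativeActions", []):
    print(action["AdministrativeActionType"], action["Status"])
```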
/fsx/windows/faqs/;What does Amazon FSx for Windows File Server do to ensure high availability and durability?;To ensure high availability and durability, Amazon FSx automatically replicates your data within an Availability Zone (AZ) to protect it from component failure, continuously monitors for hardware failures, and automatically replaces infrastructure components in the event of a failure. You can also create a Multi-AZ file system, which provides redundancy across multiple AZs. Amazon FSx also takes highly durable backups (stored in S3) of your file system daily using Windows’s Volume Shadow Copy Service, and allows you to take additional backups at any point. /fsx/windows/faqs/;What is the difference between Single-AZ and Multi-AZ file systems?;Single-AZ file systems are composed of a single Windows file server instance and a set of storage volumes within a single Availability Zone (AZ). With Single-AZ file systems, data is automatically replicated to protect it from the failure of a single component in most cases. Amazon FSx continuously monitors for hardware failures, and automatically recovers from failure events by replacing the failed infrastructure component. /fsx/windows/faqs/;What events would cause a Multi-AZ Amazon FSx file system to initiate a failover to the standby file server?;Amazon FSx automatically performs a failover in the event of a loss of availability to the active file server. This can be caused by a failure in the active Availability Zone, or by a failure of the active file server itself. Amazon FSx will also temporarily fail over to the standby file server during planned maintenance. /fsx/windows/faqs/;What happens during a failover in a Multi-AZ file system and how long does a failover take?;Amazon FSx detects and automatically recovers from failures so that you can resume file system operations as quickly as possible without administrative intervention. When failing over from one file server to another, the new active file server will automatically begin serving all file system reads and write requests. Failovers, as defined by the interval between the detection of the failure on the active and promotion of the other file server to active, typically complete within 30 seconds. Failback will occur once the file server in the preferred subnet is fully recovered (typically under 20 minutes), and also completes within 30 seconds. /fsx/windows/faqs/;How does Amazon FSx keep Windows Server software up to date?;Amazon FSx performs routine software updates for the Windows Server software it manages. The maintenance window is your opportunity to control what day and time of the week this software patching occurs. Patching occurs infrequently, typically once every several weeks, and should require only a fraction of your 30-minute maintenance window. /fsx/windows/faqs/;How does Amazon FSx enable me to protect my data?;Beyond automatically replicating your file system's data to ensure high durability, Amazon FSx provides you with two options to further protect the data stored on your file systems: Windows shadow copies to enable your end-users to easily undo file changes and compare file versions by restoring files to previous versions, and backups to support your backup retention and compliance needs. /fsx/windows/faqs/;How do my end-users restore files to previous versions?;Amazon FSx supports file- or folder-level restores to previous versions by supporting Windows shadow copies, which are snapshots of your file system at a point in time. 
With shadow copies enabled, your end-users can view and restore individual files or folders from a prior snapshot with the click of a button in Windows File Explorer. Storage administrators using Amazon FSx can easily schedule shadow copies to be taken periodically using Windows PowerShell commands. /fsx/windows/faqs/;How do I take backups on Amazon FSx for Windows File Server?;Creating regular backups for your file system is a best practice that complements the replication that Amazon FSx performs for your file system. Working with Amazon FSx backups is easy, whether it's creating backups, restoring a file system from a backup, or deleting a backup. /fsx/windows/faqs/;What durability and consistency does Amazon FSx provide for backups?;With Amazon FSx, backups are file-system-consistent and highly durable. To ensure file-system-consistency, Amazon FSx uses Windows’s Volume Shadow Copy Service, allowing you to restore to a point in time snapshot of your file system. To ensure high durability, Amazon FSx stores backups in Amazon S3. /fsx/windows/faqs/;What is the daily backup window?;The daily backup window is a 30-minute window that you specify when creating a file system. Amazon FSx takes the daily automatic backup of your file system during this window. At some point during the daily backup window, storage I/O might be suspended briefly while the backup process initializes (typically under a few seconds). /fsx/windows/faqs/;What is the daily backup retention period?;The daily backup retention period specified for your file system (7 days by default) determines the number of days your daily automatic backups are kept. /fsx/windows/faqs/;What happens to my backups if I delete my file system?;When you delete your file system, all automatic daily backups associated with the file system are deleted. Any user-initiated backups you created will remain. /fsx/windows/faqs/;How do I protect Amazon FSx resources using AWS Backup?;You first enable Amazon FSx as a protected service in AWS Backup. You can then configure backups of your Amazon FSx resources via the AWS Backup console, API or CLI. You can create both scheduled and on-demand backups of Amazon FSx resources via AWS Backup and restore these backups as new Amazon FSx file systems. Amazon FSx file systems can be added to backup plans in the same way as other AWS resources, either by specifying the ARN or by tagging the Amazon FSx file system for protection in the backup plan. To learn more, visit the AWS Backup documentation. /fsx/windows/faqs/;How do I copy backups of my Amazon FSx file systems across AWS Regions and AWS accounts?;You can configure your backup plans on AWS Backup to periodically create and copy backups of your Amazon FSx file systems to other AWS Regions, other AWS accounts, or both, with your desired frequency and retention policy. For cross-account backup copies, you use your AWS Organizations management account to designate source and destination accounts. /fsx/windows/faqs/;How can I automatically replicate my FSx for Windows File Server file system to another file system?;You can use AWS DataSync to schedule the periodic replication of your Amazon FSx for Windows File Server file system to a second file system. This capability is available for both same-region and cross-region replications. To learn more, see Data Transfer between AWS Storage services. /fsx/windows/faqs/;What storage options does Amazon FSx support for my file system? 
How should I choose?;Amazon FSx provides two types of storage – Hard Disk Drives (HDD) and Solid State Drives (SSD) – to allow you to optimize cost/performance to meet your workload needs. HDD storage is designed for a broad spectrum of workloads, including home directories, user and departmental shares, and content management systems. SSD storage is designed for the highest-performance and most latency-sensitive workloads, including databases, media processing workloads, and data analytics applications. /fsx/windows/faqs/;Can I change the storage type (SSD/HDD) of my file system?;While you cannot change the storage type on your existing file system, you can take a backup and restore that backup to a new file system with a different storage type. /fsx/windows/faqs/;What is Data Deduplication?;Large datasets often have redundant data. For example, with user file shares, multiple users tend to have files that are similar or identical. As another example, with software development shares, most binaries remain largely unchanged from build to build. Data Deduplication is a feature in Windows Server that reduces costs that are associated with redundant and uncompressed data by storing duplicated portions of files only once and compressing the data after deduplication. Learn more by visiting the Using Data Deduplication documentation. /fsx/windows/faqs/;How much storage savings can I expect with Data Deduplication?;The storage savings you can achieve with Data Deduplication depend on the nature of your data set, including how much duplication exists across files. Typical savings average 50-60% for general-purpose file shares, with savings ranging from 30-50% for user documents, and 70-80% for software development data sets. /fsx/windows/faqs/;How do I enable Data Deduplication on my file system?;You can enable Data Deduplication on your file system by running a single command (Enable-FSxDedup) on the Amazon FSx remote management CLI via PowerShell. Once enabled, Data Deduplication continually and automatically scans and optimizes your files in the background. /fsx/windows/faqs/;How do I monitor and control individual users' storage consumption on my file system?;You can enable and configure user storage quotas on your file system to monitor usage and allocate storage costs to individual teams, and to impose restrictions at a user level to prevent any one user from consuming excessive storage. /fsx/windows/faqs/;How will I be charged and billed for my use of Amazon FSx for Windows File Server?;You pay only for the resources you use. You are billed hourly for your file systems, based on their deployment type (Single-AZ or Multi-AZ), storage type (SSD or HDD), storage capacity (priced per GB-month), and throughput capacity (priced per MBps-month). You are billed hourly for your backup storage (priced per GB-month). For pricing information, please visit the Amazon FSx pricing page. /fsx/windows/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /fsx/windows/faqs/;How am I billed when scaling the storage capacity or throughput capacity of my file system?;Scaling storage capacity and throughput capacity is available at no additional cost. 
Once storage capacity scaling is complete (typically within 1-2 minutes of you requesting a storage capacity increase), you will be billed for that storage capacity moving forward. Once throughput capacity scaling is complete (i.e., the new throughput capacity is available), you will be billed for the new throughput capacity from that point forward. /s3/faqs/;What is Amazon S3?;Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. S3 is a simple storage service that offers industry leading durability, availability, performance, security, and virtually unlimited scalability at very low costs. /s3/faqs/;What can I do with Amazon S3?;Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere. Using this service, you can easily build applications that make use of cloud native storage. Since Amazon S3 is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability. /s3/faqs/;How can I get started using Amazon S3?;To sign up for Amazon S3, visit the S3 console. You must have an Amazon Web Services account to access this service. If you do not already have an account, you will be prompted to create one when you begin the Amazon S3 sign-up process. After signing up, refer to the Amazon S3 documentation, view the S3 getting started materials, and see the additional resources in the resource center to begin using Amazon S3. /s3/faqs/;What can I do with Amazon S3 that I cannot do with an on-premises solution?;Amazon S3 lets you leverage Amazon’s own benefits of massive scale with no up-front investment or performance compromises. By using Amazon S3, it is inexpensive and simple to ensure your data is quickly accessible, always available, and secure. /s3/faqs/;What kind of data can I store in Amazon S3?;You can store virtually any kind of data in any format. Refer to the Amazon Web Services Licensing Agreement for details. /s3/faqs/;How much data can I store in Amazon S3?;The total volume of data and number of objects you can store in Amazon S3 are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB. For objects larger than 100 MB, customers should consider using the multipart upload capability. /s3/faqs/;Can I have a bucket that has different objects in different storage classes?;Yes, you can have an S3 bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. /s3/faqs/;What does Amazon do with my data in Amazon S3?;Amazon stores your data and tracks its associated usage for billing purposes. Amazon will not otherwise access your data for any purpose outside of the Amazon S3 offering, except when required to do so by law. Refer to the Amazon Web Services Licensing Agreement for details. /s3/faqs/;Does Amazon store its own data in Amazon S3?;Yes. Organizations across Amazon use Amazon S3 for a wide variety of projects. Many of these projects use Amazon S3 as their authoritative data store and rely on it for business-critical operations. /s3/faqs/;How is Amazon S3 data organized?;Amazon S3 is a simple key-based object store. When you store data, you assign a unique object key that can later be used to retrieve the data. 
Keys can be any string, and they can be constructed to mimic hierarchical attributes. Alternatively, you can use S3 Object Tagging to organize your data across all of your S3 buckets and/or prefixes. /s3/faqs/;How do I interface with Amazon S3?;Amazon S3 provides a simple, standards-based REST web services interface that is designed to work with any internet-development toolkit. The operations are intentionally made simple to make it easy to add new distribution protocols and functional layers. /s3/faqs/;How reliable is Amazon S3?;Amazon S3 gives you access to the same highly scalable, highly available, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The S3 Standard storage class is designed for 99.99% availability, the S3 Standard-IA storage class, S3 Intelligent-Tiering storage class, and the S3 Glacier Instant Retrieval storage classes are designed for 99.9% availability, the S3 One Zone-IA storage class is designed for 99.5% availability, and the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive class are designed for 99.99% availability and an SLA of 99.9%. All of these storage classes are backed by the Amazon S3 Service Level Agreement. /s3/faqs/;How will Amazon S3 perform if traffic from my application suddenly spikes?;Amazon S3 is designed from the ground up to handle traffic for any internet application. Pay-as-you-go pricing and unlimited capacity ensures that your incremental costs don’t change and that your service is not interrupted. Amazon S3’s massive scale lets you spread the load evenly, so that no individual application is affected by traffic spikes. /s3/faqs/;Does Amazon S3 offer a Service Level Agreement (SLA)?;Yes. The Amazon S3 SLA provides for a service credit if a customer's monthly uptime percentage is below our service commitment in any billing cycle. /s3/faqs/;What is the consistency model for Amazon S3?;Amazon S3 delivers strong read-after-write consistency automatically, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost. /s3/faqs/;Why does strong read-after-write consistency help me?;"Strong read-after-write consistency helps when you need to immediately read an object after a write; for example, when you often read and list immediately after writing objects. High-performance computing workloads also benefit in that when an object is overwritten and then read many times simultaneously, strong read-after-write consistency provides assurance that the latest write is read across all reads. These applications automatically and immediately benefit from strong read-after-write consistency. The strong consistency of S3 also reduces costs by removing the need for extra infrastructure to provide strong consistency." /s3/faqs/;Where is my data stored?;You specify an AWS Region when you create your Amazon S3 bucket. For S3 Standard, S3 Standard-IA, S3 Intelligent-Tiering, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones (AZs). AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other. Objects stored in the S3 One Zone-IA storage class are stored redundantly within a single Availability Zone in the AWS Region you select. 
For S3 on Outposts, your data is stored in your Outpost on-premises environment, unless you manually choose to transfer it to an AWS Region. Refer to AWS regional services list for details of Amazon S3 service availability by AWS Region. /s3/faqs/;What is an AWS Region?;An AWS Region is a physical location around the world where AWS clusters data centers. Each group of logical data centers within a Region is known as an Availability Zone (AZ). Each AWS Region consists of a minimum of three, isolated, and physically separate AZs within a geographic area. Unlike other cloud providers, who often define a Region as a single data center, the multiple AZ design of every AWS Region offers advantages for customers. Each AZ has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks. /s3/faqs/;What is an AWS Availability Zone (AZ)?;An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. AZs give customers the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. /s3/faqs/;How do I decide which AWS Region to store my data in?;There are several factors to consider based on your specific application. For instance, you may want to store your data in a Region that is near your customers, your data centers, or other AWS resources to reduce data access latencies. You may also want to store your data in a Region that is remote from your other operations for geographic redundancy and disaster recovery purposes. You should also consider Regions that let you address specific legal and regulatory requirements and/or reduce your storage costs—you can choose a lower-priced Region to save money. For S3 pricing information, visit the Amazon S3 pricing page. /s3/faqs/;In which parts of the world is Amazon S3 available?;Amazon S3 is available in AWS Regions worldwide, and you can use Amazon S3 regardless of your location. You just have to decide in which AWS Region(s) you want to store your Amazon S3 data. See the AWS regional services list for a list of AWS Regions in which S3 is available today. /s3/faqs/;How much does Amazon S3 cost?;With Amazon S3, you pay only for what you use. There is no minimum charge. You can estimate your monthly bill using the AWS Pricing Calculator. /s3/faqs/;How will I be charged and billed for my use of Amazon S3?;There are no set-up charges or commitments to begin using Amazon S3. At the end of the month, you will automatically be charged for that month’s usage. You can view your charges for the current billing period at any time by logging into your Amazon Web Services account, and selecting the 'Billing Dashboard' associated with your console profile. /s3/faqs/;Why do prices vary depending on which Amazon S3 Region I choose?;AWS charges less where our costs are less. For example, our costs are lower in the US East (Northern Virginia) Region than in the US West (Northern California) Region. /s3/faqs/;How am I charged for using Versioning?;Normal Amazon S3 rates apply for every version of an object stored or requested. 
For example, let’s look at the following scenario to illustrate storage costs when utilizing Versioning (let’s assume the current month is 31 days long): /s3/faqs/;How am I charged for accessing Amazon S3 through the AWS Management Console?;Normal Amazon S3 pricing applies when accessing the service through the AWS Management Console. To provide an optimized experience, the AWS Management Console may proactively execute requests. Also, some interactive operations result in more than one request to the service. /s3/faqs/;How am I charged if my Amazon S3 buckets are accessed from another AWS account?;Normal Amazon S3 pricing applies when your storage is accessed by another AWS Account. Alternatively, you may choose to configure your bucket as a Requester Pays bucket, in which case the requester will pay the cost of requests and downloads of your Amazon S3 data. /s3/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. /s3/faqs/;What is IPv6?;Every server and device connected to the internet must have a unique address. Internet Protocol Version 4 (IPv4) was the original 32-bit addressing scheme. However, the continued growth of the internet means that all available IPv4 addresses will be utilized over time. Internet Protocol Version 6 (IPv6) is an addressing mechanism designed to overcome the global address limitation on IPv4. /s3/faqs/;How do I get started with IPv6 on Amazon S3?;You can get started by pointing your application to Amazon S3’s “dual-stack” endpoint, which supports access over both IPv4 and IPv6. In most cases, no further configuration is required for access over IPv6, because most network clients prefer IPv6 addresses by default. Applications that are impacted by using IPv6 can switch back to the standard IPv4-only endpoints at any time. IPv6 with Amazon S3 is supported in all commercial AWS Regions, including AWS GovCloud (US) Regions, the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. /s3/faqs/;What are Amazon S3 Event Notifications?;You can use the Amazon S3 Event Notifications feature to receive notifications when certain events happen in your S3 bucket, such as PUT, POST, COPY, and DELETE events. You can publish notifications to Amazon EventBridge, Amazon SNS, Amazon SQS, or directly to AWS Lambda. /s3/faqs/;What can I do with Amazon S3 Event Notifications?;"Amazon S3 Event Notifications let you run workflows, send alerts, or perform other actions in response to changes in your objects stored in S3. You can use S3 Event Notifications to set up triggers to perform actions including transcoding media files when they are uploaded, processing data files when they become available, and synchronizing S3 objects with other data stores. You can also set up event notifications based on object name prefixes and suffixes. For example, you can choose to receive notifications on object names that start with “images/.""" /s3/faqs/;What is included in Amazon S3 Event Notifications?;For a detailed description of the information included in Amazon S3 Event Notification messages, refer to the configuring Amazon S3 Event Notifications documentation. 
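Example (illustrative only): the event notification setup described in the entries above can be expressed with boto3 roughly as follows. The bucket name, SQS queue ARN, and "images/" prefix are hypothetical placeholders, and the sketch assumes the SQS queue policy already permits S3 to send messages to it.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and queue; adjust names and ARNs to your environment.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:111122223333:example-queue",
                "Events": ["s3:ObjectCreated:*"],
                # Only notify for object keys that start with the "images/" prefix.
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "images/"}
                        ]
                    }
                },
            }
        ]
    },
)

The same configuration shape accepts TopicConfigurations (SNS) or LambdaFunctionConfigurations if you would rather publish to Amazon SNS or invoke AWS Lambda directly.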
/s3/faqs/;How do I set up Amazon S3 Event Notifications?;For a detailed description of how to configure event notifications, refer to the configuring Amazon S3 Event Notifications documentation. You can learn more about AWS messaging services in the Amazon SNS documentation and the Amazon SQS documentation. /s3/faqs/;What does it cost to use Amazon S3 Event Notifications?;There are no additional charges for using Amazon S3 for event notifications. You pay only for use of Amazon SNS or Amazon SQS to deliver event notifications, or for the cost of running an AWS Lambda function. Visit the Amazon SNS, Amazon SQS, or AWS Lambda pricing pages to view the pricing details for these services. /s3/faqs/;What is S3 Transfer Acceleration?;Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. S3 Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path. /s3/faqs/;How do I get started with S3 Transfer Acceleration?;To get started with S3 Transfer Acceleration, enable it on an S3 bucket using the Amazon S3 console, the Amazon S3 API, or the AWS CLI. After S3 Transfer Acceleration is enabled, you can point your Amazon S3 PUT and GET requests to the s3-accelerate endpoint domain name. Your data transfer application must use one of the following two types of endpoints to access the bucket for faster data transfer: <bucket-name>.s3-accelerate.amazonaws.com or <bucket-name>.s3-accelerate.dualstack.amazonaws.com for the “dual-stack” endpoint. If you want to use standard data transfer, you can continue to use the regular endpoints. /s3/faqs/;How fast is S3 Transfer Acceleration?;S3 Transfer Acceleration helps you fully use your bandwidth, minimize the effect of distance on throughput, and is designed to ensure consistently fast data transfer to Amazon S3 regardless of your client’s location. The amount of acceleration primarily depends on your available bandwidth, the distance between the source and destination, and packet loss rates on the network path. Generally, you will see more acceleration when the source is farther from the destination, when there is more available bandwidth, and/or when the object size is bigger. /s3/faqs/;Who should use S3 Transfer Acceleration?;S3 Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. If you are uploading to a centralized bucket from geographically dispersed locations or if you regularly transfer GBs or TBs of data across continents, you may save hours or days of data transfer time with S3 Transfer Acceleration. /s3/faqs/;How secure is S3 Transfer Acceleration?;S3 Transfer Acceleration provides the same security as regular transfers to Amazon S3. All Amazon S3 security features, such as access restriction based on a client’s IP address, are supported as well. S3 Transfer Acceleration communicates with clients over standard TCP and does not require firewall changes. No data is ever saved at AWS Edge locations. /s3/faqs/;How should I choose between S3 Transfer Acceleration and Amazon CloudFront’s PUT/POST?;S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. 
If you have objects that are smaller than 1 GB or if the data set is less than 1 GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance. /s3/faqs/;How should I choose between S3 Transfer Acceleration and AWS Snow Family (Snowball, Snowball Edge, and Snowmobile)?;The AWS Snow Family is ideal for customers moving large batches of data at once. The AWS Snowball has a typical 5—7 days turnaround time. As a rule of thumb, S3 Transfer Acceleration over a fully-utilized 1 Gbps line can transfer up to 75 TBs in the same time period. In general, if it will take more than a week to transfer over the internet, or there are recurring transfer jobs and there is more than 25Mbps of available bandwidth, S3 Transfer Acceleration is a good option. Another option is to use both: perform initial heavy lift moves with an AWS Snowball (or series of AWS Snowballs) and then transfer incremental ongoing changes with S3 Transfer Acceleration. /s3/faqs/;Can S3 Transfer Acceleration complement AWS Direct Connect?;AWS Direct Connect is a good choice for customers who have a private networking requirement or who have access to AWS Direct Connect exchanges. S3 Transfer Acceleration is best for submitting data from distributed client locations over the public internet, or where variable network conditions make throughput poor. Some AWS Direct Connect customers use S3 Transfer Acceleration to help with remote office transfers where they may suffer from poor internet performance. /s3/faqs/;Can S3 Transfer Acceleration complement AWS Storage Gateway or a third-party gateway?;You can benefit from configuring the bucket destination in your third-party gateway to use an S3 Transfer Acceleration endpoint domain. /s3/faqs/;Can S3 Transfer Acceleration complement third-party integrated software?;Yes. Software packages that connect directly into Amazon S3 can take advantage of S3 Transfer Acceleration when they send their jobs to Amazon S3. /s3/faqs/;Is S3 Transfer Acceleration HIPAA eligible?;Yes, AWS has expanded its HIPAA compliance program to include S3 Transfer Acceleration as a HIPAA eligible service. If you have an executed Business Associate Agreement (BAA) with AWS, you can use S3 Transfer Acceleration to make fast, easy, and secure transfers of files, including protected health information (PHI) over long distances between your client and your Amazon S3 bucket. /s3/faqs/;How can I control access to my data stored on Amazon S3?;Customers can use a number of mechanisms for controlling access to Amazon S3 resources, including AWS Identity and Access Management (IAM) policies, bucket policies, access point policies, access control lists (ACLs), Query String Authentication, Amazon Virtual Private Cloud (Amazon VPC) endpoint policies, service control policies (SCPs) in AWS Organizations, and Amazon S3 Block Public Access. /s3/faqs/;Does Amazon S3 support data access auditing?;Yes, customers can optionally configure an Amazon S3 bucket to create access log records for all requests made against it. Alternatively, customers who need to capture IAM/user identity information in their logs can configure AWS CloudTrail Data Events. /s3/faqs/;What options do I have for encrypting data stored on Amazon S3?;Amazon S3 encrypts all new data uploads to any bucket. Amazon S3 applies S3-managed server-side encryption (SSE-S3) as the base level of encryption to all object uploads (as of January 5, 2023). 
SSE-S3 provides a fully-managed solution where Amazon handles key management and key protection using multiple layers of security. You should continue to use SSE-S3 if you prefer to have Amazon manage your keys. Additionally, you can choose to encrypt data using SSE-C, SSE-KMS, or a client library such as the Amazon S3 Encryption Client. Each option allows you to store sensitive data encrypted at rest in Amazon S3. /s3/faqs/;Can I comply with European data privacy regulations using Amazon S3?;Customers can choose to store all data in Europe by using the Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Europe (London), or Europe (Zurich) Region. You can also use Amazon S3 on Outposts to keep all of your data on premises on the AWS Outpost, and you may choose to transfer data between AWS Outposts or to an AWS Region. It is your responsibility to ensure that you comply with European privacy laws. View the AWS General Data Protection Regulation (GDPR) Center and AWS Data Privacy Center for more information. If you have more specific location requirements or other data privacy regulations that require you to keep data in a location where there is not an AWS Region, you can use S3 on Outposts. /s3/faqs/;What is an Amazon VPC Endpoint for Amazon S3?;An Amazon VPC Endpoint for Amazon S3 is a logical entity within a VPC that allows connectivity to S3 over the AWS global network. There are two types of VPC endpoints for S3: gateway VPC endpoints and interface VPC endpoints. Gateway endpoints are a gateway that you specify in your route table to access S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IPs to route requests to S3 from within your VPC, on-premises, or from a different AWS Region. For more information, visit the AWS PrivateLink for Amazon S3 documentation. /s3/faqs/;Can I allow a specific Amazon VPC Endpoint access to my Amazon S3 bucket?;You can limit access to your bucket from a specific Amazon VPC Endpoint or a set of endpoints using Amazon S3 bucket policies. S3 bucket policies now support a condition, aws:sourceVpce, that you can use to restrict access. For more details and example policies, read the gateway endpoints for S3 documentation. /s3/faqs/;What is AWS PrivateLink for Amazon S3?;AWS PrivateLink for S3 provides private connectivity between Amazon S3 and on-premises environments. You can provision interface VPC endpoints for S3 in your VPC to connect your on-premises applications directly to S3 over AWS Direct Connect or AWS VPN. You no longer need to use public IPs, change firewall rules, or configure an internet gateway to access S3 from on-premises. To learn more, visit the AWS PrivateLink for S3 documentation. /s3/faqs/;How do I get started with interface VPC endpoints for S3?;You can create an interface VPC endpoint using the AWS VPC Management Console, AWS Command Line Interface (AWS CLI), AWS SDK, or API. To learn more, visit the documentation. /s3/faqs/;When should I choose gateway VPC endpoints versus AWS PrivateLink-based interface VPC endpoints?;AWS recommends that you use interface VPC endpoints to access S3 from on-premises or from a VPC in another AWS Region. For resources accessing S3 from a VPC in the same AWS Region as S3, we recommend using gateway VPC endpoints as they are not billed. To learn more, visit the documentation. /s3/faqs/;Can I use both Interface Endpoints and Gateway Endpoints for S3 in the same VPC?;Yes. 
If you have an existing gateway VPC endpoint, create an interface VPC endpoint in your VPC and update your client applications with the endpoint-specific names of the interface endpoint. For example, if the VPC endpoint ID of your interface endpoint is vpce-0fe5b17a0707d6abc-29p5708s in the us-east-1 Region, then your endpoint-specific DNS name will be vpce-0fe5b17a0707d6abc-29p5708s.s3.us-east-1.vpce.amazonaws.com. In this case, only the requests to the VPC endpoint-specific names will route through interface VPC endpoints to S3, while all other requests would continue to route through the gateway VPC endpoint. To learn more, visit the documentation. /s3/faqs/;What is Amazon Macie and how can I use it to secure my data?;Amazon Macie is an AI-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine learning to recognize sensitive data such as personally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility into where this data is stored and how it is being used in your organization. Amazon Macie continuously monitors data access activity for anomalies, and delivers alerts when it detects risk of unauthorized access or inadvertent data leaks. /s3/faqs/;What is IAM Access Analyzer for Amazon S3 and how does it work?;Access Analyzer for S3 is a feature that helps you simplify permissions management as you set, verify, and refine policies for your S3 buckets and access points. Access Analyzer for S3 monitors your existing access policies to verify that they provide only the required access to your S3 resources. Access Analyzer for S3 evaluates your bucket access policies and helps you discover and swiftly remediate buckets with access that is not required. /s3/faqs/;What is Amazon S3 Access Points?;Today, customers manage access to their S3 buckets using a single bucket policy that controls access for hundreds of applications with different permission levels. Amazon S3 Access Points simplify this model: you can create many access points per bucket, each with its own name and an access policy tailored to a specific application or group of users. /s3/faqs/;Why should I use an access point?;S3 Access Points simplify how you manage data access to your shared datasets on S3. You no longer have to manage a single, complex bucket policy with hundreds of different permission rules that need to be written, read, tracked, and audited. With S3 Access Points, you can create access points or delegate permissions to trusted accounts to create cross-account access points on your bucket. This permits access to shared data sets with policies tailored to the specific application. /s3/faqs/;How do S3 Access Points work?;Each S3 Access Point is configured with an access policy specific to a use case or application, and a bucket can have thousands of access points. For example, you can create an access point for your S3 bucket that grants access for groups of users or applications for your data lake. An Access Point can support a single user or application, or groups of users or applications within and across accounts, allowing separate management of each access point. /s3/faqs/;Is there a quota on how many access points I can create?;By default, you can create 10,000 access points per Region per account on buckets in your account and cross-account. Unlike S3 buckets, there is no hard limit on the number of access points per AWS account. Visit AWS Service Quotas to request an increase in this quota. /s3/faqs/;When using an access point, how are requests authorized?;S3 access points have their own IAM access point policy. 
You write access point policies like you would a bucket policy, using the access point ARN as the resource. Access point policies can grant or restrict access to the S3 data requested through the access point. Amazon S3 evaluates all the relevant policies, including those on the user, bucket, access point, VPC Endpoint, and service control policies as well as Access Control Lists, to decide whether to authorize the request. /s3/faqs/;How do I write access point policies?;You can write an access point policy just like a bucket policy, using IAM rules to govern permissions and the access point ARN in the policy document. /s3/faqs/;How is restricting access to specific VPCs using network origin controls on access points different from restricting access to VPCs using the bucket policy?;You can continue to use bucket policies to limit bucket access to specified VPCs. Access points provide an easier, auditable way to lock down all or a subset of data in a shared data set to VPC-only traffic for all applications in your organization using API controls. You can use an AWS Organizations Service Control Policy (SCP) to mandate that any access point created in your organization set the “network origin control” API parameter value to “vpc”. Then, any new access point created automatically restricts data access to VPC-only traffic. No additional access policy is required to make sure that data requests are processed only from specified VPCs. /s3/faqs/;Can I enforce a “No internet data access” policy for all access points in my organization?;Yes. To enforce a “No internet data access” policy for access points in your organization, you would want to make sure all access points enforce VPC-only access. To do so, you will write an AWS SCP that only supports the value “vpc” for the “network origin control” parameter in the create_access_point() API. Any internet-facing access points that you created previously can then be removed. You will also need to modify the bucket policy in each of your buckets to further restrict internet access directly to your bucket through the bucket hostname. Since other AWS services may be directly accessing your bucket, make sure you set up access to allow the AWS services you want by modifying the policy to permit these AWS services. Refer to the S3 documentation for examples of how to do this. /s3/faqs/;Can I completely disable direct access to a bucket using the bucket hostname?;Not currently, but you can attach a bucket policy that rejects requests not made using an access point. Refer to the S3 documentation for more details. /s3/faqs/;Can I replace or remove an access point from a bucket?;Yes. When you remove an access point, any access to the associated bucket through other access points, and through the bucket hostname, will not be disrupted. /s3/faqs/;What is the cost of Amazon S3 Access Points?;There is no additional charge for access points or buckets that use access points. Usual Amazon S3 request rates apply. /s3/faqs/;How do I get started with S3 Access Points?;You can start creating S3 Access Points on new buckets as well as existing buckets through the AWS Management Console, the AWS Command Line Interface (CLI), the Application Programming Interface (API), and the AWS Software Development Kit (SDK) client. To learn more about S3 Access Points, visit the user guide. 
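Example (illustrative only): a minimal boto3 sketch of the access point workflow described in the entries above. The account ID, bucket, VPC ID, role, and access point name are hypothetical, and the policy shown is just one possible shape; treat it as a sketch rather than a prescribed configuration.

import json
import boto3

s3control = boto3.client("s3control")

account_id = "111122223333"          # hypothetical account
bucket = "example-shared-dataset"    # hypothetical bucket

# Create an access point whose network origin is restricted to a VPC.
s3control.create_access_point(
    AccountId=account_id,
    Name="analytics-app",
    Bucket=bucket,
    VpcConfiguration={"VpcId": "vpc-0abc1234def567890"},
)

# Attach a policy to the access point; the resource is the access point ARN.
access_point_arn = f"arn:aws:s3:us-east-1:{account_id}:accesspoint/analytics-app"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/AnalyticsRole"},
        "Action": ["s3:GetObject"],
        "Resource": f"{access_point_arn}/object/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="analytics-app", Policy=json.dumps(policy)
)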
/s3/faqs/;How durable is Amazon S3?;Amazon S3 Standard, S3 Standard–IA, S3 Intelligent-Tiering, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive are all designed to provide 99.999999999% (11 9's) of data durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000 years. S3 on Outposts is designed to durably and redundantly store data across multiple devices and servers on your Outpost. In addition, Amazon S3 Standard, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive are all designed to sustain data in the event of an entire S3 Availability Zone loss. /s3/faqs/;How is Amazon S3 designed to achieve 99.999999999% durability?;Amazon S3 Standard, S3 Standard-IA, S3 Intelligent-Tiering, and S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive storage classes redundantly store your objects on multiple devices across a minimum of three Availability Zones (AZs) in an Amazon S3 Region before returning SUCCESS. The S3 One Zone-IA storage class stores data redundantly across multiple devices within a single AZ. These services are designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy, and they also regularly verify the integrity of your data using checksums. /s3/faqs/;What checksums does Amazon S3 support for data integrity checking?;Amazon S3 uses a combination of Content-MD5 checksums, secure hash algorithms (SHAs), and cyclic redundancy checks (CRCs) to verify data integrity. Amazon S3 performs these checksums on data at rest and repairs any disparity using redundant data. In addition, S3 calculates checksums on all network traffic to detect alterations of data packets when storing or retrieving data. You can choose from four supported checksum algorithms for data integrity checking on your upload and download requests. You can choose a SHA-1, SHA-256, CRC32, or CRC32C checksum algorithm, depending on your application needs. You can automatically calculate and verify checksums as you store or retrieve data from S3, and can access the checksum information at any time using the GetObjectAttributes S3 API or an S3 Inventory report. Calculating a checksum as you stream data into S3 saves you time as you’re able to both verify and transmit your data in a single pass, instead of as two sequential operations. Using checksums for data validation is a best practice for data durability, and these capabilities increase the performance and reduce the cost to do so. /s3/faqs/;Why should I use Versioning?;Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use Versioning for data retention and archiving. /s3/faqs/;How do I start using Versioning?;You can start using Versioning by enabling a setting on your Amazon S3 bucket. For more information on how to enable Versioning, refer to the Amazon S3 documentation. 
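Example (illustrative only): a minimal boto3 sketch of the checksum workflow described in the entries above, uploading an object with an additional SHA-256 checksum and reading the stored checksum back later. The bucket name and key are hypothetical.

import boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "reports/2023-q1.csv"   # hypothetical names

# Upload with an additional SHA-256 checksum; S3 validates the object on receipt.
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"col1,col2\n1,2\n",
    ChecksumAlgorithm="SHA256",
)

# Later, read back the stored checksum and size without downloading the object.
attrs = s3.get_object_attributes(
    Bucket=bucket,
    Key=key,
    ObjectAttributes=["Checksum", "ObjectSize"],
)
print(attrs["Checksum"]["ChecksumSHA256"], attrs["ObjectSize"])

CRC32, CRC32C, and SHA-1 can be selected the same way by changing the ChecksumAlgorithm value.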
/s3/faqs/;How does Versioning protect me from accidental deletion of my objects?;When a user performs a DELETE operation on an object, subsequent simple (un-versioned) requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an Amazon S3 bucket can permanently delete a version. You can set Lifecycle rules to manage the lifetime and the cost of storing multiple versions of your objects. /s3/faqs/;Can I set up a trash, recycle bin, or rollback window on my Amazon S3 objects to recover from deletes and overwrites?;You can use Amazon S3 Lifecycle rules along with S3 Versioning to implement a rollback window for your S3 objects. For example, with your versioning-enabled bucket, you can set up a rule that archives all of your previous versions to the lower-cost S3 Glacier Flexible Retrieval storage class and deletes them after 100 days, giving you a 100-day window to roll back any changes on your data while lowering your storage costs. Additionally, you can save costs by deleting old (noncurrent) versions of an object after five days and when there are at least two newer versions of the object. You can change the number of days or the number of newer versions based on your cost optimization needs. This allows you to retain additional versions of your objects when needed, but saves you cost by transitioning or removing them after a period of time. /s3/faqs/;How can I ensure maximum protection of my preserved versions?;Versioning’s Multi-Factor Authentication (MFA) Delete capability can be used to provide an additional layer of security. By default, all requests to your Amazon S3 bucket require your AWS account credentials. If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession. To learn more about enabling Versioning with MFA Delete, including how to purchase and activate an authentication device, refer to the Amazon S3 documentation. /s3/faqs/;How am I charged for using Versioning?;Normal Amazon S3 rates apply for every version of an object stored or requested. For example, let’s look at the following scenario to illustrate storage costs when utilizing Versioning (let’s assume the current month is 31 days long): /s3/faqs/;What is S3 Intelligent-Tiering?;S3 Intelligent-Tiering is the first cloud storage that automatically reduces your storage costs on a granular object level by automatically moving data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees, or operational overhead. S3 Intelligent-Tiering delivers milliseconds latency and high throughput performance for frequently, infrequently, and rarely accessed data in the Frequent, Infrequent, and Archive Instant Access tiers. For a small monthly object monitoring and automation charge, S3 Intelligent-Tiering monitors the access patterns and moves the objects automatically from one tier to another. There are no retrieval charges in S3 Intelligent-Tiering, so you won’t see unexpected increases in storage bills when access patterns change. 
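Example (illustrative only): returning to the versioning rollback-window pattern described above, here is a minimal boto3 sketch that enables versioning and then uses a lifecycle rule to archive noncurrent versions and expire them later. The bucket name and the day values are hypothetical placeholders; pick values that match your own rollback window.

import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"   # hypothetical

# Versioning must be enabled for noncurrent-version rules to have any effect.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Move noncurrent versions to S3 Glacier Flexible Retrieval (storage class
# GLACIER) shortly after they are superseded, and expire them after 100 days,
# giving roughly a 100-day window to roll back changes.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "rollback-window",
                "Filter": {"Prefix": ""},        # apply to the whole bucket
                "Status": "Enabled",
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 5, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 100},
            }
        ]
    },
)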
/s3/faqs/;How does S3 Intelligent-Tiering work?;The Amazon S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective access tier when access patterns change. For a low monthly object monitoring and automation charge, S3 Intelligent-Tiering monitors access patterns and automatically moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier to save up to 40% on storage costs. After 90 consecutive days of no access, objects are moved to the Archive Instant Access tier to save up to 68% on storage costs. There is no impact on performance and there are no retrieval charges in S3 Intelligent-Tiering. If an object in the Infrequent Access tier or Archive Instant Access tier is accessed later, it is automatically moved back to the Frequent Access tier. /s3/faqs/;Why would I choose to use S3 Intelligent-Tiering?;You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, machine learning, new applications, and user-generated content. S3 Intelligent-Tiering is the first cloud storage that automatically reduces your storage costs on a granular object level by automatically moving data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees, or operational overhead. If you have data with unknown or changing access patterns, including data lakes, data analytics, and new applications, we recommend using S3 Intelligent-Tiering. If you have data that does not require immediate retrieval, we recommend activating the Deep Archive Access tier, where you pay as little as $1 per TB per month for data that may become rarely accessed over long periods of time. S3 Intelligent-Tiering is for data with unknown or changing access patterns. There are no retrieval fees when using the S3 Intelligent-Tiering storage class. /s3/faqs/;What performance does S3 Intelligent-Tiering offer?;S3 Intelligent-Tiering automatically optimizes your storage costs without impacting performance. The S3 Intelligent-Tiering Frequent, Infrequent, and Archive Instant Access tiers provide milliseconds latency and high throughput performance. /s3/faqs/;How durable and available is S3 Intelligent-Tiering?;S3 Intelligent-Tiering is designed for the same 99.999999999% durability as the S3 Standard storage class. S3 Intelligent-Tiering is designed for 99.9% availability, and carries a service level agreement providing service credits if availability is less than our service commitment in any billing cycle. /s3/faqs/;How do I get my data into S3 Intelligent-Tiering?;There are two ways to get data into S3 Intelligent-Tiering. You can directly PUT into S3 Intelligent-Tiering by specifying INTELLIGENT_TIERING in the x-amz-storage-class header, or set lifecycle policies to transition objects from S3 Standard or S3 Standard-IA to S3 Intelligent-Tiering. /s3/faqs/;How am I charged for S3 Intelligent-Tiering?;S3 Intelligent-Tiering charges you for monthly storage, requests, and data transfer, plus a small monthly per-object monitoring and automation charge. The S3 Intelligent-Tiering storage class automatically stores objects in three access tiers: a Frequent Access tier priced at S3 Standard storage rates, an Infrequent Access tier priced at S3 Standard-Infrequent Access storage rates, and an Archive Instant Access tier priced at the S3 Glacier Instant Retrieval storage rates. 
S3 Intelligent-Tiering also has two optional archive tiers designed for asynchronous access, an Archive Access tier priced at S3 Glacier Flexible Retrieval storage rates, and a Deep Archive Access tier priced at S3 Glacier Deep Archive storage rates. For a small monitoring and automation fee, S3 Intelligent-Tiering monitors access patterns and automatically moves objects through low latency and high throughput access tiers, as well as two opt in asynchronous archive access tiers where customers get the lowest storage costs in the cloud for data that can be accessed asynchronously. There is no minimum billable object size in S3 Intelligent-Tiering, but objects smaller than 128KB are not eligible for auto-tiering. These small objects will not be monitored and will always be charged at the Frequent Access tier rates, with no monitoring and automation charge. For each object archived to the Archive Access tier or Deep Archive Access tier in S3 Intelligent-Tiering, Amazon S3 uses 8 KB of storage for the name of the object and other metadata (billed at S3 Standard storage rates) and 32 KB of storage for index and related metadata (billed at S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage rates). /s3/faqs/;Is there a charge to retrieve data from S3 Intelligent-Tiering?;No. There are no retrieval fees for S3 Intelligent-Tiering. S3 Intelligent-Tiering monitors the access patterns of your data and if you access an object in the Infrequent Access, Archive Instant Access, or the asynchronous archive tiers, S3 Intelligent-Tiering automatically moves that object to the Frequent Access tier. /s3/faqs/;How do I activate S3 Intelligent-Tiering archive access tiers?;You can activate the Archive Access tier and Deep Archive Access tier by creating a bucket, prefix, or object tag level configuration using the Amazon S3 API, CLI, or S3 management console. You should only activate one or both of the archive access tiers if your objects can be accessed asynchronously by your application. /s3/faqs/;How do I access an object from the Archive Access or Deep Archive Access tiers in the S3 Intelligent-Tiering storage class?;To access an object in the Archive or Deep Archive Access tiers, you need to issue a Restore request and the object will begin moving back to the Frequent Access tier, all within the S3 Intelligent-Tiering storage class. Objects in the Archive Access Tier are moved to the Frequent Access tier in 3-5 hours, objects in the Deep Archive Access tier are moved to the Frequent Access tier within 12 hours. Once the object is in the Frequent Access tier, you can issue a GET request to retrieve the object. /s3/faqs/;How do I know in which S3 Intelligent-Tiering access tier my objects are stored in?;You can use Amazon S3 Inventory to report the access tier of objects stored in the S3 Intelligent-Tiering storage class. Amazon S3 Inventory provides CSV, ORC, or Parquet output files that list your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or a shared prefix. You can also make a HEAD request on your objects to report the S3 Intelligent-Tiering archive access tiers. /s3/faqs/;Can I lifecycle objects from S3 Intelligent-Tiering to another storage class?;Yes. You can lifecycle objects from S3 Intelligent-Tiering Frequent Access, Infrequent, and Archive Instant Access tiers to S3 One-Zone Infrequent Access, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. 
In addition, you can lifecycle objects from the S3 Intelligent-Tiering optional archive access tiers to S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive, and from the S3 Intelligent-Tiering Deep Archive Access tier to S3 Glacier Deep Archive. /s3/faqs/;Is there a minimum duration for S3 Intelligent-Tiering?;No. The S3 Intelligent-Tiering storage class has no minimum storage duration. /s3/faqs/;Is there a minimum billable object size for S3 Intelligent-Tiering?;No. The S3 Intelligent-Tiering storage class has no minimum billable object size, but objects smaller than 128KB are not eligible for auto-tiering. These smaller objects will always be charged at the Frequent Access tier rates, with no monitoring and automation charge. For each object archived to the opt-in Archive Access tier or Deep Archive Access tier in S3 Intelligent-Tiering, Amazon S3 uses 8 KB of storage for the name of the object and other metadata (billed at S3 Standard storage rates) and 32 KB of storage for index and related metadata (billed at S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage rates). For more details, visit the Amazon S3 pricing page. /s3/faqs/;What is S3 Standard?;Amazon S3 Standard delivers durable storage with millisecond access latency and high throughput performance for frequently accessed data, typically more than once per month. S3 Standard is designed for performance-sensitive use cases, such as data lakes, cloud-native applications, dynamic websites, content distribution, mobile and gaming applications, analytics, and machine learning models. S3 Standard is designed for 99.99% data availability and durability of 99.999999999% of objects across multiple Availability Zones in a given year. You can use S3 Lifecycle policies to control exactly when data is transitioned between S3 Standard and lower-cost storage classes without any application changes. /s3/faqs/;Why would I choose to use S3 Standard?;S3 Standard is ideal for your most frequently accessed or modified data that requires access in milliseconds and high throughput performance. S3 Standard is ideal for data that is read or written very often, as there are no retrieval charges. S3 Standard is optimized for a wide variety of use cases, including data lakes, cloud native applications, dynamic websites, content distribution, mobile and gaming applications, and analytics. /s3/faqs/;What is S3 Standard-Infrequent Access?;Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is an Amazon S3 storage class for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, throughput, and low latency of the Amazon S3 Standard storage class, with a low per-GB storage price and per-GB retrieval charge. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery. The S3 Standard-IA storage class is set at the object level and can exist in the same bucket as the S3 Standard or S3 One Zone-IA storage classes, allowing you to use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes. /s3/faqs/;What performance does S3 Standard-IA offer?;S3 Standard-IA provides the same milliseconds latency and high throughput performance as the S3 Standard storage class. 
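Example (illustrative only): a minimal boto3 sketch of the S3 Intelligent-Tiering workflow described in the entries above, writing an object directly into the storage class and opting a prefix into the asynchronous archive tiers. The bucket name, key, prefix, and day thresholds are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"   # hypothetical

# Write an object directly into S3 Intelligent-Tiering.
s3.put_object(
    Bucket=bucket,
    Key="logs/app.log",
    Body=b"example payload",
    StorageClass="INTELLIGENT_TIERING",
)

# Opt objects under a prefix into the asynchronous archive tiers.
# Day values are illustrative; choose thresholds that match how long your
# application can tolerate objects being archived before access.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket=bucket,
    Id="archive-logs",
    IntelligentTieringConfiguration={
        "Id": "archive-logs",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)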
/s3/faqs/;What charges will I incur if I change the storage class of an object from S3 Standard-IA to S3 Standard with a COPY request?;You will incur charges for an S3 Standard (destination storage class) COPY request and an S3 Standard-IA (source storage class) data retrieval. For more information, visit the Amazon S3 pricing page. /s3/faqs/;Is there a minimum storage duration charge for S3 Standard-IA?;S3 Standard-IA is designed for long-lived, infrequently accessed data that is retained for months or years. Data that is deleted from S3 Standard-IA within 30 days will be charged for a full 30 days. See the Amazon S3 pricing page for information about S3 Standard-IA pricing. /s3/faqs/;Is there a minimum object storage charge for S3 Standard-IA?;S3 Standard-IA is designed for larger objects and has a minimum object storage charge of 128KB. Objects smaller than 128KB in size will incur storage charges as if the object were 128KB. For example, a 6KB object in S3 Standard-IA will incur S3 Standard-IA storage charges for 6KB and an additional minimum object size charge equivalent to 122KB at the S3 Standard-IA storage price. See the Amazon S3 pricing page for information about S3 Standard-IA pricing. /s3/faqs/;Can I tier objects from S3 Standard-IA to S3 One Zone-IA or to the S3 Glacier Flexible Retrieval storage class?;Yes. In addition to using Lifecycle policies to migrate objects from S3 Standard to S3 Standard-IA, you can also set up Lifecycle policies to tier objects from S3 Standard-IA to S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and the S3 Glacier Deep Archive storage class. /s3/faqs/;What is the S3 One Zone-IA storage class?;S3 One Zone-IA is an Amazon S3 storage class that customers can choose to store objects in a single Availability Zone. S3 One Zone-IA storage redundantly stores data within that single Availability Zone to deliver storage at 20% less cost than geographically redundant S3 Standard-IA storage, which stores data redundantly across multiple geographically separate Availability Zones. /s3/faqs/;What performance does S3 One Zone-IA storage offer?;S3 One Zone-IA storage class offers the same latency and throughput performance as the S3 Standard and S3 Standard-Infrequent Access storage classes. /s3/faqs/;How durable is the S3 One Zone-IA storage class?;S3 One Zone-IA storage class is designed for 99.999999999% durability within an Availability Zone. However, data in the S3 One Zone-IA storage class is not resilient to the loss of availability or physical loss of an Availability Zone. In contrast, S3 Standard, S3 Intelligent-Tiering, S3 Standard-Infrequent Access, and the S3 Glacier storage classes are designed to withstand loss of availability or the destruction of an Availability Zone. S3 One Zone-IA can deliver the same or better durability and availability than most modern, physical data centers, while providing the added benefit of elasticity of storage and the Amazon S3 feature set. /s3/faqs/;Is an S3 One Zone-IA “Zone” the same thing as an AWS Availability Zone?;Yes. Each AWS Region is a separate geographic area. Each Region has multiple, isolated locations known as Availability Zones. The Amazon S3 One Zone-IA storage class uses an individual AWS Availability Zone within the Region. /s3/faqs/;How much disaster recovery protection do I forgo by using S3 One Zone-IA?;Each Availability Zone uses redundant power and networking. Within an AWS Region, Availability Zones are on different flood plains, earthquake fault zones, and geographically separated for fire protection. 
S3 Standard and S3 Standard-IA storage classes offer protection against these sorts of disasters by storing your data redundantly in multiple Availability Zones. S3 One Zone-IA offers protection against equipment failure within an Availability Zone, but the data is not resilient to the physical loss of the Availability Zone resulting from disasters, such as earthquakes and floods. Using S3 One Zone-IA, S3 Standard, and S3 Standard-IA options, you can choose the storage class that best fits the durability and availability needs of your storage. /s3/faqs/;What is the S3 Glacier Instant Retrieval storage class?;The S3 Glacier Instant Retrieval storage class delivers the lowest cost storage for long-lived data that is rarely accessed and requires milliseconds retrieval. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and milliseconds access as S3 Standard and S3 Standard-IA storage classes. S3 Glacier Instant Retrieval is designed for 99.999999999% (11 9s) of data durability and 99.9% availability by redundantly storing data across a minimum of three physically separated AWS Availability Zones. /s3/faqs/;Why would I choose to use S3 Glacier Instant Retrieval?;S3 Glacier Instant Retrieval is ideal if you have data that is rarely accessed (once a quarter) and requires milliseconds retrieval times. It’s the ideal storage class if you want the same low latency and high throughput performance as S3 Standard-IA, but store data that is accessed less frequently than S3 Standard-IA, with a lower storage price and slightly higher data access costs. /s3/faqs/;How available and durable is S3 Glacier Instant Retrieval?;S3 Glacier Instant Retrieval is designed for 99.999999999% (11 9s) of durability and 99.9% availability, the same as S3 Standard-IA, and carries a service level agreement providing service credits if availability is less than 99% in any billing cycle. /s3/faqs/;What performance does S3 Glacier Instant Retrieval offer?;S3 Glacier Instant Retrieval provides the same milliseconds latency and high throughput performance as the S3 Standard and S3 Standard-IA storage classes. Unlike the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, which are designed for asynchronous access, you do not need to issue a Restore request before accessing an object stored in S3 Glacier Instant Retrieval. /s3/faqs/;How do I get my data into S3 Glacier Instant Retrieval?;There are two ways to get data into S3 Glacier Instant Retrieval. You can directly PUT into S3 Glacier Instant retrieval by specifying GLACIER_IR in the x-amz-storage-class header or set S3 Lifecycle policies to transition objects from S3 Standard or S3 Standard-IA to S3 Glacier Instant Retrieval. /s3/faqs/;Is there a minimum storage duration charge for Amazon S3 Glacier Instant Retrieval?;S3 Glacier Instant Retrieval is designed for long-lived, rarely accessed data that is retained for months or years. Objects that are archived to S3 Glacier Instant Retrieval have a minimum of 90 days of storage, and objects deleted, overwritten, or transitioned before 90 days incur a pro-rated charge equal to the storage charge for the remaining days. View the Amazon S3 pricing page for information about Amazon S3 Glacier Instant Retrieval pricing. /s3/faqs/;Is there a minimum object size charge for Amazon S3 Glacier Instant Retrieval?;S3 Glacier Instant Retrieval is designed for larger objects and has a minimum object storage charge of 128KB. 
Objects smaller than 128KB in size will incur storage charges as if the object were 128KB. For example, a 6KB object in S3 Glacier Instant Retrieval will incur S3 Glacier Instant Retrieval storage charges for 6KB and an additional minimum object size charge equivalent to 122KB at the S3 Glacier Instant Retrieval storage price. View the Amazon S3 pricing page for information about Amazon S3 Glacier Instant Retrieval pricing. /s3/faqs/;How am I charged for S3 Glacier Instant Retrieval?;S3 Glacier Instant Retrieval charges you for monthly storage, requests based on the request type, and data retrievals. The volume of storage billed in a month is based on average storage used throughout the month, measured in gigabyte-months (GB-Month). You are charged for requests based on the request type, such as PUTs, COPYs, and GETs. You also pay a per-GB fee for every gigabyte of data returned to you. /s3/faqs/;What is the S3 Glacier Flexible Retrieval storage class?;The S3 Glacier Flexible Retrieval storage class delivers low-cost storage, up to 10% lower cost than S3 Glacier Instant Retrieval, for archive data that is accessed 1-2 times per year and is retrieved asynchronously, with free bulk retrievals. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, S3 Glacier Flexible Retrieval (formerly S3 Glacier) is the ideal storage class. S3 Glacier Flexible Retrieval delivers the most flexible retrieval options that balance cost with access times ranging from minutes to hours and with free bulk retrievals. It is an ideal solution for backup, disaster recovery, offsite data storage needs, and for when some data needs to be retrieved occasionally in minutes, and you don’t want to worry about costs. S3 Glacier Flexible Retrieval is designed for 99.999999999% (11 9s) of data durability and 99.99% availability by redundantly storing data across multiple physically separated AWS Availability Zones in a given year. /s3/faqs/;Why would I choose to use the S3 Glacier Flexible Retrieval storage class?;For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, S3 Glacier Flexible Retrieval (formerly S3 Glacier) is the ideal storage class. S3 Glacier Flexible Retrieval delivers the most flexible retrieval options that balance cost with access times ranging from minutes to hours and with free bulk retrievals. It is an ideal solution for backup, disaster recovery, offsite data storage needs, and for when some data needs to be retrieved occasionally in minutes, and you don’t want to worry about costs to retrieve the data. /s3/faqs/;How do I get my data into S3 Glacier Flexible Retrieval?;There are two ways to get data into S3 Glacier Flexible Retrieval. You can directly PUT into S3 Glacier Flexible Retrieval by specifying GLACIER in the x-amz-storage-class header. You can also use S3 Lifecycle rules to transition objects from any of the S3 storage classes for active data (S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier Instant Retrieval) to Amazon S3 Glacier Flexible Retrieval based on object age. Use the Amazon S3 Management Console, the AWS SDKs, or the Amazon S3 APIs to directly PUT into Amazon S3 Glacier or define rules for archival. 
Note: S3 Glacier Flexible Retrieval (Formerly S3 Glacier) is also available through the original direct Glacier APIs and through the Amazon S3 Glacier Management Console. For an enhanced experience complete with access to the full S3 feature set including lifecycle management, S3 Replication, S3 Storage Lens, and more, we recommend using S3 APIs and the S3 Management Console to use S3 Glacier features. /s3/faqs/;How can I retrieve my objects that are archived in S3 Glacier Flexible Retrieval and will I be notified when the object is restored?;Objects that are archived in S3 Glacier Flexible Retrieval are accessed asynchronously. To retrieve data stored in S3 Glacier Flexible Retrieval, initiate a retrieval request using the Amazon S3 APIs or the Amazon S3 console. The retrieval request creates a temporary copy of your data in the S3 Standard storage class while leaving the archived data intact in S3 Glacier Flexible Retrieval. You can specify the amount of time in days for which the temporary copy is stored in Amazon S3. You can then access your temporary copy from S3 through an Amazon S3 GET request on the archived object. /s3/faqs/;How long will it take to restore my objects archived in Amazon S3 Glacier Flexible Retrieval?;When processing a retrieval job, Amazon S3 first retrieves the requested data from S3 Glacier Flexible Retrieval, and then creates a temporary copy of the requested data in Amazon S3. This typically takes a few minutes. The access time of your request depends on the retrieval option you choose: Expedited, Standard, or Bulk retrievals. For all but the largest objects (250MB+), data accessed using Expedited retrievals are typically made available within 1-5 minutes. Objects retrieved using Standard retrievals typically complete between 3-5 hours. Bulk retrievals typically complete within 5—12 hours, and are free of charge. For more information about the S3 Glacier Flexible Retrieval options, refer to restoring an archived object in the S3 user guide. /s3/faqs/;How is my storage charge calculated for Amazon S3 objects archived to S3 Glacier Flexible Retrieval?;The volume of storage billed in a month is based on average storage used throughout the month, measured in gigabyte-months (GB-Months). Amazon S3 calculates the object size as the amount of data you stored, plus an additional 32 KB of S3 Glacier data, plus an additional 8 KB of Amazon S3 Standard storage class data. S3 Glacier Flexible Retrieval requires an additional 32 KB of data per object for S3 Glacier’s index and metadata so you can identify and retrieve your data. Amazon S3 requires 8 KB to store and maintain the user-defined name and metadata for objects archived to S3 Glacier Flexible Retrieval. This enables you to get a real-time list of all of your Amazon S3 objects, including those stored using S3 Glacier Flexible Retrieval, using the Amazon S3 LIST API, or the S3 inventory report. /s3/faqs/;Are there minimum storage duration and minimum object storage charges for Amazon S3 Glacier Flexible Retrieval?;Objects archived to S3 Glacier Flexible Retrieval have a minimum of 90 days of storage. If an object is deleted, overwritten, or transitioned before 90 days, a pro-rated charge equal to the storage charge for the remaining days will be incurred. S3 Glacier Flexible Retrieval also requires 40 KB of additional metadata for each archived object. This includes 32 KB of metadata charged at the S3 Glacier Flexible Retrieval rate required to identify and retrieve your data. 
And, an additional 8 KB data charged at the S3 Standard rate which is required to maintain the user-defined name and metadata for objects archived to S3 Glacier Flexible Retrieval. This allows you to get a real-time list of all of your S3 objects using the S3 LIST API or the S3 Inventory report. View the Amazon S3 pricing page for information about Amazon S3 Glacier Flexible Retrieval pricing. /s3/faqs/;How much does it cost to retrieve data from Amazon S3 Glacier Flexible Retrieval?;There are three ways to retrieve data from S3 Glacier Flexible Retrieval: Expedited, Standard, and Bulk Retrievals. Expedited and Standard have a per-GB retrieval fee and per-request fee (i.e., you pay for requests made against your Amazon S3 objects). Bulk Retrievals from S3 Glacier Flexible Retrieval are free. For detailed S3 Glacier pricing by AWS Region, visit the Amazon S3 pricing page. /s3/faqs/;Does Amazon S3 provide capabilities for archiving objects to lower cost storage classes?;The Amazon S3 Glacier storage classes are purpose-built for data archiving, providing you with the highest performance, most retrieval flexibility, and the lowest cost archive storage in the cloud. You can now choose from three archive storage classes optimized for different access patterns and storage duration. For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5—12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval within 12 hours. /s3/faqs/;What is the backend infrastructure supporting the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage class?;We prefer to focus on the customer outcomes of performance, durability, availability, and security. However, this question is often asked by our customers. We use a number of different technologies which allow us to offer the prices we do to our customers. Our services are built using common data storage technologies specifically assembled into purpose-built, cost-optimized systems using AWS-developed software. The S3 Glacier storage classes benefit from our ability to optimize the sequence of inputs and outputs to maximize efficiency accessing the underlying storage. /s3/faqs/;What is the Amazon S3 Glacier Deep Archive storage class?;S3 Glacier Deep Archive is an Amazon S3 storage class that provides secure and durable object storage for long-term retention of data that is accessed once or twice in a year. From just $0.00099 per GB-month (less than one-tenth of one cent, or about $1 per TB-month), S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices significantly lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data off-site. 
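Example (illustrative only): a minimal boto3 sketch of archiving to and restoring from the S3 Glacier storage classes described in the entries above. The bucket name and key are hypothetical; the restore duration and retrieval tier are just example values.

import boto3

s3 = boto3.client("s3")
bucket, key = "example-archive-bucket", "backups/2022.tar"   # hypothetical names

# Archive directly to S3 Glacier Deep Archive. Use "GLACIER" for
# S3 Glacier Flexible Retrieval or "GLACIER_IR" for S3 Glacier Instant Retrieval.
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"example archive payload",
    StorageClass="DEEP_ARCHIVE",
)

# Later, ask S3 to stage a temporary copy for 7 days using the Bulk tier
# (Standard completes within 12 hours, Bulk within 48 hours for Deep Archive);
# the archived copy itself stays in Deep Archive.
s3.restore_object(
    Bucket=bucket,
    Key=key,
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)

# Check restore progress via the Restore header on a HEAD request.
print(s3.head_object(Bucket=bucket, Key=key).get("Restore"))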
/s3/faqs/;What use cases are best suited for the S3 Glacier Deep Archive storage class?;S3 Glacier Deep Archive is an ideal storage class to provide offline protection of your company’s most important data assets, or when long-term data retention is required for corporate policy, contractual, or regulatory compliance requirements. Customers find S3 Glacier Deep Archive to be a compelling choice to protect core intellectual property, financial and medical records, research results, legal documents, seismic exploration studies, and long-term backups, especially in highly regulated industries, such as Financial Services, Healthcare, Oil & Gas, and Public Sectors. In addition, there are organizations, such as media and entertainment companies, that want to keep a backup copy of core intellectual property. Frequently, customers using S3 Glacier Deep Archive can reduce or discontinue the use of on-premises magnetic tape libraries and off-premises tape archival services. /s3/faqs/;How does the S3 Glacier Deep Archive storage class differ from the S3 Glacier Instant Retrieval, and S3 Glacier Flexible Retrieval storage classes?;S3 Glacier Deep Archive expands our data archiving offerings, enabling you to select the optimal storage class based on storage and retrieval costs, and retrieval times. Choose the S3 Glacier Instant Retrieval storage class when you need milliseconds access to low cost archive data. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5-12 hours. S3 Glacier Deep Archive, in contrast, is designed for colder data that is very unlikely to be accessed, but still requires long-term, durable storage. S3 Glacier Deep Archive is up to 75% less expensive than S3 Glacier Flexible Retrieval and provides retrieval within 12 hours using the Standard retrieval speed. You may also reduce retrieval costs by selecting Bulk retrieval, which will return data within 48 hours. /s3/faqs/;How do I get started using S3 Glacier Deep Archive?;The easiest way to store data in S3 Glacier Deep Archive is to use the S3 API to upload data directly. Just specify “S3 Glacier Deep Archive” as the storage class. You can accomplish this using the AWS Management Console, S3 REST API, AWS SDKs, or AWS Command Line Interface. /s3/faqs/;How do you recommend migrating data from my existing tape archives to S3 Glacier Deep Archive?;There are multiple ways to migrate data from existing tape archives to S3 Glacier Deep Archive. You can use the AWS Tape Gateway to integrate with existing backup applications using a virtual tape library (VTL) interface. This interface presents virtual tapes to the backup application. These can be immediately used to store data in Amazon S3, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. /s3/faqs/;How can I retrieve my objects stored in S3 Glacier Deep Archive?;To retrieve data stored in S3 Glacier Deep Archive, initiate a “Restore” request using the Amazon S3 APIs or the Amazon S3 Management Console. The Restore creates a temporary copy of your data in the S3 Standard storage class while leaving the archived data intact in S3 Glacier Deep Archive. You can specify the amount of time in days for which the temporary copy is stored in S3. 
You can then access your temporary copy from S3 through an Amazon S3 GET request on the archived object. /s3/faqs/;How am I charged for using S3 Glacier Deep Archive?;S3 Glacier Deep Archive storage is priced based on the amount of data you store in GBs, the number of PUT/lifecycle transition requests, retrievals in GBs, and number of restore requests. This pricing model is similar to S3 Glacier Flexible Retrieval. See the Amazon S3 pricing page for information about S3 Glacier Deep Archive pricing. /s3/faqs/;How will S3 Glacier Deep Archive usage show up on my AWS bill and in the AWS Cost Management tool?;S3 Glacier Deep Archive usage and cost will show up as an independent service line item on your monthly AWS bill, separate from your Amazon S3 usage and costs. However, if you are using the AWS Cost Management tool, S3 Glacier Deep Archive usage and cost will be included under the Amazon S3 usage and cost in your detailed monthly spend reports, and not broken out as a separate service line item. /s3/faqs/;Are there minimum storage duration and minimum object storage charges for S3 Glacier Deep Archive?;Objects that are archived to S3 Glacier Deep Archive have a minimum of 180 days of storage. If an object is deleted, overwritten, or transitioned before 180 days, a pro-rated charge equal to the storage charge for the remaining days will be incurred. /s3/faqs/;How does S3 Glacier Deep Archive integrate with other AWS Services?;S3 Glacier Deep Archive is integrated with Amazon S3 features, including S3 Object Tagging, S3 Lifecycle policies, S3 Object Lock, and S3 Replication. With S3 storage management features, you can use a single Amazon S3 bucket to store a mixture of S3 Glacier Deep Archive, S3 Standard, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier Flexible Retrieval data. This allows storage administrators to make decisions based on the nature of the data and data access patterns. Customers can use Amazon S3 Lifecycle policies to automatically migrate data to lower-cost storage classes as the data ages, or S3 Cross-Region Replication or Same-Region Replication policies to replicate data to the same or a different region. /s3/faqs/;What is Amazon S3 on Outposts?;Amazon S3 on Outposts delivers object storage in your on-premises environment, using the S3 APIs and capabilities that you use in AWS today. AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility. Using S3 on Outposts, you can securely process and store customer data generated on-premises before moving it to an AWS Region, access data locally for applications that run on-premises, or store data on your Outpost for companies in locations with data residency requirements, or those in regulated industries. To learn more about S3 on Outposts, visit the overview page. /s3/faqs/;What are S3 Object Tags?;S3 Object Tags are key-value pairs applied to S3 objects which can be created, updated or deleted at any time during the lifetime of the object. With these, you have the ability to create Identity and Access Management (IAM) policies, set up S3 Lifecycle policies, and customize storage metrics. These object-level tags can then manage transitions between storage classes and expire objects in the background. You can add tags to new objects when you upload them or you can add them to existing objects. 
Up to ten tags can be added to each S3 object and you can use either the AWS Management Console, the REST API, the AWS CLI, or the AWS SDKs to add object tags. /s3/faqs/;Why should I use object tags?;Object tags are a tool you can use to enable simple management of your S3 storage. With the ability to create, update, and delete tags at any time during the lifetime of your object, your storage can adapt to the needs of your business. These tags allow you to control access to objects tagged with specific key-value pairs, allowing you to further secure confidential data for only a select group or user. Object tags can also be used to label objects that belong to a specific project or business unit, which could be used in conjunction with S3 Lifecycle policies to manage transitions to other storage classes (S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive) or with S3 Replication to selectively replicate data between AWS Regions. /s3/faqs/;How much do object tags cost?;Object tags are priced based on the quantity of tags and a request cost for adding tags. The requests associated with adding and updating Object Tags are priced the same as existing request prices. See the Amazon S3 pricing page for more information. /s3/faqs/;How do I get started with Storage Class Analysis?;You can use the AWS Management Console or the S3 PUT Bucket Analytics API to configure a Storage Class Analysis policy to identify infrequently accessed storage that can be transitioned to the S3 Standard-IA or S3 One Zone-IA storage class or archived to the S3 Glacier storage classes. You can navigate to the “Management” tab in the S3 Console to manage Storage Class Analysis, S3 Inventory, and S3 CloudWatch metrics. /s3/faqs/;What is S3 Inventory?;The S3 Inventory report provides a scheduled alternative to Amazon S3’s synchronous List API. You can configure S3 Inventory to provide a CSV, ORC, or Parquet file output of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or prefix. You can simplify and speed up business workflows and big data jobs with S3 Inventory. You can also use S3 inventory to verify encryption and replication status of your objects to meet business, compliance, and regulatory needs. Learn more at the Amazon S3 Inventory user guide. /s3/faqs/;How do I get started with S3 Inventory?;You can use the AWS Management Console or the PUT Bucket Inventory Configuration API to configure a daily or weekly inventory report for all the objects within your S3 bucket or a subset of the objects under a shared prefix. As part of the configuration, you can specify a destination S3 bucket for your S3 Inventory report, the output file format (CSV, ORC, or Parquet), and specific object metadata necessary for your business application, such as object name, size, last modified date, storage class, version ID, delete marker, non-current version flag, multipart upload flag, replication status, or encryption status. You can use S3 Inventory as a direct input into your application workflows or Big Data jobs. You can also query S3 Inventory using Standard SQL language with Amazon Athena, Amazon Redshift Spectrum, and other tools such as Presto, Hive, and Spark. /s3/faqs/;How am I charged for using S3 Inventory?;See the Amazon S3 pricing page for S3 Inventory pricing. Once you configure encryption using SSE-KMS, you will incur KMS charges for encryption, refer to the KMS pricing page for detail. 
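As an illustration of adding tags with an SDK, here is a minimal boto3 sketch; the bucket, key, and tag values are placeholder assumptions.

import boto3

s3 = boto3.client("s3")

# Replace the full tag set on an existing object (up to 10 tags per object).
s3.put_object_tagging(
    Bucket="example-bucket",
    Key="data/report.csv",
    Tagging={"TagSet": [
        {"Key": "project", "Value": "phoenix"},
        {"Key": "classification", "Value": "internal"},
    ]},
)

# Read the tags back to confirm.
print(s3.get_object_tagging(Bucket="example-bucket", Key="data/report.csv")["TagSet"])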
/s3/faqs/;How do I get started with S3 Batch Operations?;You can get started with S3 Batch Operations by going into the Amazon S3 console or using the AWS CLI or SDK to create your first S3 Batch Operations job. An S3 Batch Operations job consists of the list of objects to act upon and the type of operation to be performed (see the full list of available operations). Start by selecting an S3 Inventory report or providing your own custom list of objects for S3 Batch Operations to act upon. An S3 Inventory report is a file listing all objects stored in an S3 bucket or prefix. Next, you choose from a set of S3 operations supported by S3 Batch Operations, such as replacing tag sets, changing ACLs, copying objects from one bucket to another, or initiating a restore from S3 Glacier Flexible Retrieval to the S3 Standard storage class. You can then customize your S3 Batch Operations jobs with specific parameters such as tag values, ACL grantees, and restoration duration. To further customize your storage actions, you can write your own Lambda function and invoke that code through S3 Batch Operations. /s3/faqs/;What is Amazon S3 Object Lock?;Amazon S3 Object Lock is an Amazon S3 feature that prevents an object version from being deleted or overwritten for a fixed amount of time or indefinitely, so that you can enforce retention policies as an added layer of data protection or for regulatory compliance. You can migrate workloads from existing write-once-read-many (WORM) systems into Amazon S3, and configure S3 Object Lock at the object- and bucket-level to prevent object version deletions prior to pre-defined Retain Until Dates or indefinitely (Legal Hold Dates). S3 Object Lock protection is maintained regardless of which storage class the object version resides in and throughout S3 Lifecycle transitions between storage classes. /s3/faqs/;How does Amazon S3 Object Lock work?;Amazon S3 Object Lock prevents deletion of an object version for the duration of a specified retention period or indefinitely until a legal hold is removed. With S3 Object Lock, you’re able to ensure that an object version remains immutable for as long as WORM protection is applied. You can apply WORM protection by either assigning a Retain Until Date or a Legal Hold to an object version using the AWS SDK, CLI, REST API, or the S3 Management Console. You can apply retention settings within a PUT request, or apply them to an existing object after it has been created. /s3/faqs/;What AWS electronic storage services have been assessed based on financial services regulations?;For customers in the financial services industry, S3 Object Lock provides added support for broker-dealers who must retain records in a non-erasable and non-rewritable format to satisfy regulatory requirements of SEC Rule 17a-4(f), FINRA Rule 4511, or CFTC Regulation 1.31. You can easily designate the records retention time frame to retain regulatory archives in the original form for the required duration, and also place legal holds to retain data indefinitely until the hold is removed. /s3/faqs/;What AWS documentation supports the SEC 17a-4(f)(2)(i) and CFTC 1.31(c) requirement for notifying my regulator?;Provide notification to your regulator or “Designated Examining Authority (DEA)” of your choice to use Amazon S3 for electronic storage along with a copy of the Cohasset Assessment. For the purposes of these requirements, AWS is not a designated third party (D3P). Be sure to select a D3P and include this information in your notification to your DEA. 
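To illustrate the Retain Until Date and Legal Hold mechanisms described in the Object Lock answers above, a hedged boto3 sketch follows. It assumes a bucket that was created with Object Lock enabled; the bucket, key, and date are placeholders.

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Apply a retention period (WORM protection) to an existing object version.
s3.put_object_retention(
    Bucket="example-locked-bucket",
    Key="records/trade-0001.json",
    Retention={
        "Mode": "COMPLIANCE",  # "GOVERNANCE" allows privileged users to shorten or remove it
        "RetainUntilDate": datetime(2030, 1, 1, tzinfo=timezone.utc),
    },
)

# Apply a legal hold, which protects the object version until the hold is removed.
s3.put_object_legal_hold(
    Bucket="example-locked-bucket",
    Key="records/trade-0001.json",
    LegalHold={"Status": "ON"},
)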
/s3/faqs/;What alarms can I set on my storage metrics?;You can use CloudWatch to set thresholds on any of the storage metrics counts, timers, or rates and trigger an action when the threshold is breached. For example, you can set a threshold on the percentage of 4xx Error Responses and when at least three data points are above the threshold trigger a CloudWatch alarm to alert a DevOps engineer. /s3/faqs/;What is S3 Lifecycle management?;S3 Lifecycle management provides the ability to define the lifecycle of your object with a predefined policy and reduce your cost of storage. You can set a lifecycle transition policy to automatically migrate objects stored in the S3 Standard storage class to the S3 Standard-IA, S3 One Zone-IA, and/or S3 Glacier storage classes based on the age of the data. You can also set lifecycle expiration policies to automatically remove objects based on the age of the object. You can set a policy for multipart upload expiration, which expires incomplete multipart uploads based on the age of the upload. /s3/faqs/;How do I set up an S3 Lifecycle management policy?;You can set up and manage Lifecycle policies in the AWS Management Console, S3 REST API, AWS SDKs, or AWS Command Line Interface (CLI). You can specify the policy at the prefix or at the bucket level. /s3/faqs/;How can I use Amazon S3 Lifecycle management to help lower my Amazon S3 storage costs?;With Amazon S3 Lifecycle policies, you can configure your objects to be migrated from the S3 Standard storage class to S3 Standard-IA or S3 One Zone-IA and/or archived to S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive storage classes. You can also specify an S3 Lifecycle policy to delete objects after a specific period of time. You can use this policy-driven automation to quickly and easily reduce storage costs as well as save time. In each rule you can specify a prefix, a time period, a transition to S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, and/or an expiration. For example, you could create a rule that archives into S3 Glacier Flexible Retrieval all objects with the common prefix “logs/” 30 days from creation and expires these objects after 365 days from creation. You can also create a separate rule that only expires all objects with the prefix “backups/” 90 days from creation. S3 Lifecycle policies apply to both existing and new S3 objects, helping you optimize storage and maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration. Within a lifecycle rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. “logs/”). You can specify a transition action to have your objects archived and an expiration action to have your objects removed. For time period, provide the creation date (e.g. January 31, 2015) or the number of days from creation date (e.g. 30 days) after which you want your objects to be archived or removed. You may create multiple rules for different prefixes. /s3/faqs/;How much does it cost to use S3 Lifecycle management?;There is no additional cost to set up and apply Lifecycle policies. A transition request is charged per object when an object becomes eligible for transition according to the Lifecycle rule. Refer to the Amazon S3 pricing page for pricing information. 
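The "logs/" and "backups/" rules described in the Lifecycle answer above could be expressed with boto3 roughly as follows; the bucket name and rule IDs are placeholders, and GLACIER is the API value for the S3 Glacier Flexible Retrieval storage class.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={"Rules": [
        {
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # Move to S3 Glacier Flexible Retrieval 30 days after creation...
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            # ...and delete the objects 365 days after creation.
            "Expiration": {"Days": 365},
        },
        {
            "ID": "expire-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        },
    ]},
)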
/s3/faqs/;Why would I use an S3 Lifecycle policy to expire incomplete multipart uploads?;The S3 Lifecycle policy that expires incomplete multipart uploads allows you to save on costs by limiting the time non-completed multipart uploads are stored. For example, if your application uploads several multipart object parts, but never commits them, you will still be charged for that storage. This policy can lower your S3 storage bill by automatically removing incomplete multipart uploads and the associated storage after a predefined number of days. /s3/faqs/;Can I set up Amazon S3 Event Notifications to send notifications when S3 Lifecycle transitions or expires objects?;Yes, you can set up Amazon S3 Event Notifications to notify you when S3 Lifecycle transitions or expires objects. For example, you can send S3 Event Notifications to an Amazon SNS topic, Amazon SQS queue, or AWS Lambda function when S3 Lifecycle moves objects to a different S3 storage class or expires objects. /s3/faqs/;What features are available to analyze my storage usage on Amazon S3?;S3 Storage Lens delivers organization-wide visibility into object storage usage and activity trends, and makes actionable recommendations to optimize costs and apply data protection best practices. S3 Storage Class Analysis enables you to monitor access patterns across objects to help you decide when to transition data to the right storage class to optimize costs. You can then use this information to configure an S3 Lifecycle policy that makes the data transfer. Amazon S3 Inventory provides a report of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or prefix. This report can be used to help meet business, compliance, and regulatory needs by verifying the encryption and replication status of your objects. /s3/faqs/;What is Amazon S3 Storage Lens?;Amazon S3 Storage Lens provides organization-wide visibility into object storage usage and activity trends, as well as actionable recommendations to optimize costs and apply data protection best practices. Storage Lens offers an interactive dashboard containing a single view of your object storage usage and activity across tens or hundreds of accounts in your organization, with drill-downs to generate insights at multiple aggregation levels. This includes metrics like bytes, object counts, and requests, as well as metrics detailing S3 feature utilization, such as encrypted object counts and S3 Lifecycle rule counts. S3 Storage Lens also delivers contextual recommendations to find ways for you to reduce storage costs and apply best practices on data protection across tens or hundreds of accounts and buckets. S3 Storage Lens free metrics are enabled by default for all Amazon S3 users. If you want to get more out of S3 Storage Lens, you can activate advanced metrics and recommendations. Learn more by visiting the S3 Storage Lens user guide. /s3/faqs/;How does S3 Storage Lens work?;S3 Storage Lens aggregates your storage usage and activity metrics on a daily basis to be visualized in the S3 Storage Lens interactive dashboard, or available as a metrics export in CSV or Parquet file format. A default dashboard is created for you automatically at the account level, and you have the option to create additional custom dashboards. S3 Storage Lens dashboards can be scoped to your AWS organization or specific accounts, Regions, buckets, or even prefix level (available with S3 Storage Lens advanced metrics). 
In configuring your dashboard, you can use the default metrics selection, or upgrade to receive 35 additional metrics and recommendations for an additional cost. Also, S3 Storage Lens provides recommendations contextually with storage metrics in the dashboard, so you can take action to optimize your storage based on the metrics. /s3/faqs/;What are the key questions that can be answered using S3 Storage Lens metrics?;The S3 Storage Lens dashboard is organized around four main types of questions that can be answered about your storage. With the Summary filter, top-level questions related to overall storage usage and activity trends can be explored. For example, “How rapidly is my overall byte count and request count increasing over time?” With the Cost Optimization filter, you can explore questions related to storage cost reduction, for example, “Is it possible for me to save money by retaining fewer non-current versions?” With the Data Protection and Access Management filters you can answer questions about securing your data, for example, “Is my storage protected from accidental or intentional deletion?” Finally, with the Performance and Events filters you can explore ways to improve the performance of workflows. Each of these questions represents a first layer of inquiry that would likely lead to drill-down analysis. /s3/faqs/;What metrics are available in S3 Storage Lens?;"S3 Storage Lens contains more than 60 metrics, grouped into free metrics and advanced metrics (available for an additional cost). Within free metrics, you receive metrics to analyze usage (based on a daily snapshot of your objects), which are organized into the categories of cost optimization, data protection, access management, performance, and events. Within advanced metrics, you receive metrics related to activity (such as request counts), deeper cost optimization (such as S3 Lifecycle rule counts), additional data protection (such as S3 Replication rule counts), and detailed status codes (such as 403 authorization errors). In addition, derived metrics are also provided by combining any base metrics. For example, ""Retrieval Rate"" is a metric calculated by dividing the ""Bytes Downloaded Count"" by the ""Total Storage"". To view the complete list of metrics, visit the S3 Storage Lens documentation." /s3/faqs/;What are my dashboard configuration options?;A default dashboard is automatically configured for your entire account, and you have the option to create additional custom dashboards that can be scoped to your AWS organization, specific regions, or buckets within an account. You can set up multiple custom dashboards, which can be useful if you require some logical separation in your storage analysis, such as segmenting on buckets to represent various internal teams. By default, your dashboard will receive the S3 Storage Lens free metrics, but you have the option to upgrade to receive S3 Storage Lens advanced metrics and recommendations (for an additional cost). S3 Storage Lens advanced metrics have 6 distinct options: Activity metrics, Advanced Cost Optimization metrics, Advanced Data Protection metrics, Detailed Status Code metrics, Prefix aggregation, and CloudWatch publishing. Additionally, for each dashboard you can enable metrics export, with additional options to specify destination bucket and encryption type. 
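For illustration only, a sketch of creating a custom S3 Storage Lens configuration with boto3; the account ID and configuration ID are placeholder assumptions, and only a minimal free-metrics scope is shown rather than the advanced options described above.

import boto3

s3control = boto3.client("s3control")

s3control.put_storage_lens_configuration(
    AccountId="111122223333",     # placeholder account ID
    ConfigId="my-dashboard",
    StorageLensConfiguration={
        "Id": "my-dashboard",
        "IsEnabled": True,
        # Minimal account-level scope; activity metrics, prefix aggregation,
        # and the other advanced options are additional, paid settings.
        "AccountLevel": {"BucketLevel": {}},
    },
)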
/s3/faqs/;How much historical data is available in S3 Storage Lens?;For metrics displayed in the interactive dashboard, Storage Lens free metrics retains 14 days of historical data, and Storage Lens advanced metrics (for an additional cost) retains 15 months of historical data. For the optional metrics export, you can configure any retention period you wish, and standard S3 storage charges will apply. /s3/faqs/;How will I be charged for S3 Storage Lens?;S3 Storage Lens is available in two tiers of metrics. The free metrics are enabled by default and available at no additional charge to all S3 customers. The S3 Storage Lens advanced metrics and recommendations pricing details are available on the S3 pricing page. With S3 Storage Lens free metrics you receive 28 metrics at the bucket level, and can access 14 days of historical data in the dashboard. With S3 Storage Lens advanced metrics and recommendations you receive 35 additional metrics, prefix-level aggregation, CloudWatch metrics support and can access 15 months of historical data in the dashboard. /s3/faqs/;What is the difference between S3 Storage Lens and S3 Inventory?;S3 Inventory provides a list of your objects and their corresponding metadata for an S3 bucket or a shared prefix, which can be used to perform object-level analysis of your storage. S3 Storage Lens provides metrics aggregated by organization, account, region, storage class, bucket, and prefix levels, which improve organization-wide visibility of your storage. /s3/faqs/;What is the difference between S3 Storage Lens and S3 Storage Class Analysis (SCA)?;S3 Storage Class Analysis provides recommendations for an optimal storage class by creating object age groups based on object-level access patterns within an individual bucket/prefix/tag for the previous 30 – 90 days. S3 Storage Lens provides daily organization level recommendations on ways to improve cost efficiency and apply data protection best practices, with additional granular recommendations by account, region, storage class, bucket or prefix (available with S3 Storage Lens advanced metrics). /s3/faqs/;What is Storage Class Analysis?;With Storage Class Analysis, you can analyze storage access patterns to determine the optimal storage class for your storage. This S3 feature automatically identifies infrequent access patterns to help you transition storage to S3 Standard-IA. You can configure a Storage Class Analysis policy to monitor an entire bucket, prefix, or object tag. Once an infrequent access pattern is observed, you can easily create a new S3 Lifecycle age policy based on the results. Storage Class Analysis also provides daily visualizations of your storage usage on the AWS Management Console and you can also enable an export report to an S3 bucket to analyze using business intelligence tools of your choice such as Amazon QuickSight. /s3/faqs/;How often is the Storage Class Analysis updated?;Storage Class Analysis is updated on a daily basis in the S3 Management Console, but initial recommendations for storage class transitions are provided after 30 days. /s3/faqs/;What is S3 Select?;S3 Select is an Amazon S3 feature that makes it easy to retrieve specific data from the contents of an object using simple SQL expressions without having to retrieve the entire object. S3 Select simplifies and improves the performance of scanning and filtering the contents of objects into a smaller, targeted dataset by up to 400%. 
With S3 Select, you can also perform operational investigations on log files in Amazon S3 without the need to operate or manage a compute cluster. /s3/faqs/;What is Amazon Athena?;"Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL queries. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. You don’t even need to load your data into Athena; it works directly with data stored in any S3 storage class. To get started, just log into the Athena Management Console, define your schema, and start querying. Amazon Athena uses Presto with full standard SQL support and works with a variety of standard data formats, including CSV, JSON, ORC, Apache Parquet, and Avro. While Athena is ideal for quick, ad-hoc querying and integrates with Amazon QuickSight for easy visualization, it can also handle complex analysis, including large joins, window functions, and arrays." /s3/faqs/;What is Amazon Redshift Spectrum?;Amazon Redshift Spectrum is a feature of Amazon Redshift that lets you run queries against exabytes of unstructured data in Amazon S3 with no loading or ETL required. When you issue a query, it goes to the Amazon Redshift SQL endpoint, which generates and optimizes a query plan. Amazon Redshift determines what data is local and what is in Amazon S3, generates a plan to minimize the amount of Amazon S3 data that needs to be read, and requests Redshift Spectrum workers out of a shared resource pool to read and process data from Amazon S3. /s3/faqs/;What is Amazon S3 Replication?;Amazon S3 Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can replicate new objects written to the bucket to one or more destination buckets between different AWS Regions (S3 Cross-Region Replication), or within the same AWS Region (S3 Same-Region Replication). You can also replicate existing bucket contents (S3 Batch Replication), including existing objects, objects that previously failed to replicate, and objects replicated from another source. Learn more by visiting the S3 Replication user guide. /s3/faqs/;What is Amazon S3 Cross-Region Replication (CRR)?;CRR is an Amazon S3 feature that automatically replicates data between buckets across different AWS Regions. With CRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use CRR to provide lower-latency data access in different geographic regions. CRR can also help if you have a compliance requirement to store copies of data hundreds of miles apart. You can use CRR to change account ownership for the replicated objects to protect data from accidental deletion. To learn more, visit the S3 CRR user guide. /s3/faqs/;What is Amazon S3 Same-Region Replication (SRR)?;SRR is an Amazon S3 feature that automatically replicates data between buckets within the same AWS Region. With SRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use SRR to create one or more copies of your data in the same AWS Region. SRR helps you address data sovereignty and compliance requirements by keeping a copy of your data in a separate AWS account in the same region as the original. 
You can use SRR to change account ownership for the replicated objects to protect data from accidental deletion. You can also use SRR to easily aggregate logs from different S3 buckets for in-region processing, or to configure live replication between test and development environments. To learn more visit the S3 SRR user guide. /s3/faqs/;How do I enable Amazon S3 Replication (Cross-Region Replication and Same-Region Replication)?;Amazon S3 Replication (CRR and SRR) is configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags. You add a replication configuration on your source bucket by specifying a destination bucket in the same or different AWS Region for replication. /s3/faqs/;How do I use S3 Batch Replication?;You would first need to enable S3 Replication at the bucket level. See the previous question for how you can do so. You may then initiate an S3 Batch Replication job in the S3 console after creating a new replication configuration, changing a replication destination in a replication rule from the replication configuration page, or from the S3 Batch Operations Create Job page. Alternatively, you can initiate an S3 Batch Replication jobs via the AWS CLI or SDKs. To learn more, visit S3 Replication in the Amazon S3 documentation. /s3/faqs/;Can I use S3 Replication with S3 Lifecycle rules?;With S3 Replication, you can establish replication rules to make copies of your objects into another storage class, in the same or a different region. Lifecycle actions are not replicated, and if you want the same lifecycle configuration applied to both source and destination buckets, enable the same lifecycle configuration on both. /s3/faqs/;Can I use replication across AWS accounts to protect against malicious or accidental deletion?;Yes, for CRR and SRR, you can set up replication across AWS accounts to store your replicated data in a different account in the target region. You can use Ownership Overwrite in your replication configuration to maintain a distinct ownership stack between source and destination, and grant destination account ownership to the replicated storage. /s3/faqs/;Will my object tags be replicated if I use Cross-Region Replication?;Object tags can be replicated across AWS Regions using Cross-Region Replication. For customers with Cross-Region Replication already enabled, new permissions are required in order for tags to replicate. For more information about setting up Cross-Region Replication, visit How to Set Up Cross-Region Replication in the Amazon S3 documentation. /s3/faqs/;Can I replicate delete markers from one bucket to another?;Yes, you can replicate delete markers from source to destination if you have delete marker replication enabled in your replication configuration. When you replicate delete markers, Amazon S3 will behave as if the object was deleted in both buckets. You can enable delete marker replication for a new or existing replication rule. You can apply delete marker replication to the entire bucket or to Amazon S3 objects that have a specific prefix, with prefix based replication rules. Amazon S3 Replication does not support delete marker replication for object tag based replication rules. To learn more about enabling delete marker replication see Replicating delete markers from one bucket to another. /s3/faqs/;Can I replicate data from other AWS Regions to China? 
Can a customer replicate from one China Region bucket outside of China Regions?;No, Amazon S3 Replication is not available between AWS China Regions and AWS Regions outside of China. You are only able to replicate within the China Regions. /s3/faqs/;Can I replicate existing objects?;Yes. You can use S3 Batch Replication to replicate existing objects between buckets. To learn more, visit the S3 User Guide. /s3/faqs/;Can I retry replication if objects fail to replicate initially?;Yes. You can use S3 Batch Replication to re-replicate objects that fail to replicate initially. To learn more, visit the S3 User Guide. /s3/faqs/;What encryption types does S3 Replication support?;S3 Replication supports all encryption types that S3 offers. S3 offers both server-side encryption and client-side encryption – the former requests S3 to encrypt the objects for you, and the latter is for you to encrypt data on the client-side before uploading it to S3. For server-side encryption, S3 offers server-side encryption with Amazon S3-managed keys (SSE-S3), server-side encryption with KMS keys stored in AWS Key Management Service (SSE-KMS), and server-side encryption with customer-provided keys (SSE-C). For further details on these encryption types and how they work, visit the S3 documentation on using encryption. /s3/faqs/;What is the pricing for cross account data replication?;With S3 Replication, you can configure cross account replication where the source and destination buckets are owned by different AWS accounts. Excluding S3 storage and applicable retrieval charges, customers pay for replication PUT requests and inter-region Data Transfer OUT from S3 to your destination region when using S3 Replication. If you have S3 Replication Time Control (S3 RTC) enabled on your replication rules, you will see different Data Transfer OUT and replication PUT request charges specific to S3 RTC. For cross account replication, the source account pays for all data transfer (S3 RTC and S3 CRR) and the destination account pays for the replication PUT requests. Data transfer charges only apply for S3 Cross-Region Replication (S3 CRR) and S3 Replication Time Control (S3 RTC); there are no data transfer charges for S3 Same-Region Replication (S3 SRR). /s3/faqs/;What is Amazon S3 Replication Time Control?;Amazon S3 Replication Time Control provides predictable replication performance and helps you meet compliance or business requirements. S3 Replication Time Control is designed to replicate most objects in seconds, and 99.99% of objects within 15 minutes. S3 Replication Time Control is backed by a Service Level Agreement (SLA) commitment that 99.9% of objects will be replicated in 15 minutes for each replication region pair during any billing month. Replication Time works with all S3 Replication features. To learn more, visit the replication documentation. /s3/faqs/;How do I enable Amazon S3 Replication Time Control?;You can enable S3 Replication Time Control as an option for each replication rule. You can either create a new S3 Replication rule with S3 Replication Time Control, or enable the feature on an existing replication rule. /s3/faqs/;Can I use S3 Replication Time Control to replicate data within and between China Regions?;Yes, you can enable Amazon S3 Replication Time Control to replicate data within and between the AWS China (Ningxia) and China (Beijing) Regions. /s3/faqs/;What are Amazon S3 Replication metrics and events?;Amazon S3 Replication metrics and events provide visibility into Amazon S3 Replication. 
With S3 Replication metrics, you can monitor the total number of operations and size of objects that are pending replication, and the replication latency between source and destination buckets for each S3 Replication rule. Replication metrics are available through the Amazon S3 Management Console and through Amazon CloudWatch. S3 Replication events will notify you of replication failures so you can quickly diagnose and correct issues. If you have S3 Replication Time Control (S3 RTC) enabled, you will also receive notifications when an object takes more than 15 minutes to replicate, and when that object replicates successfully to its destination. Like other Amazon S3 events, S3 Replication events are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. /s3/faqs/;How do I enable Amazon S3 Replication metrics and events?;You can enable Amazon S3 Replication metrics and events for new or existing replication rules, and they are enabled by default for S3 Replication Time Control enabled rules. You can access S3 Replication metrics through the Amazon S3 Management Console and Amazon CloudWatch. Like other Amazon S3 events, S3 Replication events are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. To learn more, visit Monitoring progress with replication metrics and Amazon S3 Event Notifications in the Amazon S3 documentation. /s3/faqs/;What is the Amazon S3 Replication Time Control Service Level Agreement (SLA)?;Amazon S3 Replication Time Control is designed to replicate 99.99% of your objects within 15 minutes, and is backed by a service level agreement. If fewer than 99.9% of your objects are replicated in 15 minutes for each replication region pair during a monthly billing cycle, the S3 RTC SLA provides a service credit on any object that takes longer than 15 minutes to replicate. The service credit covers a percentage of all replication-related charges associated with the objects that did not meet the SLA, including the RTC charge, replication bandwidth and request charges, and the cost associated with storing your replica in the destination region in the monthly billing cycle affected. To learn more, read the S3 Replication Time Control SLA. /s3/faqs/;What is the pricing for S3 Replication and S3 Replication Time Control?;For S3 Replication (Cross-Region Replication and Same-Region Replication), you pay the S3 charges for storage in the selected destination S3 storage classes, the storage charges for the primary copy, replication PUT requests, and applicable infrequent access storage retrieval charges. For CRR, you also pay for inter-region Data Transfer OUT from S3 to your destination region. S3 Replication Metrics are billed at the same rate as Amazon CloudWatch custom metrics. Additionally, when you use S3 Replication Time Control, you also pay a Replication Time Control Data Transfer charge. For more information, visit the Amazon S3 pricing page. /s3/faqs/;What are S3 Multi-Region Access Points?;Amazon S3 Multi-Region Access Points accelerate performance by up to 60% when accessing data sets that are replicated across multiple AWS Regions. Based on AWS Global Accelerator, S3 Multi-Region Access Points consider factors like network congestion and the location of the requesting application to dynamically route your requests over the AWS network to the lowest latency copy of your data. 
This automatic routing allows you to take advantage of the global infrastructure of AWS while maintaining a simple application architecture. /s3/faqs/;Why should I use S3 Multi-Region Access Points?;S3 Multi-Region Access Points accelerate and simplify storage for your multi-region applications. By dynamically routing S3 requests made to a replicated data set, S3 Multi-Region Access Points reduce request latency, so that applications run up to 60% faster. S3 Multi-Region Access Points can also help you build resilient, multi-region and multi-account applications that are more protected against accidental or unauthorized data deletion. With S3 Multi-Region Access Points, you are able to take advantage of the global infrastructure of AWS while maintaining a simple region-agnostic architecture for your applications. /s3/faqs/;How do S3 Multi-Region Access Points work?;Multi-Region Access Points dynamically route client requests to one or more underlying S3 buckets. You can configure your Multi-Region Access Point to route across one bucket per AWS Region, in up to 20 AWS Regions. When you create a Multi-Region Access Point, S3 automatically generates a DNS-compatible name. This name is used as a global endpoint that can be used by your clients. When your clients make requests to this endpoint, S3 will dynamically route those requests to one of the underlying buckets that are specified in the configuration of your Multi-Region Access Point. Internet-based requests are onboarded to the AWS global network to avoid congested network segments on the internet, which reduces network latency and jitter while improving performance. Based on AWS Global Accelerator, applications that access S3 over the internet can see performance further improved by up to 60% by S3 Multi-Region Access Points. /s3/faqs/;How do S3 Multi-Region Access Points failover controls work?;By default, S3 Multi-Region Access Points route requests to the underlying bucket closest to the client, based on network latency in an active-active configuration. For example, you can configure a Multi-Region Access Point with underlying buckets in US East (N. Virginia) and in Asia Pacific (Mumbai). With this configuration, your clients in North America route to US East (N. Virginia), while your clients in Asia route to Asia Pacific (Mumbai). This lowers latency for your requests made to S3, improving the performance of your application. If you prefer an active-passive configuration, all S3 data request traffic can be routed through the S3 Multi-Region Access Point to US East (N. Virginia) as the active Region and no traffic will be routed to Asia Pacific (Mumbai). If there is a planned or unplanned need to fail over all of the S3 data request traffic to Asia Pacific (Mumbai), you can initiate a failover to switch to Asia Pacific (Mumbai) as the new active Region within minutes. Any existing uploads or downloads in progress in US East (N. Virginia) continue to completion and all new S3 data request traffic through the S3 Multi-Region Access Point is routed to Asia Pacific (Mumbai). /s3/faqs/;Can S3 Multi-Region Access Points work with buckets owned by different AWS accounts?;Yes. You can add buckets owned by multiple AWS accounts to an S3 Multi-Region Access Point by entering the account IDs that own the buckets when you create the Multi-Region Access Point. Each S3 Multi-Region Access Point has distinct settings for Amazon S3 Block Public Access.
/s3/faqs/;How do Block Public Access settings work for Multi-Region Access Points that span multiple AWS accounts?;Each S3 Multi-Region Access Point has distinct settings for Amazon S3 Block Public Access. These settings operate in conjunction with the Block Public Access settings for the buckets that underlie the Multi-Region Access Point, the Block Public Access settings for the AWS account that owns the Multi-Region Access Point, and the Block Public Access settings for the AWS accounts that own the underlying buckets. When Amazon S3 authorizes a request, it applies the most restrictive combination of these settings. If the Block Public Access settings for any of these resources (the Multi-Region Access Point, the underlying bucket, the Multi-Region Access Point owner account, or the bucket owner account) block access for the requested action or resource, Amazon S3 rejects the request. This behavior is consistent with cross-account S3 Access Points; the same authorization logic is applied when serving requests for cross-account S3 Access Points and cross-account S3 Multi-Region Access Points. /s3/faqs/;What is the difference between S3 Cross-Region Replication (S3 CRR) and S3 Multi-Region Access Points?;S3 CRR and S3 Multi-Region Access Points are complementary features that work together to replicate data across AWS Regions and then to automatically route requests to the replicated copy with the lowest latency. S3 Multi-Region Access Points help you to manage requests across AWS Regions, while CRR allows you to move data across AWS Regions to create isolated replicas. You use S3 Multi-Region Access Points and CRR together to create a replicated multi-Region dataset that is addressable by a single global endpoint. /s3/faqs/;How much do S3 Multi-Region Access Points cost?;When you use an S3 Multi-Region Access Point to route requests within AWS, you pay a low per-GB data routing charge for each GB processed, as well as standard charges for S3 requests, storage, data transfer, and replication. If your application runs outside of AWS and accesses S3 over the internet, S3 Multi-Region Access Points increase performance by automatically routing your requests through an AWS edge location, over the global private AWS network, to the closest copy of your data based on access latency. When you accelerate requests made over the internet, you pay the data routing charge and an internet acceleration charge. S3 Multi-Region Access Points internet acceleration pricing varies based on whether the source client is in the same or in a different location as the destination AWS Region, and is in addition to standard S3 data transfer pricing. To use S3 Multi-Region Access Points failover controls, you are only charged for standard S3 API costs to view the current routing control status of each Region and submit any routing control changes for initiating a failover. See the Amazon S3 pricing page and the data transfer tab for more pricing information. /s3/faqs/;Can I use Requester Pays buckets with S3 Multi-Region Access Points?;Yes, you can configure the underlying buckets of the S3 Multi-Region Access Point to be Requester Pays buckets. With Requester Pays, the requester pays all of the cost associated with the endpoint usage, including the cost for requests and the data transfer cost associated with both the bucket and the Multi-Region Access Point. Typically, you want to configure your buckets as Requester Pays buckets if you wish to share data but not incur charges associated with others accessing the data. In general, bucket owners pay for all Amazon S3 storage associated with their bucket. To learn more, please visit S3 Requester Pays. 
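Where a Requester Pays bucket is involved, requests must acknowledge the charge explicitly, as in the following small boto3 sketch; the bucket and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# The RequestPayer parameter confirms that the requester accepts the request
# and data transfer charges for this download from a Requester Pays bucket.
response = s3.get_object(
    Bucket="example-shared-dataset",
    Key="public/tile-0001.tif",
    RequestPayer="requester",
)
data = response["Body"].read()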
/s3/faqs/;How is S3 Transfer Acceleration different than S3 Multi-Region Access Points?;S3 Multi-Region Access Points and S3 Transfer Acceleration provide similar performance benefits. You can use S3 Transfer Acceleration to speed up content transfers to and from Amazon S3 using the AWS global network. S3 Transfer Acceleration can help accelerate long-distance transfers of larger objects to and from a single Amazon S3 bucket. With S3 Multi-Region Access Points, you can perform similar accelerated transfers using the AWS global network, but across many S3 buckets in multiple AWS Regions for internet-based, VPC-based, and on-premises requests to and from S3. When you combine S3 Multi-Region Access Points with S3 Cross-Region Replication, you provide the capability for S3 Multi-Region Access Points to dynamically route your requests to the lowest latency copy of your data for applications from clients in multiple locations. /s3/faqs/;How do I get started with S3 Multi-Region Access Points and failover controls?;The S3 console provides a simple guided workflow to quickly set up everything you need to run multi-Region storage on S3 in just three simple steps. First, create an Amazon S3 Multi-Region Access Point endpoint and specify the AWS Regions you want to replicate and fail over between. You can add buckets in multiple AWS accounts to a new S3 Multi-Region Access Point by entering the account IDs that own the buckets at the time of creation. Second, for each AWS Region and S3 bucket behind your S3 Multi-Region Access Point endpoint, specify whether their routing status is active or passive, where active AWS Regions accept S3 data request traffic, and passive Regions are not routed to until you initiate a failover. Third, configure your S3 Cross-Region Replication rules to synchronize your data in S3 between the Regions and/or accounts. You can then initiate a failover at any time between the AWS Regions within minutes to shift your S3 data requests and monitor the shift of your S3 traffic to your new active AWS Region in Amazon CloudWatch. Alternatively, you can use AWS CloudFormation to automate your multi-Region storage configuration. All of the building blocks required to set up multi-Region storage on S3, including S3 Multi-Region Access Points, are supported by CloudFormation, allowing you to automate a repeatable setup process outside of the S3 console. /s3/faqs/;What is S3 Object Lambda?;S3 Object Lambda allows you to add your own code to S3 GET, LIST, and HEAD requests to modify and process data as it is returned to an application. You can use custom code to modify the data returned by S3 GET requests to filter rows, dynamically resize images, redact confidential data, and much more. You can also use S3 Object Lambda to modify the output of S3 LIST requests to create a custom view of objects in a bucket and S3 HEAD requests to modify object metadata like object name and size. S3 Object Lambda helps you to easily meet the unique data format requirements of any application without having to build and operate additional infrastructure, such as a proxy layer, or having to create and maintain multiple derivative copies of your data. S3 Object Lambda uses AWS Lambda functions to automatically process the output of a standard S3 GET, LIST, or HEAD request. AWS Lambda is a serverless compute service that runs customer-defined code without requiring management of underlying compute resources. 
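As a hedged sketch of the S3 Object Lambda flow just described for GET requests, the Lambda function attached to an Object Lambda Access Point typically fetches the original object from the presigned URL supplied in the event, transforms it, and returns the result with WriteGetObjectResponse; the transformation below is a placeholder.

import boto3
import urllib.request

s3 = boto3.client("s3")

def handler(event, context):
    ctx = event["getObjectContext"]
    # Presigned URL for the original object that the client requested.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()

    # Placeholder transformation: uppercase the text content.
    transformed = original.decode("utf-8").upper().encode("utf-8")

    # Stream the processed object back to the requesting client.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=transformed,
    )
    return {"statusCode": 200}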
/s3/faqs/;Why should I use S3 Object Lambda?;You should use S3 Object Lambda if you want to process data inline with an S3 GET, LIST, or HEAD request. You can use S3 Object Lambda to share a single copy of your data across many applications, avoiding the need to build and operate custom processing infrastructure or to store derivative copies of your data. For example, by using S3 Object Lambda to process S3 GET requests, you can mask sensitive data for compliance purposes, restructure raw data for the purpose of making it compatible with machine learning applications, filter data to restrict access to specific content within an S3 object, or to address a wide range of additional use cases. You can use S3 Object Lambda to enrich your object lists by querying an external index that contains additional object metadata, filter and mask your object lists to only include objects with a specific object tag, or add a file extension to all the object names in your object lists. For example, if you have an S3 bucket with multiple discrete data sets, you can use S3 Object Lambda to filter an S3 LIST response depending on the requester. /s3/faqs/;How does S3 Object Lambda work?;S3 Object Lambda uses Lambda functions specified by you to process the output of GET, LIST, and HEAD requests. Once you have defined a Lambda function to process requested data, you can attach that function to an S3 Object Lambda Access Point. GET, LIST, and HEAD requests made through an S3 Object Lambda Access Point will now invoke the specified Lambda function. Lambda will then fetch the S3 object requested by the client and process that object. Once processing has completed, Lambda will stream the processed object back to the calling client. Read the S3 Object Lambda user guide to learn more. /s3/faqs/;How do I get started with S3 Object Lambda?;S3 Object Lambda can be set up in multiple ways. You can set up S3 Object Lambda in the S3 console by navigating to the Object Lambda Access Point tab. Next, create an S3 Object Lambda Access Point, the Lambda function that you would like S3 to execute against your GET, LIST, and HEAD requests, and a supporting S3 Access Point. Grant permissions to all resources to interact with Object Lambda. Lastly, update your SDK and application to use the new S3 Object Lambda Access Point to retrieve data from S3 using the language SDK of your choice. You can use an S3 Object Lambda Access Point alias when making requests. Aliases for S3 Object Lambda Access Points are automatically generated and are interchangeable with S3 bucket names for data accessed through S3 Object Lambda. For existing S3 Object Lambda Access Points, aliases are automatically assigned and ready for use. There are example Lambda function implementations in the AWS documentation to help you get started. /s3/faqs/;What kinds of operations can I perform with S3 Object Lambda?;Any operation supported in a Lambda function is supported with S3 Object Lambda. This gives you a wide range of available options for processing your requests. You supply your own Lambda function to run custom computations against GET, LIST, and HEAD requests, giving you the flexibility to process data according to the needs of your application. Lambda processing time is limited to a maximum of 60 seconds. For more details, see the S3 Object Lambda documentation. /s3/faqs/;Which S3 request types does S3 Object Lambda support?;S3 Object Lambda supports GET, LIST and HEAD requests. 
Any other S3 API calls made to an S3 Object Lambda Access Point will return the standard S3 API response. Learn more about S3 Object Lambda in the user guide. /s3/faqs/;What will happen when an S3 Object Lambda function fails?;When an S3 Object Lambda function fails, you will receive a request response detailing the failure. As with other Lambda function invocations, AWS automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures, Lambda logs all requests processed by your function and automatically stores logs generated by your code with Amazon CloudWatch Logs. For more information about accessing CloudWatch logs for AWS Lambda, visit CloudWatch documentation. /s3/faqs/;Does S3 Object Lambda affect the S3 availability SLA or S3 durability?;S3 Object Lambda connects Amazon S3, AWS Lambda, and optionally, other AWS services of your choosing to deliver objects relevant to requesting applications. All AWS services used in connection with S3 Object Lambda will continue to be governed by their respective Service Level Agreements (SLA). For example, in the event that any AWS Service does not meet its Service Commitment, you will be eligible to receive a Service Credit as documented in that service’s SLA. Creating an S3 Object Lambda Access Point does not impact the durability of your objects. However, S3 Object Lambda invokes your specified AWS Lambda function and you must ensure your specified Lambda function is intended and correct. See the latest Amazon S3 SLA here. /s3/faqs/;How much does S3 Object Lambda cost?;When you use S3 Object Lambda, you pay a per-GB charge for every gigabyte of data returned to you through S3 Object Lambda. You are also charged for requests based on the request type (GET, LIST, and HEAD requests) and AWS Lambda compute charges for the time your specified function is running to process the requested data. To see pricing details and an example, read the S3 pricing page. /backup/faqs/;What is AWS Backup?;Amazon Elastic Block Store (EBS) volumes, Amazon EC2 instances (including Windows applications), AWS CloudFormation stacks, Windows Volume Shadow Copy Service (VSS) supported applications (including Windows Server, Microsoft SQL Server, and Microsoft Exchange Server) on EC2, Amazon RDS databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (EFS) file systems, Amazon FSx for NetApp ONTAP file systems, Amazon FSx for OpenZFS file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems, Amazon Neptune databases, Amazon DocumentDB (with MongoDB compatibility) databases, AWS Storage Gateway volumes, Amazon S3, VMware Cloud on AWS and on-premises VMware virtual machines, Amazon Redshift manual snapshots, SAP HANA on EC2, and Amazon Timestream databases /backup/faqs/;How does AWS Backup work?;Amazon Elastic Block Store (EBS) volumes, Amazon EC2 instances (including Windows applications), AWS CloudFormation stacks, Windows Volume Shadow Copy Service (VSS) supported applications (including Windows Server, Microsoft SQL Server, and Microsoft Exchange Server) on EC2, Amazon RDS databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (EFS) file systems, Amazon FSx for NetApp ONTAP file systems, Amazon FSx for OpenZFS file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems, Amazon Neptune databases, Amazon DocumentDB (with MongoDB compatibility) databases, AWS Storage Gateway volumes, Amazon S3, VMware Cloud on AWS and on-premises VMware virtual machines, Amazon Redshift manual snapshots, SAP HANA on EC2, and Amazon Timestream databases /backup/faqs/;Why should I use AWS Backup?;Amazon Elastic Block Store (EBS) volumes, Amazon EC2 instances (including Windows applications), AWS CloudFormation stacks, Windows Volume Shadow Copy Service (VSS) supported applications (including Windows Server, Microsoft SQL Server, and Microsoft Exchange Server) on EC2, Amazon RDS databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (EFS) file systems, Amazon FSx for NetApp ONTAP file systems, Amazon FSx for OpenZFS file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems, Amazon Neptune databases, Amazon DocumentDB (with MongoDB compatibility) databases, AWS Storage Gateway volumes, Amazon S3, VMware Cloud on AWS and on-premises VMware virtual machines, Amazon Redshift manual snapshots, SAP HANA on EC2, and Amazon Timestream databases /backup/faqs/;What are the key features of AWS Backup?;Amazon Elastic Block Store (EBS) volumes, Amazon EC2 instances (including Windows applications), AWS CloudFormation stacks, Windows Volume Shadow Copy Service (VSS) supported applications (including Windows Server, Microsoft SQL Server, and Microsoft Exchange Server) on EC2, Amazon RDS databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (EFS) file systems, Amazon FSx for NetApp ONTAP file systems, Amazon FSx for OpenZFS file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems, Amazon Neptune databases, Amazon DocumentDB (with MongoDB compatibility) databases, AWS Storage Gateway volumes, Amazon S3, VMware Cloud on AWS and on-premises VMware virtual machines, Amazon Redshift manual snapshots, SAP HANA on EC2, and Amazon Timestream databases /backup/faqs/;What can I back up using AWS Backup?;Amazon Elastic Block Store (EBS) volumes, Amazon EC2 instances (including Windows applications), AWS CloudFormation stacks, Windows Volume Shadow Copy Service (VSS) supported applications (including Windows Server, Microsoft SQL Server, and Microsoft Exchange Server) on EC2, Amazon RDS databases (including Amazon Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (EFS) file systems, Amazon FSx for NetApp ONTAP file systems, Amazon FSx for OpenZFS file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems, Amazon Neptune databases, Amazon DocumentDB (with MongoDB compatibility) databases, AWS Storage Gateway volumes, Amazon S3, VMware Cloud on AWS and on-premises VMware virtual machines, Amazon Redshift manual snapshots, SAP HANA on EC2, and Amazon Timestream databases /backup/faqs/;Can I use AWS Backup to back up on-premises data?;Yes, you can use AWS Backup to back up on-premises Storage Gateway volumes and VMware virtual machines, providing a common way to manage the backups of your application data both on premises and on AWS. 
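A hedged boto3 sketch of defining a simple backup plan and assigning resources to it by tag; the vault name, IAM role ARN, schedule, and tag values are placeholder assumptions.

import boto3

backup = boto3.client("backup")

# Daily backups at 5:00 UTC, kept for 35 days.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "daily-35-day-retention",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",          # placeholder vault
        "ScheduleExpression": "cron(0 5 * * ? *)",
        "Lifecycle": {"DeleteAfterDays": 35},
    }],
})

# Assign resources to the plan: any resource tagged backup=true.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{"ConditionType": "STRINGEQUALS",
                        "ConditionKey": "backup",
                        "ConditionValue": "true"}],
    },
)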
Amazon Data Lifecycle Manager policies and backup plans created in AWS Backup work independently from each other and provide two ways to manage EBS snapshots. Amazon Data Lifecycle Manager provides a streamlined way to manage the lifecycle of EBS resources, such as volume snapshots. Use Amazon Data Lifecycle Manager when you want to automate the creation, retention, and deletion of EBS snapshots. Use AWS Backup to manage and monitor backups across the AWS services you use, including EBS volumes, from a single place. /backup/faqs/;What is AWS Backup Audit Manager?; You can use AWS Backup Audit Manager through the AWS Management Console, CLI, API, or SDK. AWS Backup Audit Manager provides built-in compliance controls. You can customize these controls to define your data protection policies. It is designed to automatically detect violations of your defined data protection policies and will prompt you to take corrective actions. With AWS Backup Audit Manager, continuously evaluate backup activity and generate audit reports to demonstrate compliance with regulatory requirements. /backup/faqs/;Why should I use AWS Backup Audit Manager?; An AWS Backup Audit Manager control is a procedure designed to audit the compliance of a backup requirement, such as backup frequency or backup retention period. An AWS Backup Audit Manager framework is a collection of controls that can be deployed and managed as a single entity. /backup/faqs/;How can I use AWS Backup Audit Manager?; An AWS Backup Audit Manager control is a procedure designed to audit the compliance of a backup requirement, such as backup frequency or backup retention period. An AWS Backup Audit Manager framework is a collection of controls that can be deployed and managed as a single entity. /backup/faqs/;What is an AWS Backup Audit Manager control and framework?; An AWS Backup Audit Manager control evaluates the configuration of your backup resources against your defined configuration settings. If the resource meets the configuration defined in the control, then the compliance status of the resource for that control is COMPLIANT. If it does not, then the status is NON_COMPLIANT. If all the resources evaluated by an AWS Backup Audit Manager control are compliant, then the compliance status of the control is COMPLIANT. Similarly, if all the controls in a framework are compliant, then the compliance status of the framework is COMPLIANT. /backup/faqs/;How does an AWS Backup Audit Manager control work?; On the AWS Backup console, navigate to the AWS Backup Audit Manager Frameworks section and select the framework name to view the compliance status of your framework and controls. /backup/faqs/;How can I view the compliance results of my AWS Backup Audit Manager controls and frameworks?; You can create reports related to your AWS Backup activity. These reports help you get details of your backup, copy, and restore jobs. You can use these reports to monitor your operational posture and identify any failures that might need further action. /backup/faqs/;What kind of reports can I create in AWS Backup Audit Manager?; Yes. AWS Backup is HIPAA eligible, which means if you have a HIPAA BAA in place with AWS, you can use AWS Backup to transfer protected health information (PHI). /datasync/faqs/;What is AWS DataSync?;AWS DataSync is an online data movement and discovery service that simplifies and accelerates data migrations to AWS as well as moving data between on-premises storage, edge locations, other clouds, and AWS Storage. 
AWS DataSync Discovery (Preview) helps you simplify migration planning and accelerate your data migration to AWS by giving you visibility into your on-premises storage performance and utilization, and providing recommendations for migrating your data to AWS Storage services. DataSync Discovery enables you to better understand your on-premises storage performance and capacity usage through automated data collection and analysis, enabling you to quickly identify data to be migrated and use generated recommendations to select AWS Storage services that align to your performance and capacity needs. /datasync/faqs/;Why should I use AWS DataSync?;AWS DataSync enables you to discover and move your data, securely and quickly. Using DataSync Discovery (Preview), you can better understand your on-premises storage utilization and receive recommendations to inform your cost estimates and plans for migrating to AWS. For data movement, you can use DataSync to copy large datasets with millions of files, without having to build custom solutions with open-source tools, or license and manage expensive commercial network acceleration software. You can use DataSync to migrate active data to AWS, archive data to free up on-premises storage capacity, replicate data to AWS for business continuity, or transfer data to the cloud for analysis and processing. /datasync/faqs/;What problem does AWS DataSync Discovery (Preview) solve for me?;AWS DataSync Discovery (Preview) simplifies and accelerates data migration to AWS. Using DataSync Discovery, you can automatically collect data about your on-premises storage systems and view the aggregated results in the DataSync console. DataSync Discovery analyzes performance, capacity, and utilization from the collected data and recommends AWS Storage services for migration. With DataSync Discovery you can better understand your on-premises storage utilization, quickly identify data to be migrated, and select AWS Storage services that meet your performance needs and optimize your storage costs. /datasync/faqs/;What problem does AWS DataSync solve for me?;AWS DataSync reduces the complexity and cost of online data transfer, making it simple to transfer datasets between on-premises, edge, or other cloud storage and AWS Storage services, as well as between AWS Storage services. DataSync connects to existing storage systems and data sources with standard storage protocols (NFS, SMB), as an HDFS client, using the Amazon S3 API, or using other cloud storage APIs. It uses a purpose-built network protocol and scale-out architecture to accelerate data transfer between storage systems and AWS services. DataSync automatically scales and handles moving files and objects, scheduling data transfers, monitoring the progress of transfers, encryption, verification of data transfers, and notifying customers of any issues. With DataSync you pay only for the amount of data copied, with no minimum commitments or upfront fees. /datasync/faqs/;What storage systems are supported by AWS DataSync Discovery (Preview)?; Collected data will be stored and managed by the DataSync service. Data can be viewed in the AWS DataSync console or accessed using the AWS CLI or AWS Software Development Kit (SDK). /datasync/faqs/;What information does AWS DataSync Discovery (Preview) collect about my storage system?; Collected data will be stored and managed by the DataSync service. Data can be viewed in the AWS DataSync console or accessed using the AWS CLI or AWS Software Development Kit (SDK). 
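As a rough illustration of reading those collected results programmatically, the following boto3 sketch lists discovery jobs for a registered storage system and prints the discovered volumes. The operation names follow the DataSync Discovery API, but the exact parameters, response fields, and the storage system ARN shown here should be treated as assumptions.

    import boto3

    datasync = boto3.client("datasync")

    # Placeholder ARN for a storage system registered with add_storage_system.
    system_arn = "arn:aws:datasync:us-east-1:111122223333:system/storage-system-0abc"

    # Find the most recent discovery job for that system.
    jobs = datasync.list_discovery_jobs(StorageSystemArn=system_arn)["DiscoveryJobs"]
    latest = jobs[0]["DiscoveryJobArn"]

    # Inspect the volumes that were discovered and any recommendations generated
    # for them (response field names are assumptions for illustration).
    volumes = datasync.describe_storage_system_resources(
        DiscoveryJobArn=latest, ResourceType="VOLUME"
    )
    for vol in volumes["ResourceDetails"].get("NetAppONTAPVolumes", []):
        print(vol.get("ResourceId"), vol.get("Recommendations"))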
/datasync/faqs/;How does AWS DataSync Discovery (Preview) determine its recommendations?; Collected data will be stored and managed by the DataSync service. Data can be viewed in the AWS DataSync console or accessed using the AWS CLI or AWS Software Development Kit (SDK). /datasync/faqs/;How do I use AWS DataSync to migrate data to AWS?;"You can use AWS DataSync to migrate data located on premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP. Configure DataSync to make an initial copy of your entire dataset, and schedule subsequent incremental transfers of changing data until the final cut-over from on-premises to AWS. DataSync includes encryption and integrity validation to help make sure your data arrives securely, intact, and ready to use. To minimize impact on workloads that rely on your network connection, you can schedule your migration to run during off-hours, or limit the amount of network bandwidth that DataSync uses by configuring the built-in bandwidth throttle. DataSync preserves metadata between storage systems that have similar metadata structures, enabling a smooth transition of end users and applications to using your target AWS Storage service. Read the storage blog, ""Migrating storage with AWS DataSync,"" to learn more about migration best practices and tips." /datasync/faqs/;How do I use AWS DataSync to archive cold data?;You can use AWS DataSync to move cold data from on-premises storage systems directly to durable and secure long-term storage, such as Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier) or Amazon S3 Glacier Deep Archive. Use DataSync’s filtering functionality to exclude copying temporary files and folders or copying only a subset of files from your source location. You can select the most cost-effective storage service for your needs: transfer data to any S3 storage class, or use DataSync with EFS Lifecycle Management to store data in Amazon EFS Infrequent Access storage class (EFS IA). Use the built-in task scheduling functionality to regularly archive data that should be retained for compliance or auditing purposes, such as logs, raw footage, or electronic medical records. /datasync/faqs/;How do I use AWS DataSync to replicate data to AWS for business continuity?;With AWS DataSync, you can periodically replicate files into any Amazon S3 storage classes, or send the data to Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, or Amazon FSx for NetApp ONTAP for a standby file system. Use the built-in task scheduling functionality to ensure that changes to your dataset are regularly copied to your destination storage. Read this AWS Storage blog to learn more about data protection using AWS DataSync. /datasync/faqs/;How do I use AWS DataSync for recurring transfers between on-premises and AWS for ongoing workflows?;You can use AWS DataSync for ongoing transfers from on-premises systems into or out of AWS for processing. DataSync can help speed up your critical hybrid cloud storage workflows in industries that need to move active files into AWS quickly. This includes machine learning in life sciences, video production in media and entertainment, big data analytics in financial services, and seismic research in oil and gas. DataSync provides timely delivery to ensure dependent processes are not delayed. 
You can specify exclude filters, include filters, or both, to determine which files, folders, or objects get transferred each time your task runs. /datasync/faqs/;Can I use AWS DataSync to copy data from other public clouds to AWS?;Yes. Using AWS DataSync, you can copy data from Google Cloud Storage using the S3 API, from Azure Files using the SMB protocol, or from Azure Blob Storage, including Azure Data Lake Storage Gen2 (Preview). Simply deploy the DataSync agent in your cloud environment or on Amazon EC2, create your source and destination locations, and then start your task to begin copying data. Learn more about using DataSync to copy data from Google Cloud Storage or from Azure Files. /datasync/faqs/;Can I use AWS DataSync to build my data lake?;Yes. With AWS DataSync, you can easily build your data lake by automating the transfer of on-premises datasets or data in other clouds to Amazon S3. DataSync enables a simple and fast transfer of your entire data set using standard storage protocols (NFS, SMB), as an HDFS client, using the Amazon S3 API, or using other cloud storage APIs. After transferring your initial dataset, you can schedule subsequent transfers of new data to AWS. DataSync includes encryption and integrity validation to help make sure your data arrives securely, intact, and ready to use. To minimize impact on workloads that rely on your network connection, you can schedule transfer tasks to run during off-hours, or limit the amount of network bandwidth that DataSync uses by configuring the built-in bandwidth throttle. Once your data lands in Amazon S3, you can use native AWS services to run big data analytics, artificial intelligence (AI), machine learning (ML), high-performance computing (HPC) and media data processing applications to gain insights from your unstructured data sets. Read the AWS data lake storage web page to learn more about building and leveraging your data lake. /datasync/faqs/;How do I use AWS DataSync to transfer data between AWS Storage services?;You can use DataSync to transfer files or objects between Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, or Amazon FSx for NetApp ONTAP within the same AWS account. You can transfer data between AWS services in the same AWS Region, between services in different Commercial AWS Regions except for China, or between AWS GovCloud (US-East and US-West) Regions. This does not require deploying a DataSync agent, and can be configured end to end using the AWS DataSync console, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK). /datasync/faqs/;Can I use AWS DataSync to migrate to Amazon WorkDocs?;Yes. AWS DataSync accelerates a required step for Amazon WorkDocs Migration Service by automating file upload to the Amazon S3 bucket that is used for the migration. DataSync makes it easier and faster to migrate home directories and department shares to WorkDocs. To learn more about using DataSync for migrations to WorkDocs, read the blog 'Migrating network file shares to Amazon WorkDocs using AWS DataSync.' /datasync/faqs/;How do I get started moving my data with AWS DataSync?;You can transfer data using AWS DataSync with a few clicks in the AWS Management Console or through the AWS Command Line Interface (CLI). To get started, follow these 3 steps: /datasync/faqs/;How do I deploy an AWS DataSync agent?;You deploy an AWS DataSync agent to your on-premises hypervisor, in your public cloud environment, or in Amazon EC2.
To copy data to or from an on-premises file server, you download the agent virtual machine image from the AWS Console and deploy to your on-premises VMware ESXi, Linux Kernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisor. When a DataSync agent is used, the agent must be deployed so that it can access your file server using the NFS, SMB protocol, access NameNodes and DataNodes in your Hadoop cluster, or access your self-managed object storage using the Amazon S3 API. To set up transfers between your S3 on AWS Outposts buckets and S3 buckets in AWS Regions, deploy the agent on your Outpost. To set up transfers between your AWS Snowcone device and AWS storage, use the DataSync agent AMI that comes pre-installed on your device. /datasync/faqs/;What are the resource requirements for the AWS DataSync agent?;You can find the minimum required resources to run the agent here. /datasync/faqs/;How can I monitor the status of data being transferred by AWS DataSync?;You can use the AWS Management Console or CLI to monitor the status of data being transferred. Using Amazon CloudWatch Metrics, you can see the number of files and amount of data which has been copied. You can also enable logging of individual files to CloudWatch Logs, to identify what was transferred at a given time, as well as the results of the content integrity verification performed by DataSync. This simplifies monitoring, reporting, and troubleshooting, and enables you to provide timely updates to stakeholders. You can find additional information, such as transfer progress, in the AWS Management Console or CLI. /datasync/faqs/;Can I filter the files and folders that AWS DataSync transfers?;Yes. You can specify an exclude filter, an include filter, or both to limit which files, folders, or objects are transferred each time a task runs. Include filters specify the file paths or object keys that should be included when the task runs and limits the scope of what is scanned by DataSync on the source and destination. Exclude filters specify the file paths or object keys that should be excluded from being copied. If no filters are configured, each time a task runs it will transfer all changes from the source to the destination. When creating or updating a task, you can configure both exclude and include filters. When starting a task, you can override filters configured on the task. Read this AWS storage blog to learn more about using common filters with DataSync for more detail. /datasync/faqs/;Can I configure AWS DataSync to transfer on a schedule?;Yes. You can schedule your tasks using the AWS DataSync Console or AWS Command Line Interface (CLI), without needing to write and run scripts to manage repeated transfers. Task scheduling automatically runs tasks on the schedule you configure, with hourly, daily, or weekly options provided directly in the Console. This enables you to ensure that changes to your dataset are automatically detected and copied to your destination storage. /datasync/faqs/;Does AWS DataSync preserve the directory structure when copying files?;Yes. When transferring files, AWS DataSync creates the same directory structure on the destination as on the source location's structure. /datasync/faqs/;What happens if an AWS DataSync task is interrupted?;If a task is interrupted, for instance, if the network connection goes down or the AWS DataSync agent is restarted, the next run of the task will transfer missing files, and the data will be complete and consistent at the end of this run. 
Each time a task is started it performs an incremental copy, transferring only the changes from the source to the destination. /datasync/faqs/;Can I use AWS DataSync with AWS Direct Connect?;You can use AWS DataSync with your Direct Connect link to access public service endpoints or private VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and AWS services does not traverse the public internet or need public IP addresses, increasing the security of data as it is copied over the network. DataSync Discovery (Preview) is currently only supported with public service endpoints. /datasync/faqs/;Does AWS DataSync support VPC endpoints or AWS PrivateLink?;Yes, VPC endpoints are supported for data movement use cases. You can use VPC endpoints to ensure data transferred between your AWS DataSync agent, either deployed on-premises or in-cloud, doesn't traverse the public internet or need public IP addresses. Using VPC endpoints increases the security of your data by keeping network traffic within your Amazon Virtual Private Cloud (Amazon VPC). VPC endpoints for DataSync are powered by AWS PrivateLink, a highly available, scalable technology that enables you to privately connect your VPC to supported AWS services. /datasync/faqs/;How do I configure AWS DataSync to use VPC endpoints?;To use VPC endpoints with AWS DataSync, you create an AWS PrivateLink interface VPC endpoint for the DataSync service in your chosen VPC, and then choose this endpoint elastic network interface (ENI) when creating your DataSync agent. Your agent will connect to this ENI to activate, and subsequently all data transferred by the agent will remain within your configured VPC. You can use the AWS DataSync Console, AWS Command Line Interface (CLI), or AWS SDK to configure VPC endpoints. To learn more, see Using AWS DataSync in a Virtual Private Cloud. /datasync/faqs/;Which AWS Storage services are supported by AWS DataSync?;AWS DataSync supports moving data to, from, or between Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP. /datasync/faqs/;Can I copy my data into Amazon S3 Glacier Flexible Retrieval (formerly S3 Glacier), Amazon S3 Glacier Deep Archive, or other S3 storage classes?;Yes. When configuring an S3 bucket for use with AWS DataSync, you can select the S3 storage class that DataSync uses to store objects. DataSync supports storing data directly into S3 Standard, S3 Intelligent-Tiering, S3 Standard-Infrequent Access (S3 Standard-IA), S3 One Zone-Infrequent Access (S3 One Zone-IA), Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive). More information on Amazon S3 storage classes can be found in the Amazon Simple Storage Service Developer Guide. /datasync/faqs/;How does AWS DataSync convert files and folders to or from objects in Amazon S3?;When copying files to Amazon S3, DataSync stores file system metadata, such as ownership, timestamps, and POSIX permissions, as user metadata on the corresponding S3 objects. When DataSync copies objects that contain this user metadata back to an NFS server, the file metadata is restored. Symbolic links and hard links are also restored when copying back from S3 to NFS. /datasync/faqs/;What object metadata is preserved when transferring objects between self-managed object storage or Azure Blob Storage (Preview) and Amazon S3?;When transferring objects between self-managed object storage or Azure Blob Storage (Preview) and Amazon S3, DataSync copies objects together with object metadata and tags.
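The migration, filtering, and scheduling workflow described in the preceding answers can also be scripted with the AWS SDK. Below is a minimal boto3 sketch that creates an NFS source location, an S3 destination location, and a scheduled, bandwidth-limited task; all hostnames, ARNs, and filter patterns are placeholders.

    import boto3

    datasync = boto3.client("datasync")

    # Source: an on-premises NFS export reached through an already-activated agent.
    src = datasync.create_location_nfs(
        ServerHostname="nas.example.internal",
        Subdirectory="/exports/projects",
        OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
    )

    # Destination: an S3 bucket, written through an IAM role and stored directly
    # in the storage class of your choice.
    dst = datasync.create_location_s3(
        S3BucketArn="arn:aws:s3:::example-migration-bucket",
        S3StorageClass="STANDARD",
        S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
    )

    # Verify transferred files, cap bandwidth at ~100 MB/s, skip temporary files,
    # and run every Sunday at 02:00 UTC.
    task = datasync.create_task(
        SourceLocationArn=src["LocationArn"],
        DestinationLocationArn=dst["LocationArn"],
        Name="project-share-migration",
        Options={"VerifyMode": "ONLY_FILES_TRANSFERRED", "BytesPerSecond": 100 * 1024 * 1024},
        Excludes=[{"FilterType": "SIMPLE_PATTERN", "Value": "*.tmp|/scratch"}],
        Schedule={"ScheduleExpression": "cron(0 2 ? * SUN *)"},
    )

    # Each execution is incremental, copying only changes since the previous run.
    execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
    print(datasync.describe_task_execution(TaskExecutionArn=execution["TaskExecutionArn"])["Status"])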
/datasync/faqs/;What object metadata is preserved when transferring objects between Amazon S3 buckets?;When transferring objects between Amazon S3 buckets, DataSync copies objects together with object metadata and tags. DataSync does not copy other object information such as object ACLs or prior object versions. /datasync/faqs/;Which Amazon S3 request and storage costs apply when using S3 storage classes with AWS DataSync?;To avoid minimum capacity charge per object, AWS DataSync automatically stores small objects in S3 Standard. To minimize data retrieval fees, you can configure DataSync to verify only files that were transferred by a given task. To avoid minimum storage duration charges, DataSync has controls for overwriting and deleting objects. Read about considerations when working with Amazon S3 storage classes in our documentation. /datasync/faqs/;Can I copy object data to and from Amazon S3 buckets on AWS Outposts?;When using DataSync with Amazon S3 on Outposts, you can only transfer data to and from Amazon S3 buckets in AWS Regions. You can learn more about supported sources and destinations for DataSync tasks in our documentation. /datasync/faqs/;How does AWS DataSync access my Amazon EFS file system?;AWS DataSync accesses your Amazon EFS file system using the NFS protocol. The DataSync service mounts your file system from within your VPC from Elastic Network Interfaces (ENIs) managed by the DataSync service. DataSync fully manages the creation, use, and deletion of these ENIs on your behalf. You can choose to mount your EFS file system using a mount target or an EFS Access Point. /datasync/faqs/;Can I use AWS DataSync with all Amazon EFS storage classes?;Yes. You can use AWS DataSync to copy files into Amazon EFS and configure EFS Lifecycle Management to migrate files that have not been accessed for a set period of time to the Infrequent Access (IA) storage class. /datasync/faqs/;How do I use AWS DataSync with Amazon EFS file system resource policies?;You can use both IAM identity policies and resource policies to control client access to Amazon EFS resources in a way that is scalable and optimized for cloud environments. When you create a DataSync location for your EFS file system, you can specify an IAM role that DataSync will assume when accessing EFS. You can then use EFS file system policies to configure access for the IAM role. Because DataSync mounts EFS file systems as the root user, your IAM policy must allow the following action: elasticfilesystem:ClientRootAccess. /datasync/faqs/;Can I use AWS DataSync to replicate my Amazon EFS file system to a different AWS Region?;Yes. In addition to the built-in replication provided by Amazon EFS, you can also use AWS DataSync to schedule periodic replication of your Amazon EFS file system to a second Amazon EFS file system within the same AWS account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent. /datasync/faqs/;How does AWS DataSync access my Amazon FSx for Windows File Server file system?;AWS DataSync accesses your Amazon FSx for Windows File Server file system using the SMB protocol, authenticating with the username and password you configure in the AWS Console or CLI. The DataSync service mounts your file system from within your VPC from Elastic Network Interfaces (ENIs) managed by the DataSync service. DataSync fully manages the creation, use, and deletion of these ENIs on your behalf. 
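As a companion to the EFS file system policy guidance above, here is a hedged boto3 sketch that grants a dedicated DataSync role the mount, write, and root-access actions on a file system. The role ARN and file system ID are placeholders.

    import boto3, json

    efs = boto3.client("efs")

    # Allow a dedicated DataSync role (placeholder ARN) to mount, write, and use
    # root access on the file system, since DataSync mounts EFS as the root user.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/datasync-efs-access"},
                "Action": [
                    "elasticfilesystem:ClientMount",
                    "elasticfilesystem:ClientWrite",
                    "elasticfilesystem:ClientRootAccess",
                ],
                "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0abc1234",
            }
        ],
    }
    efs.put_file_system_policy(FileSystemId="fs-0abc1234", Policy=json.dumps(policy))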
/datasync/faqs/;Can I use AWS DataSync to replicate my Amazon FSx for Windows File Server file system to a different AWS Region?;Yes. You can use AWS DataSync to schedule periodic replication of your Amazon FSx for Windows File Server file system to a second file system within the same AWS account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent. /datasync/faqs/;How does AWS DataSync access my Amazon FSx for Lustre file system?;When you create a DataSync task to copy to or from your FSx for Lustre file system, the DataSync service will create Elastic Network Interfaces (ENIs) in the same VPC and subnet where your file system is located. DataSync uses these ENIs to access your FSx for Lustre file system using the Lustre protocol as the root user. When you create a DataSync location resource for your FSx for Lustre file system, you can specify up to five security groups to apply to the ENIs and configure outbound access from the DataSync service. The security groups must be configured to allow outbound traffic on the network ports required by FSx for Lustre. The security groups on your FSx for Lustre file system should be configured to allow inbound access from the security groups you assigned to the DataSync location resource for your FSx for Lustre file system. /datasync/faqs/;Can I use AWS DataSync to migrate data from one FSx for Lustre file system to another?;Yes. You can use AWS DataSync to copy from your FSx for Lustre file system to a second file system within the same AWS account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent. /datasync/faqs/;Can I use AWS DataSync to replicate my Amazon FSx for Lustre file system to a different AWS Region?;Yes. You can use AWS DataSync to schedule periodic replication of your Amazon FSx for Lustre file system to a second file system within the same AWS account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent. /datasync/faqs/;Will DataSync copy the striping or layout settings when copying from one Amazon FSx for Lustre file system to another?;No. Files are written using the file layout and striping configuration on the destination’s file system. /datasync/faqs/;How does AWS DataSync access my Amazon FSx for OpenZFS file system?;When you create a DataSync task to copy to or from your FSx for OpenZFS file system, the DataSync service will create Elastic Network Interfaces (ENIs) in the same VPC and subnet where your file system is located. DataSync uses these ENIs to access your FSx for OpenZFS file system using the OpenZFS protocol as the root user. When you create a DataSync location resource for your FSx for OpenZFS file system, you can specify up to five security groups to apply to the ENIs and configure outbound access from the DataSync service. The security groups must be configured to allow outbound traffic on the network ports required by FSx for OpenZFS. The security groups on your FSx for OpenZFS file system should be configured to allow inbound access from the security groups you assigned to the DataSync location resource for your FSx for OpenZFS file system. /datasync/faqs/;Can I use AWS DataSync to migrate data from one FSx for OpenZFS file system to another?;Yes. You can use AWS DataSync to copy from your FSx for OpenZFS file system to a second file system within the same AWS account. 
This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent. /datasync/faqs/;Can I use AWS DataSync to replicate my Amazon FSx for OpenZFS file system to a different AWS Region?;Yes. You can use AWS DataSync to schedule periodic replication of your Amazon FSx for OpenZFS file system to a second file system within the same AWS account. This capability is available for both same-region and cross-region deployments, and does not require using a DataSync agent. /datasync/faqs/;How does AWS DataSync access my Amazon FSx for Netapp ONTAP file system?;When you create a task, DataSync creates Elastic Network Interfaces (ENIs) in the Preferred Subnet of the same VPC where your Amazon FSx for NetApp ONTAP file system is located. The Preferred Subnet is configured when you create your FSx for ONTAP file system, and DataSync uses the ENIs it creates in that subnet to access your FSx for ONTAP file system. When you create a DataSync Location resource for your FSx for ONTAP file system, you can specify up to 5 security groups to apply to the ENIs to configure outbound access from the DataSync service. You should configure the security groups on your FSx for ONTAP file system to allow inbound access from the security groups you assigned to the DataSync Location resource for your FSx for ONTAP file system . /datasync/faqs/;Which protocol versions can AWS DataSync use with Amazon FSx for NetApp ONTAP?;AWS DataSync supports using NFSv3, SMB 2.1, and SMB 3. DataSync does not currently support using NFSv4 or above with FSx for ONTAP. /datasync/faqs/;Does AWS DataSync preserve file system metadata when copying data to or from my Amazon FSx for NetApp ONTAP file system?;Yes, AWS DataSync copies file and folder timestamps and POSIX permissions, including user ID, group ID, and permissions, when using the NFS protocol. When using the SMB protocol, DataSync copies file and folder timestamps, ownership, and ACLs. You can learn more and see the complete list of copied metadata in our documentation. /datasync/faqs/;Which protocol should I use when migrating my data to Amazon FSx for NetApp ONTAP?;When migrating from Windows servers or NAS shares that serve users through the SMB protocol, use a DataSync SMB source location and the SMB protocol for your FSx for ONTAP location, ensuring that the security style for your FSx for ONTAP volume is configured for NTFS. When migrating from Unix or Linux servers or NAS shares that serve users through the NFS protocol, use a DataSync NFS source location and the NFS protocol for your FSx for ONTAP location, ensuring the security style for your FSx for ONTAP volume is configured for Unix. For multi-protocol migrations, you should review the best practices covered in the blog Enabling multiprotocol workloads with Amazon FSx for NetApp ONTAP, and use the SMB protocol to preserve file system metadata with the highest fidelity. For more information on configuring security styles for your FSx for ONTAP volumes, see the documentation on managing FSx for ONTAP volumes. /datasync/faqs/;Can I use AWS DataSync to access the same Amazon FSx for NetApp ONTAP file system using different protocols?;Yes, however you will need to create a separate DataSync location and task resource for each protocol (NFS or SMB). To avoid issues with overwriting data and data verification, we do not recommend using multiple DataSync tasks to copy to the same volume path at the same time (whether using the same protocol or different protocols). 
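To make the SMB-based approach concrete, the sketch below creates a DataSync location for an FSx for ONTAP storage virtual machine using boto3. The SVM ARN, credentials, security group, and volume path are placeholders, and the parameter shapes are based on the CreateLocationFsxOntap API.

    import boto3

    datasync = boto3.client("datasync")

    # Create an SMB location for an FSx for ONTAP SVM whose volume uses the NTFS
    # security style (all identifiers below are placeholders).
    location = datasync.create_location_fsx_ontap(
        StorageVirtualMachineArn=(
            "arn:aws:fsx:us-east-1:111122223333:storage-virtual-machine/"
            "fs-0abc1234/svm-0abc1234"
        ),
        Protocol={
            "SMB": {
                "User": "migration-user",
                "Password": "placeholder-password",
                "Domain": "CORP",
                "MountOptions": {"Version": "SMB3"},
            }
        },
        SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0abc1234"],
        Subdirectory="/vol1",
    )
    print(location["LocationArn"])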
/datasync/faqs/;Can I use AWS DataSync to transfer data to or from Amazon FSx for NetApp ONTAP iSCSI LUNs?;No, DataSync only supports copying file data to or from FSx for ONTAP volumes using NFS or SMB protocols. /datasync/faqs/;Can I use AWS DataSync to copy data from one Amazon FSx for NetApp ONTAP file system to another?;Yes. You can use AWS DataSync to copy from your FSx for ONTAP file system to a second file system within the same AWS account. This capability is available for both same-Region and cross-Region deployments, and does not require using a DataSync agent. /datasync/faqs/;Can I use AWS DataSync to replicate my Amazon FSx for NetApp ONTAP file system to a different file system in another AWS Region?;While DataSync can be used to replicate data between your file systems, we recommend using NetApp SnapMirror to replicate between your FSx for ONTAP file systems. SnapMirror enables you to achieve low RPOs, regardless of the number or size of files in your file system. /datasync/faqs/;How do I configure AWS DataSync to not copy snapshot directories?;DataSync will automatically exclude folders named “.snapshot”. You can also use exclude filters to avoid copying files and folders that match patterns you specify. /datasync/faqs/;How do I move data between AWS Snowcone and AWS storage services?;The DataSync agent is pre-installed on your Snowcone device as an AMI. To move data online to AWS, connect the AWS Snowcone device to the external network and use AWS OpsHub or the CLI to launch the DataSync agent AMI. Activate the agent using the AWS Management Console or CLI, and set up your online data move task between AWS Snowcone’s NFS store, and Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, or Amazon FSx for NetApp ONTAP. /datasync/faqs/;How fast can AWS DataSync copy my file system to AWS?;The rate at which AWS DataSync can copy a given dataset is a function of amount of data, I/O bandwidth achievable from the source and destination storage, network bandwidth available, and network conditions. For data transfer between on premises and AWS Storage services, a single DataSync task is capable of fully utilizing a 10 Gbps network link. /datasync/faqs/;Can I control the amount of network bandwidth that an AWS DataSync task uses?;Yes. You can control the amount of network bandwidth that AWS DataSync will use by configuring the built-in bandwidth throttle. You can increase or decrease this limit while your data transfer task is running. This enables you to minimize impact on other users or applications who rely on the same network connection. /datasync/faqs/;How can I monitor the performance of AWS DataSync?;AWS DataSync generates Amazon CloudWatch Metrics to provide granular visibility into the transfer process. Using these metrics, you can see the number of files and amount of data which has been copied, as well as file discovery and verification progress. You can see CloudWatch Graphs with these metrics directly in the DataSync Console. /datasync/faqs/;Will AWS DataSync affect the performance of my source file system?;Depending on the capacity of your on-premises file store, and the quantity and size of files to be transferred, AWS DataSync may affect the response time of other clients when accessing the same source data store, because the agent reads or writes data from that storage system. Configuring a bandwidth limit for a task will reduce this impact by limiting the I/O against your storage system. 
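A brief boto3 sketch of the bandwidth controls mentioned above: the first call caps future runs of a task, and the second adjusts the limit on the most recent execution. The task ARN and byte-per-second values are placeholders.

    import boto3

    datasync = boto3.client("datasync")
    task_arn = "arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0"  # placeholder

    # Cap future executions of this task at roughly 20 MB/s.
    datasync.update_task(TaskArn=task_arn, Options={"BytesPerSecond": 20 * 1024 * 1024})

    # The limit can also be raised or lowered while a transfer is running
    # (shown here on the most recent execution of the task).
    executions = datasync.list_task_executions(TaskArn=task_arn)["TaskExecutions"]
    if executions:
        datasync.update_task_execution(
            TaskExecutionArn=executions[-1]["TaskExecutionArn"],
            Options={"BytesPerSecond": 50 * 1024 * 1024},
        )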
/datasync/faqs/;When using AWS DataSync Discovery (Preview), how do I specify the credentials for my on-premises storage systems and how are they protected?;When you configure AWS DataSync Discovery (Preview) to discover your storage system, you provide the ID of an AWS Secrets Manager resource containing your system credentials. When DataSync Discovery runs a discovery job, it retrieves the password from the secret, re-encrypts it, and sends the encrypted password to the agent used for your job. The password is retained in memory on the agent only for the duration of the job and at no time is the password persisted outside of memory. /datasync/faqs/;Is my data encrypted while being transferred and stored?;Yes. All data transferred between the source and destination is encrypted via Transport Layer Security (TLS), which replaced Secure Sockets Layer (SSL). Data is never persisted in AWS DataSync itself. The service supports using default encryption for S3 buckets, Amazon EFS file system encryption of data at rest, and Amazon FSx encryption at rest and in transit. /datasync/faqs/;How does AWS DataSync access my NFS server or SMB file share?;AWS DataSync uses an agent that you deploy into your IT environment or into Amazon EC2 to access your files through the NFS or SMB protocol. This agent connects to DataSync service endpoints within AWS, and is securely managed from the AWS Management Console or CLI. /datasync/faqs/;How does AWS DataSync access HDFS on my Hadoop cluster?;AWS DataSync uses an agent that you deploy into your IT environment or into Amazon EC2 to access your Hadoop cluster. The DataSync agent acts as an HDFS client and communicates with the NameNodes and DataNodes in your clusters. When you start a task, DataSync queries the primary NameNode to determine the locations of files and folders on the cluster. DataSync then communicates with the DataNodes in the cluster to copy files and folders to, or from, HDFS. /datasync/faqs/;How does AWS DataSync access my self-managed or cloud object storage that supports the Amazon S3 protocol?;AWS DataSync uses an agent that you deploy into your data center or public cloud environment, or into Amazon EC2 to access your objects using the Amazon S3 API. This agent connects to DataSync service endpoints within AWS, and is securely managed from the AWS Management Console or CLI. /datasync/faqs/;How does AWS DataSync access my Azure Blob Storage containers (Preview)?;AWS DataSync uses an agent that you deploy into your Azure environment or into Amazon EC2 to access objects in your Azure Blob Storage containers. The agent connects to DataSync service endpoints within AWS, and is securely managed from the AWS Management Console or CLI. The agent authenticates to your Azure container using a SAS token that you specify when creating a DataSync Azure Blob location. /datasync/faqs/;Does AWS DataSync require setting up a VPN to connect to my destination storage?;No. When copying data to or from your premises, there is no need to setup a VPN/tunnel or allow inbound connections. Your AWS DataSync agent can be configured to route through a firewall using standard network ports. You can also deploy DataSync within your Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints. When using VPC endpoints, data transferred between the DataSync agent and AWS services does not need to traverse the public internet or need public IP addresses. 
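As an example of pointing an agent at S3 API-compatible storage, the following boto3 sketch registers a self-managed object store as a DataSync location. The endpoint, bucket, credentials, and agent ARN are placeholders.

    import boto3

    datasync = boto3.client("datasync")

    # Register a self-managed (or other cloud) object store that speaks the S3 API;
    # the agent reads and writes it on DataSync's behalf.
    location = datasync.create_location_object_storage(
        ServerHostname="objects.example.internal",
        ServerPort=443,
        ServerProtocol="HTTPS",
        BucketName="legacy-bucket",
        AccessKey="EXAMPLEACCESSKEY",
        SecretKey="EXAMPLESECRETKEY",
        AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"],
    )
    print(location["LocationArn"])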
/datasync/faqs/;How do my AWS DataSync agents securely connect to AWS?;Your AWS DataSync agent connects to DataSync service endpoints within your chosen AWS Region. You can choose to have the agent connect to public internet facing endpoints, Federal Information Processing Standards (FIPS) validated endpoints, or endpoints within one of your VPCs. Activating your agent securely associates it with your AWS account. To learn more, see Choose a Service Endpoint and Activate Your Agent. /datasync/faqs/;How is my AWS DataSync agent patched and updated?;Updates to the agent VM, including both the underlying operating system and the AWS DataSync software packages, are automatically applied by AWS once the agent is activated. Updates are applied non-disruptively when the agent is idle and not executing a data transfer task. /datasync/faqs/;Which compliance programs does AWS DataSync support?;"AWS has the longest-running compliance program in the cloud. AWS is committed to helping customers navigate their requirements. AWS DataSync has been assessed to meet global and industry security standards. DataSync complies with PCI DSS, ISO 9001, 27001, 27017, and 27018; SOC 1, 2, and 3; in addition to being HIPAA eligible. DataSync is also authorized in the AWS US East/West Regions under FedRAMP Moderate and in the AWS GovCloud (US) Regions under FedRamp High. That makes it easier for you to verify our security and meet your own obligations. For more information and resources, visit our compliance pages. You can also go to the Services in Scope by Compliance Program page to see a full list of services and certifications." /datasync/faqs/;Is AWS DataSync PCI compliant?;Yes. AWS DataSync is PCI-DSS compliant, which means you can use it to transfer payment information. You can download the PCI Compliance Package in AWS Artifact to learn more about how to achieve PCI Compliance on AWS. /datasync/faqs/;Is AWS DataSync HIPAA eligible?;Yes. AWS DataSync is HIPAA eligible, which means if you have a HIPAA BAA in place with AWS, you can use DataSync to transfer protected health information (PHI). /datasync/faqs/;Does AWS DataSync have FedRAMP JAB Moderate Provisional Authorization in the AWS US East/West?;Yes. AWS DataSync has received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) at the Federal Risk and Authorization Management Program (FedRAMP) Moderate baseline in the US East/West Regions. If you are a federal or commercial customer, you can use AWS DataSync in the AWS East/West Region's authorization boundary with data up to the moderate impact level. /datasync/faqs/;Does AWS DataSync have FedRAMP JAB High Provisional Authorization in the AWS GovCloud (US) Regions?;Yes. AWS DataSync has received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) at the Federal Risk and Authorization Management Program (FedRAMP) High baseline in the US GovCloud Region. If you are a federal or commercial customer, you can use AWS DataSync in the AWS GovCloud (US) Region’s authorization boundary with data up to the high impact level. /datasync/faqs/;How is AWS DataSync different from using command line tools such as rsync or the Amazon S3 command line interface?;AWS DataSync fully automates and accelerates moving large active datasets to AWS. It is natively integrated with Amazon S3, Amazon EFS, Amazon FSx, Amazon CloudWatch, and AWS CloudTrail, which provides seamless and secure access to your storage services, as well as detailed monitoring of the transfer. 
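To illustrate the endpoint choice described above, this boto3 sketch activates an agent against a PrivateLink interface VPC endpoint so that traffic stays inside your VPC. The activation key, endpoint ID, subnet, and security group ARNs are placeholders you would create beforehand.

    import boto3

    datasync = boto3.client("datasync")

    # The activation key is obtained from the agent VM after it is deployed.
    datasync.create_agent(
        ActivationKey="ABCDE-12345-FGHIJ-67890-KLMNO",
        AgentName="onprem-agent-vpc",
        VpcEndpointId="vpce-0123456789abcdef0",
        SubnetArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234"],
        SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0abc1234"],
    )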
/datasync/faqs/;To transfer objects between my buckets, when do I use AWS DataSync, when do I use S3 Replication, and when do I use S3 Batch Operations?;AWS provides multiple tools to copy objects between your buckets. /datasync/faqs/;When do I use AWS DataSync and when do I use AWS Snowball Edge?;AWS DataSync is ideal for online data transfers. You can use DataSync to migrate active data to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises storage capacity, or replicate data to AWS for business continuity. /datasync/faqs/;When do I use AWS DataSync and when do I use AWS Storage Gateway?;Use AWS DataSync to migrate existing data to Amazon S3, and subsequently use the File Gateway configuration of AWS Storage Gateway to retain access to the migrated data and for ongoing updates from your on-premises file-based applications. /datasync/faqs/;When do I use AWS DataSync, and when do I use Amazon S3 Transfer Acceleration?;If your applications are already integrated with the Amazon S3 API, and you want higher throughput for transferring large files to S3, you can use S3 Transfer Acceleration. If you want to transfer data from existing storage systems (e.g., Network Attached Storage), or from instruments that cannot be changed (e.g., DNA sequencers, video cameras), or if you want multiple destinations, you use AWS DataSync. DataSync also automates and simplifies the data transfer by providing additional functionality, such as built-in retry and network resiliency mechanisms, data integrity verification, and flexible configuration to suit your specific needs, including bandwidth throttling, etc. /datasync/faqs/;When do I use AWS DataSync and when do I use AWS Transfer Family?;If you currently use SFTP to exchange data with third parties, AWS Transfer Family provides a fully managed SFTP, FTPS, and FTP transfer directly into and out of Amazon S3, while reducing your operational burden. /disaster-recovery/faqs/;What is AWS Elastic Disaster Recovery?; With AWS DRS, you can use a unified process to test, recover, and fail back a wide range of applications, without requiring specialized skills. During normal operation, use the AWS DRS Console to monitor your replicating servers and view events and metrics. You can verify your disaster recovery readiness at any time by performing non-disruptive drills. /disaster-recovery/faqs/;Why use AWS as a disaster recovery site?; To start using AWS DRS, go here or sign in to the console and navigate to AWS Elastic Disaster Recovery in the Storage category. You can follow the steps provided in the console to set up AWS DRS and refer to the Quick start guide. /disaster-recovery/faqs/;What is a disaster recovery plan?; A disaster recovery drill is performed to test the section of your disaster recovery plan that details your response to a disaster. By following the exact steps in the disaster recovery plan and verifying that your disaster recovery site is functioning and is able to provide the required business continuity within the required RPOs and RTOs, you can confirm that this would also be the case if a real disaster occurs. Organizations determine the frequency of disaster recovery drills based on multiple factors, such as requirements by compliance certifications and the cost of each drill for the organization. /disaster-recovery/faqs/;What is a disaster recovery drill?;You can use AWS DRS to perform recovery and failback drills without disrupting the ongoing data replication of your source servers.
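A non-disruptive drill like the one described above can also be started from the SDK. The boto3 sketch below lists replicating source servers and launches drill instances for the first one; parameter names follow the Elastic Disaster Recovery API, and both the filter shape and the server ID handling are assumptions for illustration.

    import boto3

    drs = boto3.client("drs")

    # List replicating source servers, then launch recovery instances as a drill
    # (the source data replication is not interrupted by a drill launch).
    servers = drs.describe_source_servers(filters={})["items"]
    drs.start_recovery(
        isDrill=True,
        sourceServers=[{"sourceServerID": servers[0]["sourceServerID"]}],
    )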
/disaster-recovery/faqs/;What is RPO? What is RTO?;Recovery point objective (RPO) is the maximum acceptable amount of time since the last data recovery point. This objective determines what is considered an acceptable loss of data between the last recovery point and the interruption of service. AWS DRS facilitates RPOs of seconds. Recovery time objective (RTO) is the maximum acceptable delay between the interruption of an application and the restoration of its service. This recovery objective determines what is considered an acceptable time window when an application is unavailable. AWS DRS facilitates RTOs of minutes. /disaster-recovery/faqs/;What is the difference between backup and disaster recovery?;Disaster recovery is the process to quickly reestablish access to your applications, data, and IT resources after an outage. This might involve switching over to a redundant set of servers and storage systems until your source data center is functional again. You use disaster recovery to perform a failover to transfer applications to your disaster recovery site, so that your business can continue to function as normal even if the production site is unavailable. /disaster-recovery/faqs/;Can I avoid using the public internet to replicate my data to AWS?; AWS DRS performs block signature comparison before initiating data replication to verify that duplicate or empty blocks aren’t sent across the network. In addition, replicated data is encrypted and compressed before transit. /snowball/faqs/;What is AWS Snowball?;AWS Snowball is a service that provides secure, rugged devices, so you can bring AWS computing and storage capabilities to your edge environments, and transfer data into and out of AWS. Those rugged devices are commonly referred to as AWS Snowball or AWS Snowball Edge devices. Previously, AWS Snowball referred specifically to an early hardware version of these devices; however, that model has been replaced by updated hardware. Now the AWS Snowball service operates with Snowball Edge devices, which include on-board computing capabilities as well as storage. /snowball/faqs/;What is AWS Snowball Edge?;AWS Snowball Edge is an edge computing and data transfer device provided by the AWS Snowball service. It has on-board storage and compute power that provides select AWS services for use in edge locations. Snowball Edge comes in two options, Storage Optimized and Compute Optimized, to support local data processing and collection in disconnected environments such as ships, windmills, and remote factories. Learn more about its features here. /snowball/faqs/;What happened with the original 50 TB and 80 TB AWS Snowball devices?;The original Snowball devices were transitioned out of service, and Snowball Edge Storage Optimized devices are now the primary devices used for data transfer. /snowball/faqs/;Can I still order the original Snowball 50 TB and 80 TB devices?;No. For data transfer needs now, please select the Snowball Edge Storage Optimized devices. /snowball/faqs/;How does Snowball Edge work?;You start by requesting one or more Snowball Edge Compute Optimized or Snowball Edge Storage Optimized devices in the AWS Management Console based on how much data you need to transfer and the compute needed for local processing. The buckets, data, Amazon EC2 AMIs, and Lambda functions you select are automatically configured, encrypted, and preinstalled on your devices before they are shipped to you. Once a device arrives, you connect it to your local network and set the IP address either manually or automatically with DHCP. Then use the Snowball Edge client software, job manifest, and unlock code to verify the integrity of the Snowball Edge device or cluster, and unlock it for use.
The manifest and unlock code are uniquely generated and cryptographically bound to your account and the Snowball Edge shipped to you, and cannot be used with any other devices. Data copied to Snowball Edge is automatically encrypted and stored in the buckets you specify. /snowball/faqs/;What is the difference between Snowball Edge and Snowball?;AWS Snowball now refers to the service overall, and Snowball Edge devices are the current type of device that the service uses – sometimes referred to generically as AWS Snowball devices. Originally, early Snowball hardware designs were for data transport only. Snowball Edge has the additional capability to run computing locally, even when there is no network connection available. /snowball/faqs/;What is the difference between the Snowball Edge Storage Optimized and Snowball Edge Compute Optimized options?;Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It is also a good fit for running general purpose analysis such as IoT data aggregation and transformation. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases. We recommend using Snowball Edge Compute Optimized for use cases that require access to powerful compute and high-speed storage for data processing before transferring it into AWS. It features 104 vCPUs, 28 TB of NVMe SSD, and up to 100 Gb networking to run applications such as high-resolution video processing, advanced IoT data analytics, and real-time optimization of machine learning models in environments with limited connectivity. For more details, see the documentation. /snowball/faqs/;Who should use Snowball Edge?;Consider Snowball Edge if you need to run computing in rugged, austere, mobile, or disconnected (or intermittently connected) environments. Also consider it for large-scale data transfers and migrations when bandwidth is not available for use of a high-speed online transfer service, such as AWS DataSync. Snowball Edge Storage Optimized is the optimal data transfer choice if you need to securely and quickly transfer terabytes to petabytes of data to AWS. You can use Snowball Edge Storage Optimized if you have a large backlog of data to transfer or if you frequently collect data that needs to be transferred to AWS and your storage is in an area where high-bandwidth internet connections are not available or cost-prohibitive. You can also use Snowball Edge to run edge computing workloads, such as performing local analysis of data on a Snowball Edge cluster and writing it to the S3-compatible endpoint. You can integrate it into existing workflows by leveraging built-in capabilities such as the NFS file interface, and migrate files to the device while maintaining file metadata. /snowball/faqs/;Can I use Snowball Edge to migrate data from one AWS Region to another AWS Region?;No. Snowball Edge is intended to serve as a data transport solution for moving high volumes of data into and out of a designated AWS Region. For use cases that require data transfer between AWS Regions, we recommend using S3 Cross-Region Replication as an alternative. /snowball/faqs/;How much data can I transfer using Snowball Edge?;You can transfer virtually any amount of data with Snowball Edge, from a few terabytes to many petabytes.
You can transfer up to approximately 80 TB with a single Snowball Edge Storage Optimized device and can transfer even larger data sets with multiple devices, either in parallel, or sequentially. /snowball/faqs/;How long does it take to transfer my data?;Data transfer speed is affected by a number of factors including local network speed, file size, and the speed at which data can be read from your local servers. The end-to-end time to transfer up to 80 TB of data into AWS with Snowball Edge is approximately one week, including the usual shipping and handling time in AWS data centers. /snowball/faqs/;How long can I have a Snowball Edge for a specific job?;For security purposes, jobs using an AWS Snowball Edge device must be completed within 360 days of being prepared. If you need to keep one or more devices for longer than 360 days, contact AWS Support. Otherwise, after 360 days, the device becomes locked, can no longer be accessed, and must be returned. If the AWS Snowball Edge device becomes locked during an import job, we can still transfer the existing data on the device into Amazon S3. /snowball/faqs/;What are the specifications of the Snowball Edge devices?;Please see the AWS Snowball Features page for feature details and the Snowball Edge documentation page for a complete list of hardware specs, including network connections, thermal and power requirements, decibel output, and dimensions. /snowball/faqs/;What network interfaces does Snowball Edge support?;Snowball Edge Storage Optimized for data transfer devices have two 10G RJ45 ports, one 10/25G SFP28 port, and one 40G/100G QSFP28 port. Snowball Edge Storage Optimized for edge compute devices have one 10G RJ45 port, one 10/25G SFP28 port, and one 40G QSFP+ port. /snowball/faqs/;What is the Snowball Edge default shipping option? Can I choose expedited shipping?;As a default, Snowball Edge uses two-day shipping by UPS. You can choose expedited shipping if your jobs are time-sensitive. /snowball/faqs/;Does Snowball Edge support EC2 instances?;Yes. The Snowball Edge Storage Optimized option supports SBE1 instance. The Snowball Edge Compute Optimized option features more powerful and larger instances, SBE-C for compute-intensive applications. The Snowball Edge Compute Optimized device with an optional GPU, can use SBE-G instances to accelerate your application’s performance. The support for EC2-compatible instances on Snowball Edge devices enables you to build and test on EC2, then operate your AMI on a Snowball Edge to address workloads that sit in remote or disconnected locations. /snowball/faqs/;Does Snowball Edge support Lambda functions?;Yes, Lambda functions are hosted and can be executed on Snowball Edge in response to data storage events. /snowball/faqs/;How do Lambda functions work on Snowball Edge?;Lambda functions are hosted locally on Snowball Edge. As data is written to your appliance, Lambda functions can be triggered to act on that data. In the same way as they act in AWS, Lambda functions can call other services, update objects, or make other changes. /snowball/faqs/;How should I choose between Amazon EC2 compute instances and AWS Lambda functions for my compute needs?;AWS Lambda is a good choice for new applications that want to take advantage of the serverless computing model in AWS and want to run the same applications on the device. 
Amazon EC2 instances are a good choice when you have existing applications that you would like to run on the device for data pre-processing or when refactoring your existing applications to the serverless model isn’t desirable. /snowball/faqs/;How can I use the GPU with AWS Snowball Edge’s SBE instances?;The GPU option on AWS Snowball Edge Compute Optimized comes with SBE-G instances that can take advantage of the onboard GPU for accelerating the application performance. After receiving the device, select the option to use the SBE-G instance in order to use the on-board GPU with your application. /snowball/faqs/;When should I use the EC2 compatible instances on AWS Snowball Edge?;You should use the EC2 compatible instances when you have an application running on the edge that is managed and deployed as a virtual machine (an Amazon Machine Image, or AMI). /snowball/faqs/;Can multiple Snowball Edge devices be clustered together?;Yes, multiple Snowball Edge Storage Optimized or Compute Optimized devices can be clustered into a larger durable storage pool with a single S3-compatible endpoint. For example, if you have 6 Storage Optimized devices, they can be configured to be a single cluster that exposes a single S3 compatible endpoint with 400 TB of storage. Alternatively, they can be used individually without clustering, each hosting a separate S3 compatible endpoint with 80 TB of usable storage. A durable cluster cannot be created using a mix of Storage Optimized and Compute Optimized devices. /snowball/faqs/;When would I consider clustering Snowball Edge devices together?;With a Snowball Edge cluster, you increase local storage durability and scalability. Clustering Snowballs creates durable, scalable, S3 compatible local storage. Data can be shipped to AWS by swapping Snowballs in and out of the cluster seamlessly. Snowball Edge clusters allow you to scale your local storage capacity up or down depending on your requirements by adding or removing appliances, eliminating the need to buy expensive hardware. /snowball/faqs/;How do I get started with local computing on Snowball Edge?;You can enable and provision Amazon EC2 AMIs or Lambda functions during AWS Snowball Edge job creation using either the AWS Console, AWS Snowball SDK, or AWS CLI. /snowball/faqs/;Can I use existing Amazon EC2 APIs to start, stop, and manage instances on the device?;Yes. AWS Snowball Edge provides an Amazon EC2-compatible endpoint that can be used to start, stop, and manage your instances on AWS Snowball Edge. This endpoint is compatible with the AWS CLI and AWS SDK. /snowball/faqs/;What Amazon EC2 features does AWS Snowball Edge support?;The Amazon EC2 endpoint running on AWS Snowball Edge, provides a set of EC2 features that customers would find most useful for edge computing scenarios. This includes APIs to run, terminate, and describe your installed AMIs and running instances. Snowball Edge also supports block storage for EC2 images, which is managed using a set of the Amazon EBS API commands. /snowball/faqs/;Can I use an existing Amazon EBS volume with AWS Snowball Edge?;No. At this time, you cannot use an existing EBS volume with AWS Snowball Edge, however, Snowball Edge does offer block storage volumes, which are managed with an EBS-compatible API. /snowball/faqs/;What steps do I need to take to run Amazon EC2 instances on AWS Snowball Edge?;To run instances, provide the AMI IDs during job creation and the images come pre-installed when the device is shipped to you. 
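Because the device exposes an EC2-compatible endpoint, the standard EC2 client can be pointed at it to launch and manage instances from the AMIs pre-loaded at job creation. In the boto3 sketch below, the device address, port, credentials, AMI ID, and instance type are all placeholders and assumptions for illustration.

    import boto3

    # Point the standard EC2 client at the Snowball Edge device's EC2-compatible
    # endpoint (address, port, and credentials are placeholders for values shown
    # by the Snowball Edge client after unlocking the device).
    ec2 = boto3.client(
        "ec2",
        endpoint_url="http://192.0.2.10:8008",
        region_name="snow",
        aws_access_key_id="DEVICE_ACCESS_KEY",
        aws_secret_access_key="DEVICE_SECRET_KEY",
    )

    # Launch an instance from an AMI that was pre-installed onto the device.
    instance = ec2.run_instances(ImageId="s.ami-0abc1234", InstanceType="sbe-c.medium",
                                 MinCount=1, MaxCount=1)["Instances"][0]

    # The usual describe/terminate calls work against the same endpoint.
    print(ec2.describe_instances()["Reservations"])
    ec2.terminate_instances(InstanceIds=[instance["InstanceId"]])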
/snowball/faqs/;Can I convert my images from other hypervisors to AMIs and vice versa?;Yes. You can import or export your KVM/VMware images to AMIs using the EC2 VM Import/Export service. Refer to the VM Import/Export documentation for more details. /snowball/faqs/;What operating systems can I run using this feature?;Amazon EC2 on Snowball Edge provides default support for a variety of free-to-use operating systems (OS) like Ubuntu and CentOS. They will appear as AMIs that can be loaded onto Snowball Edge without any modification. To run other OSes that require licenses on Snowball Edge EC2 instances, you must provide your own license, and then export the AMI using Amazon EC2 VM Import/Export (VMIE). /snowball/faqs/;What kind of workloads can I run on SBE1 and SBE-C instances?;SBE1 instances feature up to 40 vCPUs, ephemeral instance storage for root volumes, and 32 GB of memory, and are designed to support edge applications, such as IoT sensor data collection, image compression, data collection, and machine learning. SBE1 instances can also use Snowball Edge SATA SSD and HDD block storage for persistent volumes. /snowball/faqs/;How do I ensure that my AMIs are compatible to run on EC2-compatible instances on AWS Snowball Edge?;AMIs that run on the C5 instance type in AWS are compatible with SBE1 instances available on AWS Snowball Edge Storage Optimized in the vast majority of cases. We recommend that you first test your applications in the C5 instance type to ensure they can be run on the Snowball Edge Storage Optimized device. /snowball/faqs/;Can I install more than one instance on a device?;Yes. You can run multiple instances on a device as long as the total resources used across all instances are within the limits for your Snowball Edge device. /snowball/faqs/;How do I use SBE1, SBE-C, and SBE-G instances on an AWS Snowball Edge cluster?;All the EC2 compatible instances can run on each node of an AWS Snowball Edge cluster. When you provision an AWS Snowball Edge cluster using the AWS Console, you can provide details for instances to run on each node of the cluster, for example, the AMI you want to run and the instance type and size you want to use. Nodes in a cluster can use the same or different AMIs. /snowball/faqs/;How do I launch an instance manually?;Each AMI has an AMI ID associated with it. You can use the run-instances command to start the instance by providing this ID. Running this command returns an instance-id value that can be used to manage this instance. /snowball/faqs/;How do I manage the instances on AWS Snowball Edge?;You can check the status of all the images that are installed on the device using the describe-images command. To see the status of the instances running on the device, you can use the describe-instance-status command. /snowball/faqs/;How do I terminate an existing instance?;You can terminate a running instance using the terminate-instances command. /snowball/faqs/;How are my AMIs protected while in transit?;Snowball Edge encrypts all data, including AMIs, with 256-bit encryption. You manage your encryption keys by using the AWS Key Management Service (KMS). Your keys are never stored on the device and you need both the keys and an unlock code to use the device on-premises. In addition to using a tamper-evident enclosure, Snowball Edge uses industry-standard Trusted Platform Modules (TPM) designed to detect any unauthorized modifications to the hardware, firmware, or software. 
AWS visually and cryptographically inspects every device for any signs of tampering. /snowball/faqs/;How is software licensing handled with compute instances on AWS Snowball Edge?;You are responsible for licensing any software that you run on your instance. Specifically, for Windows operating systems, you can bring your existing license to the running instances on the device by installing the licensed OS in your AMI in EC2, and then using VM Import/Export to load the AMI to your Snowball Edge device. /snowball/faqs/;What is block storage on AWS Snowball Edge?;You can run block storage on both Snowball Edge Compute Optimized and Snowball Edge Storage Optimized devices. You attach block storage volumes to EC2 instances using a subset of Amazon EBS capabilities that enable you to configure and manage volumes for EC2 instances on Snowball Edge devices. /snowball/faqs/;What are the types of block storage volumes I can use, and how much capacity can each volume type use?;Snowball Edge block storage provides performance-optimized SSD volumes (sbp1) and capacity-optimized HDD volumes (sbg1) to meet IOPS and throughput requirements for a wide variety of data processing and data collection applications. Block storage volumes have a maximum size of 10 TB per volume, and you can attach up to 10 volumes to any EC2 instance on Snowball Edge. /snowball/faqs/;How do I get started with block storage on Snowball Edge?;By default, all Snowball Edge devices are now shipped with the block storage feature. Once you unlock the device, you can use the AWS CLI or SDK to create volumes and attach them to an Amazon EC2 instance. You can attach multiple volumes to each EC2 instance; however, a single volume can only be attached to a single instance at any time. /snowball/faqs/;How is Snowball Edge block storage different from Amazon EBS?;Snowball Edge block storage has different performance, availability, and durability characteristics than Amazon EBS volumes. Also, it provides only a subset of Amazon EBS capabilities. For example, snapshot functionality is not currently supported on Snowball Edge block storage. Please see Snowball Edge’s technical documentation for a complete list of supported APIs. /snowball/faqs/;Which Amazon EBS APIs does SBE block storage support?;To interact with block storage on SBE, you can use the create, delete, attach, detach, and describe volume EBS APIs. Please see Snowball Edge’s technical documentation for a complete list of supported APIs. /snowball/faqs/;Which Amazon Machine Images can I use on Snowball Edge to utilize block storage?;Any Amazon Machine Image (AMI) running on Snowball Edge can access up to 10 block storage volumes at once. Generic AMIs provided by AWS and custom AMIs can access any block storage volume. There are no special requirements to make the block storage volumes work. However, certain operating systems perform better with specific drivers. Please see Snowball Edge’s technical documentation for details. /snowball/faqs/;Can the volumes on one device be accessible to Amazon EC2 instances running on another device?;Volumes created on a single Snowball Edge are only accessible to the EC2 instances running on that device. /snowball/faqs/;How can we monitor storage capacity used by various volumes?;You can use the describe-device command from the Snowball client to monitor how much block storage is being used on the device. When you create a volume, all of the storage capacity requested is allocated to it based on the available capacity. 
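Since volumes are created and attached through the EBS-compatible API subset with the AWS CLI or SDK, the workflow can be sketched in boto3. The endpoint, credentials, instance ID, device name, and sizes below are placeholders (assumptions); the sbg1 and sbp1 volume types are the ones named in the FAQ above.

```python
# Sketch: creating and attaching a Snowball Edge block storage volume using the
# EBS-compatible API subset. All identifiers and the endpoint are placeholders.
import boto3

ec2 = boto3.client("ec2",
                   endpoint_url="https://192.0.2.10:8243",  # device endpoint (assumption)
                   region_name="snow",                       # placeholder region label
                   aws_access_key_id="EXAMPLE_KEY",
                   aws_secret_access_key="EXAMPLE_SECRET")

# Create a 1 TiB capacity-optimized HDD volume (sbg1); use "sbp1" for the
# performance-optimized SSD volume type.
volume = ec2.create_volume(VolumeType="sbg1", Size=1024,
                           AvailabilityZone="snow")          # placeholder value (assumption)

# Attach it to a running EC2-compatible instance on the device.
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",          # placeholder instance ID
                  Device="/dev/sdb")

# List volumes to see what has been allocated on the device.
print(ec2.describe_volumes()["Volumes"])
```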
/snowball/faqs/;Can I transfer data stored on block storage to Amazon EBS volumes in the cloud?;Not directly, no. Data on block storage volumes on Snowball Edge is deleted when the device returns to AWS. If you wish to preserve data in block storage volumes, you must copy the data into the Amazon S3 compatible storage on Snowball Edge. This data will then be copied into your S3 bucket when the device returns to AWS. /snowball/faqs/;Can I operate object and block storage on the same device?;Yes, you can use the Amazon S3-compatible object storage and the Snowball Edge block storage on the same device. The object storage and block storage used for sbg1 volumes share the same HDD capacity. The underlying storage features work together so that an increase in I/O demand for block or object storage does not impede the availability and performance of the other. /snowball/faqs/;Do I need to configure volumes or any storage resources when ordering my Snowball Edge from the AWS Console?;No, you add volumes to your Amazon EC2 instances after you have received the device. /snowball/faqs/;Do I need to allocate storage resources on the device between block and object storage?;No. You can dynamically add or remove volumes and objects based on your application needs. /snowball/faqs/;Are the volumes encrypted by default?;Snowball Edge is designed with security in mind for the most sensitive data. All data written into block volumes is encrypted by keys provided by you through AWS Key Management Service (KMS). All volumes are encrypted using the same keys selected during Snowball Edge job creation. The keys are not permanently stored on the device and are erased after loss of power. /snowball/faqs/;What are best practices to achieve optimum performance with Snowball Edge block storage?;Additional volumes attached using block storage offer up to 10 times higher performance compared to the root volumes. We recommend that you use relatively small root volumes and create additional block storage volumes for storing data for your Amazon EC2 applications. Please see Snowball Edge’s technical documentation for performance best practices and recommended drivers. /snowball/faqs/;In what Regions is Snowball Edge available?;Check the Regional Service Availability pages for the latest information. /snowball/faqs/;Can a Snowball Edge be shipped to an alternate AWS Region?;No. Snowball Edge devices are designed to be requested and used within a single AWS Region. The device may not be requested from one Region and returned to another. Snowball Edge devices used for imports or exports from an AWS Region in the EU may be used in any of the other EU countries. Check the Regional Service Availability pages for the latest information. /snowball/faqs/;Does Snowball Edge encrypt my data?;Snowball Edge encrypts all data with 256-bit encryption. You manage your encryption keys by using the AWS Key Management Service (AWS KMS). Your keys are never stored on the device and all memory is erased when it is disconnected and returned to AWS. /snowball/faqs/;How does Snowball Edge physically secure my data?;In addition to using a tamper-resistant enclosure, Snowball Edge uses industry-standard Trusted Platform Modules (TPM) designed to detect any unauthorized modifications to the hardware, firmware, or software. AWS visually and cryptographically inspects every device for any signs of tampering and to verify that no changes were detected by the TPM. 
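As noted above, only data written into the device's S3-compatible storage is imported into your Amazon S3 bucket when the device is returned; data left on block storage volumes is deleted. A minimal boto3 sketch of copying a file from an attached volume into the S3-compatible storage follows; the endpoint URL, bucket name, and file path are placeholders (assumptions), and the bucket is the one selected at job creation.

```python
# Sketch: copying a file into the Snowball Edge's S3-compatible storage so it is
# imported into Amazon S3 when the device is returned. All values are placeholders.
import boto3

s3 = boto3.client("s3",
                  endpoint_url="https://192.0.2.10:8443",   # S3-compatible endpoint (assumption)
                  aws_access_key_id="EXAMPLE_KEY",
                  aws_secret_access_key="EXAMPLE_SECRET")

# Copy a file that currently lives on a block storage volume mounted locally.
s3.upload_file("/mnt/data/results.parquet", "my-import-bucket", "results/results.parquet")

# Verify the object landed on the device before relying on it.
print(s3.list_objects_v2(Bucket="my-import-bucket", Prefix="results/"))
```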
/snowball/faqs/;How does Snowball Edge help digitally secure my data?;Snowball Edge is designed with security in mind for the most sensitive data. All data is encrypted by keys provided by you through AWS Key Management Service (KMS). The keys are not permanently stored on the device and are erased after loss of power. Applications and Lambda functions run in a physically isolated environment and do not have access to storage. Lastly, after your data has been transferred to AWS, your data is erased from the device using standards defined by the National Institute of Standards and Technology. Snowball Edge devices are hardened against attack and all configuration files are encrypted and signed with keys that are never present on the device. /snowball/faqs/;Is there a way to easily track my data transfer jobs?;Snowball Edge uses an innovative E Ink shipping label designed to ensure the device is automatically sent to the correct AWS facility. When you have completed your data transfer job, you can track it by using Amazon SNS-generated text messages or emails, and the console. /snowball/faqs/;How do I transfer my data to the Snowball Edge?;After you have connected and activated the Snowball Edge, you can transfer data from local sources to the device through the S3-compatible endpoint or the NFS file interface, both available on the device. You can also use the Snowball client to copy data. To learn more, please refer to the Snowball Edge documentation. /snowball/faqs/;When can I delete the data on my disk(s) after I’ve copied the data onto Snowball Edge and shipped the Snowball Edge back to AWS?;Wait to confirm that the Snowball Edge has been received by AWS and your data has successfully been transferred into appropriate S3 buckets prior to deleting any data on your disk(s). While AWS verifies the integrity of files copied to Snowball Edge during the S3 transfer, it is your responsibility to verify the integrity of data before deleting it from your disk(s). AWS is not liable for any lost or corrupted data during copy or transit. /snowball/faqs/;What do I do when the data has been transferred to the Snowball Edge?;When the data transfer job is complete, the E Ink display on the Snowball Edge automatically updates the return shipping label to indicate the correct AWS facility to ship to. Just drop off the Snowball Edge at the nearest UPS and you're all set. You can track the status of your transfer job through Amazon SNS-generated text messages or emails, or directly in the AWS Management Console. /snowball/faqs/;Is it possible to stop data ingestion from an AWS Snowball device once it has been returned to AWS?;Yes. You can stop the data ingestion to an Amazon Simple Storage Service (Amazon S3) bucket by cancelling the job in the AWS Snow Management Console or by contacting AWS support. /snowball/faqs/;What does it cost to export my data?;In addition to the Export job fees detailed on our pricing page, you will also be charged all fees incurred to retrieve your data from Amazon S3. /snowball/faqs/;How quickly can I access my exported data?;We typically start exporting your data within 24 hours of receiving your request, and exporting data can take as long as a week. Once the job is complete and the device is ready, we ship it to you using the shipping options you selected when you created the job. /snowball/faqs/;Do I get a checksum or any kind of receipt on what was loaded into Amazon S3?;Yes. AWS saves an import log report to your bucket. 
This report contains per-file information, including the date and time of the upload, the Amazon S3 key, MD5 checksum, and number of bytes. For more details, see the documentation. /snowball/faqs/;I created an export job of my Amazon S3 bucket and it contains keys which are in an Amazon S3 Glacier storage class. Will the keys be exported?;The Snowball export job from Amazon S3 workflow does not have access to the objects stored in the Amazon S3 Glacier or Amazon S3 Glacier Deep Archive storage classes. You must first restore these objects from the Glacier or Glacier Deep Archive storage classes for a minimum of 10 days, or until the Snow export job completes, to ensure that these restored objects are successfully copied to the Snowball device. /snowball/faqs/;Why should I use Large Data Migration Manager?;Large Data Migration Manager helps you plan and monitor large data migrations, from a minimum of 500 TB up to petabytes of data. First, Large Data Migration Manager enables you to create a plan for your migration projects that use multiple AWS Snow Family devices to complete your petabyte-scale data migration or data movement from the rugged, mobile edge. Creating a plan helps you and your partners onboard to Snow and align on project goals such as data size to be migrated and project duration. Once a plan is in place, Large Data Migration Manager provides a central location in the AWS Snow Family management console for you to stay updated with the progress of all your Snow jobs (number of outstanding jobs, current data ingested, etc.), and view estimated schedules for placing the next job orders. Finally, you can control the project plan as you monitor the migration and can extend or end the migration when you deem appropriate. /snowball/faqs/;How do I get started with using Large Data Migration Manager?;You start by creating a data migration or data movement project plan in the AWS Management Console. To create a plan, you are prompted for your import job specifics, which include the plan name, service access roles, and notification preference. Once a plan is created, you need to create a site where the Snow devices will be shipped. Site information includes the name and shipping address for each site, data size, number of concurrent Snow jobs, Snow job type, Snow device, fill rate (as of the last monitored data), and project start and end dates. After you create your site, you can review your automatically created Snow job ordering schedule, which helps you know when to order your Snow jobs. You can either clone existing jobs or add a job that was already created to the site. /snowball/faqs/;What is AWS OpsHub for Snow Family?;AWS OpsHub is an application that you can download from the Snowball resources page. It offers a graphical user interface for managing the AWS Snow Family devices. AWS OpsHub makes it easy to set up and manage AWS Snowball devices, enabling you to rapidly deploy edge computing workloads and simplify data migration to the cloud. With just a few clicks in AWS OpsHub, you can unlock and configure devices, drag-and-drop data to devices, launch and manage EC2 instances on devices, or monitor device metrics. AWS OpsHub is available globally at no extra charge. /snowball/faqs/;How does AWS OpsHub for Snow Family work?;AWS OpsHub is an application that you can download and install on any Windows or Mac client machine, such as a laptop. Once you have installed AWS OpsHub and have your AWS Snow Family device on site, open AWS OpsHub and unlock the device. 
You will then be presented with a dashboard showing your device and its system metrics. You can then begin deploying your edge applications or migrating your data to the device with just a few clicks. /snowball/faqs/;Can I use AWS OpsHub with a Snow Family device that I ordered before AWS OpsHub launched?;Yes. However, the task automation features are available for only Snow Family devices ordered after AWS OpsHub launched on April 16, 2020. All other functionality will be available for all devices, including those ordered before AWS OpsHub launched. /snowball/faqs/;When do I use AWS OpsHub compared to the AWS Management Console?;You use AWS OpsHub to manage and operate your AWS Snow Family devices and the AWS services that run on them. AWS OpsHub is an application that runs on a local client machine, such as a laptop, and can operate in disconnected or connected environments. In contrast, you use the AWS Management Console to manage and operate the AWS services running in the cloud. The AWS Management Console is a web-based application that operates when you have a connection to the internet. /snowball/faqs/;How do I keep my AWS OpsHub software up to date?;AWS OpsHub will automatically check for AWS OpsHub software updates when the client machine that AWS OpsHub is running on is connected to the internet. When there is a software update, you will be notified on the application and will be given the option to download and update the latest software. Additionally, you can visit the Snowball resources page and check for the latest version of AWS OpsHub. /snowball/faqs/;Does AWS OpsHub validate and encrypt the data I transfer to the AWS Snow Family devices?;Yes. When you copy data to AWS Snow Family devices using AWS OpsHub, checksums are used to ensure that the data you copy to the device is the same as the original. Also, all data written to AWS Snow Family devices is encrypted by default. /snowball/faqs/;How much does it cost to use Snowball Edge?;Please see our AWS Snowball Edge pricing page for pricing details. /snowball/faqs/;How am I charged for Amazon S3 usage?;Snowball Edge transfers data on your behalf into AWS services such as Amazon S3. Standard AWS service charges apply. Data transferred IN to AWS does not incur any data transfer fees, and Standard Amazon S3 pricing fees apply for data stored in S3. /snowball/faqs/;Is there any additional pricing for Snowball Edge block storage?;No, there is no additional charge for this feature. /snowball/faqs/;Can I purchase a Snowball Edge device?;Devices are only available on a per-job pay-as-you-go basis, and are not available for purchase. /snowball/faqs/;Can I export virtual tapes stored on AWS to my on-premises data center using Snowball?;No, you cannot export your virtual tapes stored on AWS to your data center using Snowball. You can use a Snowball Edge Storage Optimized device with Tape Gateway to import data into AWS. To retrieve virtual tapes stored on AWS, you can use a Tape Gateway that runs on premises as a virtual machine or hardware appliance, or on an Amazon EC2 instance. /snowball/faqs/;Where can I learn more about Tape Gateway in AWS Storage Gateway?;Tape Gateway is a tape storage interface in Storage Gateway. Learn more about Tape Gateway. /snowball/faqs/;Does the Snowball Edge support API access?;Yes. The Snowball Job Management API provides programmatic access to the job creation and management features of a Snowball or Snowball Edge. 
It is a simple, standards-based REST web service interface designed to work with any Internet development environment. /snowball/faqs/;What can I do with the Snowball Job Management API?;The AWS Snowball Job Management API allows partners and customers to build custom integrations to manage the process of requesting Snowballs and communicating job status. The API provides a simple web service interface that you can use to create, list, update, and cancel jobs from anywhere on the web. Using this web service, developers can easily build applications that manage Snowball jobs. To learn more, please refer to the AWS Snowball documentation. /snowball/faqs/;What is the S3 Adapter?;The S3 SDK Adapter for Snowball provides an S3-compatible interface for reading and writing data on a Snowball or Snowball Edge. /snowball/faqs/;What can I do with the S3 Adapter?;The S3 Adapter allows customers’ applications to write data from file and non-file sources to S3 buckets on the Snowball or Snowball Edge device. It also includes interfaces to copy data with the same encryption as is available through the Snowball client. To learn more, please refer to the AWS Snowball documentation. /snowball/faqs/;Why would I use the S3 Adapter rather than the Snowball Client?;The Snowball Client is a turnkey tool that makes it easier to copy file-based data to Snowball. Customers who prefer a tighter integration can use the S3 Adapter to easily extend their existing applications and workflows to seamlessly integrate with Snowball. /snowball/faqs/;How is my data secured when I use the S3 Adapter?;The S3 Adapter writes data using the same advanced encryption mechanism that the Snowball Client provides. /snowball/faqs/;Which programming languages does the Snowball S3 Adapter support?;The S3 Adapter communicates over REST, which is language-agnostic. /snowmobile/faqs/;What is AWS Snowmobile?;AWS Snowmobile is the first exabyte-scale data migration service that allows you to move very large datasets from on-premises to AWS. Each Snowmobile is a secured data truck with up to 100 PB of storage capacity that can be dispatched to your site and connected directly to your network backbone to perform high-speed data migration. You can quickly migrate an exabyte of data with ten Snowmobiles in parallel from a single location or multiple data centers. Snowmobile is offered by AWS as a managed service. /snowmobile/faqs/;How does Snowmobile work?;After you have placed your inquiry for a Snowmobile, AWS personnel will contact you to determine requirements for deploying a Snowmobile and schedule the job, and will drive the required Snowmobile equipment to your site. Once on site, they will connect it to your local network so that you can use your high-speed local connection to quickly transfer data from your local storage appliances or servers to the Snowmobile. After the data transfer is complete, the Snowmobile will be returned to your designated AWS Region, where your data will be uploaded into the AWS storage services you have selected, such as S3 or Glacier. Finally, AWS will work with you to validate that your data has been successfully uploaded. /snowmobile/faqs/;Who should use a Snowmobile?;Snowmobile enables customers to quickly migrate exabyte-scale datasets from on-premises to AWS in a more secure, fast, and low-cost manner. 
Use cases include migrating hundreds of petabytes of data (such as video libraries, genomic sequences, seismic data, satellite images, and financial records) to run big data analytics on AWS, or shutting down legacy data centers and moving exabytes of local data to AWS. Before Snowmobile, migrating data at such scale would typically take years, which was too slow for many customers. With Snowmobile, you can now request multiple data trucks, each with up to 100 PB of capacity, to be dispatched on site, connected to your local high-speed network backbone, and used to transfer your exabyte-scale datasets to AWS in as little as a few weeks, plus transport time. /snowmobile/faqs/;What are the specifications of a Snowmobile?;Each Snowmobile comes with up to 100 PB of storage capacity housed in a 45-foot-long High Cube shipping container that measures 8 feet wide and 9.6 feet tall and has a curb weight of approximately 68,000 pounds. The ruggedized shipping container is tamper-resistant, water-resistant, temperature-controlled, and GPS-tracked. /snowmobile/faqs/;How much data can I transfer to Snowmobile?;Each Snowmobile has a total capacity of up to 100 petabytes, and multiple Snowmobiles can be used in parallel to transfer exabytes of data. /snowmobile/faqs/;Are there site requirements to use a Snowmobile?;The Snowmobile needs physical access to your data center to allow for network connectivity. It comes with a removable connector rack with up to two kilometers of networking cable that can directly connect to the network backbone in your data center. The Snowmobile can be parked in a covered area at your data center, or in an uncovered area that is adjacent to your data center, and close enough to run the networking cable. The parking area needs to hold a standard 45-foot High Cube trailer with a minimum of 6’-0” (1.83m) of peripheral clearance. Snowmobile can operate at ambient temperatures up to 85°F (29.4°C) before an auxiliary chiller unit is required. AWS can provide the auxiliary chiller if needed based on the site survey findings. /snowmobile/faqs/;How is a Snowmobile powered?;A fully powered Snowmobile requires ~350 kW. Snowmobile can be connected to available utility power sources at your location if sufficient capacity is available. Otherwise, AWS can dispatch a separate generator set along with the Snowmobile if your site permits such generator use. This generator set takes up a similar amount of space as the Snowmobile: parking for a vehicle approximately the same size as a 45-foot container trailer. /snowmobile/faqs/;What is a Snowmobile job?;A Snowmobile job encapsulates the end-to-end data migration process using a Snowmobile. There are five main steps: /snowmobile/faqs/;How do I connect my data center to Snowmobile?;Each Snowmobile comes with a removable high-speed connector rack on wheels with two kilometers of ruggedized networking cable. The connector rack can be rolled to a location inside your data center and connected directly to your network backbone. This way, the Snowmobile will operate as a network storage target inside your network for you to perform high-speed data transfer. /snowmobile/faqs/;How do I copy my data to a Snowmobile?;Once the Snowmobile is connected to your data center, it will appear as a network storage target. You can copy data from local storage devices to the Snowmobile using the same tools and in the same manner as data copied to any network-attached storage device with an NFS interface. /snowmobile/faqs/;Can I connect to Snowmobile via an NFS endpoint?;Yes. 
The Snowmobile will appear as a standard NFS mount on your network that you can connect to via your existing tools and applications. /snowmobile/faqs/;How do I get started with Snowmobile?;Please contact your sales team to request a Snowmobile. /snowmobile/faqs/;How long does it take to transfer my data to a Snowmobile?;The Snowmobile is designed to transfer data at a rate up to 1 Tb/s, which means you could fill a 100PB Snowmobile in less than 10 days. The actual transfer speed may vary depending on the available local network capacity at your site and the speed of your on-premises storage devices. AWS will provide a way for you to test your local network throughput and the copy speed from your data sources to properly size the Snowmobile job, before dispatching the Snowmobile. /snowmobile/faqs/;What type of connections does Snowmobile provide?;The Snowmobile comes with a removable connector cabinet that needs to be mounted on one of your data center racks where it can be connected directly to your high-speed network backbone. The connector rack provides multiple 40Gb/s interfaces that can transfer up to 1 Tb/s in aggregate. /snowmobile/faqs/;After my data has been imported to AWS, what happens to the copy on Snowmobile?;When the data import has been processed and verified, AWS performs a software erasure of the Snowmobile that follows the National Institute of Standards and Technology (NIST) guidelines for media sanitization (NIST 800-88). /snowmobile/faqs/;How do I verify that my data has been successfully copied to Snowmobile?;At the time your data is copied into the Snowmobile, a set of logs will be generated with checksums for each file transferred. These logs are available to you for verification. The logs are also used when data is imported from the Snowmobile to AWS to verify that all data has been transferred successfully. /snowmobile/faqs/;Do I need to keep a local copy of my data while a copy is shipped back to AWS on a Snowmobile?;Yes. You should always keep your source copy until AWS has worked with you to verify that the Snowmobile copy has been successfully uploaded to AWS. /snowmobile/faqs/;Can I export data from AWS with Snowmobile?;Snowmobile does not support data export. It is designed to let you quickly, easily, and more securely migrate exabytes of data to AWS. When you need to export data from AWS, you can use AWS Snowball Edge to quickly export up to 100TB per appliance and run multiple export jobs in parallel as necessary. Visit the Snowball Edge FAQs to learn more. /snowmobile/faqs/;How is Snowmobile designed to keep data secure digitally?;Your data is encrypted with keys you provided before it is written to the Snowmobile. All data is encrypted with 256-bit encryption. You can manage your encryption keys with the AWS Key Management Service (KMS). Your keys are never permanently stored on the Snowmobile, and are erased as soon as power is removed from the Snowmobile. /snowmobile/faqs/;How is Snowmobile designed to keep data secure physically?;In addition to digital encryption, the Snowmobile is only operated by AWS personnel and physical access to the data container is controlled via secure access hardware controls. All data storage equipment is separated from the network access ports used to load or remove data. This way, physical access to the data container is not needed to operate the container after it has been set up. 
In addition, Snowmobile is protected by 24/7 video surveillance and alarm monitoring, GPS tracking, and may be escorted by a security vehicle during transit. /snowmobile/faqs/;Can I still choose to encrypt my data before transferring to Snowmobile?;Yes. You can always encrypt your data before transferring it to the Snowmobile. /snowmobile/faqs/;What AWS Regions are supported?;Snowmobile can be made available for use with AWS services in specific AWS regions. To discuss data transport needs specific for your region please follow up with AWS Sales, or see the Regional Service Availability pages for more information. /snowmobile/faqs/;How much does a Snowmobile job cost?;Snowmobile provides a practical solution to exabyte-scale data migration and is significantly faster and cheaper than any network-based solutions, which can take decades and millions of dollars of investment in networking and logistics. Snowmobile jobs cost $0.005/GB/month based on the amount of provisioned Snowmobile storage capacity and the end to end duration of the job, which starts when a Snowmobile departs an AWS data center for delivery to the time when data ingestion into AWS is complete. Please see AWS Snowmobile pricing or contact AWS Sales for an evaluation. /storagegateway/faqs/;What is AWS Storage Gateway?;AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Storage Gateway provides a standard set of storage protocols such as iSCSI, SMB, and NFS, which allow you to use AWS storage without rewriting your existing applications. It provides low-latency performance by caching frequently accessed data on premises, while storing data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data. Storage Gateway also integrates natively with Amazon S3 and Amazon FSx for Windows File Server cloud storage, which makes your data available for in-cloud processing, AWS Identity and Access Management (AWS IAM) for securing access management to services and resources, AWS Key Management Service (AWS KMS) for encrypting data at rest in the cloud, Amazon CloudWatch for monitoring, and AWS CloudTrail for logging account activity. /storagegateway/faqs/;Why should I use AWS Storage Gateway?;Storage Gateway enables you to reduce your on-premises storage footprint and associated costs by leveraging AWS storage services. /storagegateway/faqs/;What use cases does AWS Storage Gateway support?;Storage Gateway supports four key hybrid cloud use cases – (1) move backups and archives to the cloud, (2) reduce on-premises storage with cloud-backed file shares, (3) provide on-premises applications low-latency access to data stored in AWS, and (4) data lake access for pre and post processing workflows. /storagegateway/faqs/;How does AWS Storage Gateway provide on-premises applications access to cloud storage?;Depending on your use case, Storage Gateway provides three types of storage interfaces for your on-premises applications: file, volume, and tape. /storagegateway/faqs/;How do I use the AWS Storage Gateway service?;You can have two touchpoints to use the service: the AWS Management Console and a gateway that is available as a virtual machine (VM) or as a physical hardware appliance. 
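Beyond the console, the Storage Gateway service side can also be driven through its API and SDKs. A minimal boto3 sketch follows, listing the gateways registered in an account and pulling details for one of them; the region and gateway ARN are placeholders (assumptions).

```python
# Sketch: using the Storage Gateway service API (one of the two touchpoints
# described above) from boto3 to list gateways and inspect one of them.
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")  # region is an example

# List all gateways registered in this account and region.
for gw in sgw.list_gateways()["Gateways"]:
    print(gw["GatewayARN"], gw.get("GatewayType"), gw.get("GatewayOperationalState"))

# Fetch details (name, state, network interfaces) for a specific gateway.
details = sgw.describe_gateway_information(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE"  # placeholder
)
print(details.get("GatewayName"), details.get("GatewayState"))
```

The gateway appliance itself (the VM or hardware appliance) is deployed separately; the API calls above only manage the cloud side of the service.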
/storagegateway/faqs/;Where can I deploy a Storage Gateway appliance?;On-premises, you can deploy a virtual machine containing the Storage Gateway software on VMware ESXi, Microsoft Hyper-V, or Linux KVM, or you can deploy Storage Gateway as a hardware appliance. You can also deploy the Storage Gateway VM in VMware Cloud on AWS, or as an AMI in Amazon EC2. /storagegateway/faqs/;What is Amazon S3 File Gateway?;Amazon S3 File Gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols. File Gateway allows your existing file-based applications or devices to use secure and durable cloud storage without needing to be modified. With S3 File Gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares. Your applications read and write files and directories over NFS or SMB, interfacing to the gateway as a file server. In turn, the gateway translates these file operations into object requests on your S3 buckets. Your most recently used data is cached on the gateway for low-latency access, and data transfer between your data center and AWS is fully managed and optimized by the gateway. Once in S3, you can access the objects directly or manage them using S3 features such as S3 Lifecycle Policies and S3 Cross-Region Replication (CRR). You can run S3 File Gateway on-premises or in EC2. /storagegateway/faqs/;What is Amazon FSx File Gateway?;Amazon FSx File Gateway optimizes on-premises access to Windows file shares on Amazon FSx, making it easy for users to access FSx for Windows File Server data with low latency and conserving shared bandwidth. Users benefit from a local cache of frequently used data that they can access, enabling faster performance and reduced data transfer traffic. File system operations, such as reading and writing files, are all performed against the local cache, while Amazon FSx File Gateway synchronizes changed data to FSx for Windows File Server in the background. With these capabilities, you can consolidate all of your on-premises file share data in AWS on FSx for Windows File Server and benefit from protected, resilient, fully managed file systems. /storagegateway/faqs/;What is Tape Gateway?;Tape Gateway is a cloud-based Virtual Tape Library (VTL). It presents your backup application with a VTL interface, consisting of a media changer and tape drives. You can create virtual tapes in your virtual tape library using the AWS Management Console. Your backup application can read data from or write data to virtual tapes by mounting them to virtual tape drives using the virtual media changer. Virtual tapes are discovered by your backup application using its standard media inventory procedure. Virtual tapes are available for immediate access and are backed by Amazon S3. You can also archive tapes. Archived tapes are stored in Amazon S3 Glacier or Amazon S3 Glacier Deep Archive. /storagegateway/faqs/;What is Volume Gateway?;Volume Gateway provides an iSCSI target, which enables you to create block storage volumes and mount them as iSCSI devices from your on-premises or EC2 application servers. The Volume Gateway runs in either a cached or stored mode. /storagegateway/faqs/;What benefits does AWS Storage Gateway provide?;AWS Storage Gateway provides a set of features that enable you to effectively leverage AWS storage within your existing applications and workflows. 
It provides a standard set of protocols such as iSCSI, SMB and NFS, which allow you to use your existing applications without any changes. Through its local cache, the gateway provides low-latency access to recently used data. The gateway optimizes data transfer to AWS storage, such as optimization of transfer through intelligent buffering, upload management to address network variations, and bandwidth management. The gateway provides you an effective mechanism to store data in AWS across the range of storage services most suitable for your use cases. The gateway is easy to deploy and can use your existing virtual infrastructure and hypervisor investments, or can be installed in your data center or remote offices as a hardware appliance. The gateway software running as a VM or on the hardware appliance is stateless, allowing you to easily create and manage new instances of your gateway as your storage needs evolve. Finally, the service integrates natively into AWS management services such as Amazon CloudWatch, AWS CloudTrail, AWS Key Management Service (KMS), and AWS Identity and Access Management (IAM). /storagegateway/faqs/;What AWS Storage Gateway types can I manage through AWS Backup?;You can manage backup and retention policies for cached and stored volume modes of Volume Gateway through AWS Backup. /storagegateway/faqs/;What is the maximum supported size of the local cache per gateway?;The maximum supported size of the local cache for a gateway running on a virtual machine is 64 TiB. /storagegateway/faqs/;What is Amazon S3 File Gateway?;Amazon S3 File Gateway is a configuration of the AWS Storage Gateway service that provides your applications a file interface to seamlessly store files as objects in Amazon S3, and access them using industry standard file protocols. /storagegateway/faqs/;What can I do with Amazon S3 File Gateway?;Use cases for Amazon S3 File Gateway include: (a) migrating on-premises file data to Amazon S3, while maintaining fast local access to recently accessed data, (b) backing up on-premises file data as objects in Amazon S3 (including Microsoft SQL Server and Oracle databases and logs), with the ability to use S3 capabilities such as lifecycle management and cross region replication, and, (c) hybrid cloud workflows using data generated by on-premises applications for processing by AWS services such as machine learning, big data analytics or serverless functions. /storagegateway/faqs/;What are the benefits of using File Gateway to store data in S3?;Amazon S3 File Gateway enables your existing file-based applications, devices, and workflows to use Amazon S3, without modification. Amazon S3 File Gateway securely and durably stores both file contents and metadata as objects, while providing your on-premises applications low-latency access to cached data. /storagegateway/faqs/;Which Amazon S3 storage classes does S3 File Gateway support?;Amazon S3 File Gateway supports Amazon S3 Standard, S3 Intelligent-Tiering, S3 Standard - Infrequent Access (S3 Standard-IA) and S3 One Zone-IA. For details on storage classes, refer to the Amazon S3 documentation. You configure the initial storage class for objects that the gateway creates, and then you can use bucket lifecycle policies to move files from Amazon S3 to Amazon S3 Glacier. If an application attempts to access a file/object stored through Amazon File Gateway that is now in Amazon S3 Glacier, you will receive a generic I/O error. 
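Because the gateway only sets the initial storage class, archiving is handled by ordinary S3 bucket lifecycle rules, as described above. The sketch below adds such a rule with boto3; the bucket name, prefix, and 90-day threshold are illustrative assumptions. Keep in mind the caveat above: files whose backing objects have already transitioned to Glacier return a generic I/O error when accessed through the file share.

```python
# Sketch: a bucket lifecycle rule that transitions objects written through an
# S3 File Gateway to S3 Glacier after 90 days. Bucket name, prefix, and the
# 90-day threshold are placeholders, not recommendations.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-file-gateway-bucket",                   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-file-gateway-data",
                "Filter": {"Prefix": "archive/"},      # placeholder prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```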
/storagegateway/faqs/;What protocols does Amazon S3 File Gateway support?;Amazon S3 File Gateway supports Linux clients connecting to the gateway using Network File System (NFS) versions 3 and 4.1, and supports Windows clients connecting to the gateway using Server Message Block (SMB) versions 2 and 3. /storagegateway/faqs/;How can I create and use a file share?;You can create an NFS or SMB file share using the AWS Management Console or service API and associate the file share with a new or existing Amazon S3 bucket. To access the file share from your applications, you mount it from your application using standard UNIX or Windows commands. For convenience, example command lines for each environment are shown in the management console. /storagegateway/faqs/;What options do I have to configure an NFS file share?;You can configure your NFS file share with administrative controls such as limiting access to specific NFS clients or networks, read-only or read-write, or enabling user permission squashing. /storagegateway/faqs/;What options do I have to configure an SMB file share?;You can configure your SMB file share to be accessed by Active Directory (AD) users only or provide authenticated guest access to users in your organization. You can further limit access to the file share as read-only or read-write, or to specific AD users and groups. /storagegateway/faqs/;Does Amazon S3 File Gateway support access-based enumeration for SMB file shares?;Yes, you can configure access-based enumeration for your SMB file shares to prevent users from seeing folders and files that they would not be able to open based on their access permissions. You can also control whether the file shares on the Amazon S3 File Gateway are browsable by users. /storagegateway/faqs/;Does Amazon S3 File Gateway support integration with on-premises Microsoft Active Directory (AD)?;Yes, Amazon S3 File Gateway integrates with Microsoft Active Directory on-premises as well as with in-cloud Active Directory solutions such as Managed Microsoft AD. /storagegateway/faqs/;Can I export an SMB file share without Active Directory?;Yes. You can export an SMB file share using a guest username and password. You will need to change the default password using the Console or service API before setting up your file share for guest access. /storagegateway/faqs/;Can I export a mix of NFS and SMB file shares on the same gateway?;Yes. /storagegateway/faqs/;Can I export an NFS and SMB file share on the same bucket?;No. Currently, file metadata, such as ownership, stored as S3 object metadata cannot be mapped across different protocols. /storagegateway/faqs/;How does Amazon S3 File Gateway access my S3 bucket?;Amazon S3 File Gateway uses an AWS Identity and Access Management (IAM) role to access your S3 bucket. You can set up an IAM role yourself or have it automatically set up by the AWS Storage Gateway Management Console. For automatic setup, AWS Storage Gateway will create a new IAM role in your account and associate it with an IAM Access Policy to access your S3 bucket. The IAM role and IAM access policy are created in your account and you can fully manage them yourself. /storagegateway/faqs/;How does my application access my file share?;To use the file share, you mount it from your application using standard UNIX or Windows commands. For convenience, example command lines are shown in the management console. 
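File shares can be created through the console or the service API, as noted above. The following boto3 sketch creates an NFS file share on an existing gateway and bucket, using some of the administrative controls mentioned above (allowed client list, squashing, read-only flag); all ARNs, the CIDR range, and the storage class choice are placeholders (assumptions).

```python
# Sketch: creating an NFS file share with the Storage Gateway service API.
# Every ARN and the client CIDR below are placeholders.
import uuid
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")  # example region

share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),                       # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/FileGatewayBucketAccess",  # IAM role the gateway assumes
    LocationARN="arn:aws:s3:::my-file-gateway-bucket",   # bucket (or bucket/prefix) to map
    DefaultStorageClass="S3_STANDARD",                   # initial storage class for new objects
    ClientList=["10.0.0.0/16"],                          # restrict access to these NFS clients
    Squash="RootSquash",
    ReadOnly=False,
)
print(share["FileShareARN"])
```

The resulting share is then mounted from clients with standard NFS mount commands, as shown in the management console.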
/storagegateway/faqs/;How is my file share mapped to my S3 bucket?;The file share can be mapped to the root of the S3 bucket or it can be mapped to an S3 prefix within an S3 bucket. If you specify an S3 prefix when creating a file share you are tying the file share to the S3 prefix. If you do not create an S3 prefix when creating a file share then the file share is tied to the root of the S3 bucket. /storagegateway/faqs/;Can I give my file share a custom name?;Yes, the file share name does not have to be the same as the S3 bucket or S3 prefix names. /storagegateway/faqs/;Can I change my file share name?;Yes, you can change your file share name. /storagegateway/faqs/;What is the relationship between files and objects?;Files are stored as objects in your S3 buckets and you can configure the initial storage class for objects that File Gateway creates. There is a one-to-one relationship between files and objects, and you can configure the initial storage class for objects that Amazon S3 File Gateway creates. /storagegateway/faqs/;What file system operations are supported by Amazon S3 File Gateway?;Your clients can create, read, update, and delete files and directories. Your clients can also change permissions and ownership of files and folders. Files are stored as individual objects in Amazon S3. Directories are managed as folder objects in S3, using the same syntax as the S3 console. Symbolic links and hard links are not supported. Attempting to create a link will result in an error. Common file operations change file metadata, which results in the deletion of the current S3 object and the creation of a new S3 object. /storagegateway/faqs/;What file system metadata can my client access and where is the metadata stored?;Your clients can access POSIX-style metadata including ownership, permissions, and timestamps that are durably stored in S3 in the user metadata of the object associated with the file. When you create a file share on an existing bucket, the stored metadata will be restored and made accessible to your clients. /storagegateway/faqs/;How do I set the Content-Type for files uploaded to S3?;For each file share, you can enable guessing of MIME types for uploaded objects upon creation or enable the feature later. If enabled, File Gateway will use the filename extension to determine the MIME type for the file and set the S3 objects Content-Type accordingly. This is beneficial if you are using File Gateway to manage objects in S3 that you access directly via URL or distribute through Amazon CloudFront. /storagegateway/faqs/;Can I directly access objects stored in S3 by using Amazon S3 File Gateway?;Yes. Once objects are stored in S3, you can access them directly in AWS for in-cloud workloads without requiring Amazon S3 File Gateway. Your objects inherit the properties of the S3 bucket in which they are stored, such as lifecycle management, and cross-region replication. /storagegateway/faqs/;What if my bucket already contains objects?;If your bucket already contains objects when you configure it for use with Amazon S3 File Gateway, object keys will be used to present the objects as files to the NFS and SMB clients. The files are given default file system metadata. /storagegateway/faqs/;How are buckets accessed by the gateway? Are entire bucket or file contents downloaded?;"The gateway does not automatically download full objects or all the data that exists in your bucket; data is only downloaded when it is explicitly accessed by your clients. 
Additionally, to reduce data transfer overhead, File Gateway uses multipart uploads and copy put, so only changed data in your files is uploaded to S3." /storagegateway/faqs/;What metadata can my NFS client access for objects created outside of the gateway?;For objects uploaded to the S3 bucket directly, i.e. not using File Gateway and an NFS share, you can configure default ownership and permissions. /storagegateway/faqs/;What metadata can my SMB client access for objects created outside of the gateway?;For objects uploaded to the S3 bucket directly, i.e. without using Amazon S3 File Gateway and an SMB share, metadata such as ownership and permissions will be inherited from the object’s parent folder. Permissions at the root of the share are fixed and objects created directly under the root folder will inherit these fixed permissions. Refer to the documentation on metadata settings of objects created outside the gateway. /storagegateway/faqs/;Can I use multiple NFS clients with a single Amazon S3 File Gateway?;You can have multiple NFS clients accessing a single File Gateway. However, as with any NFS server, concurrent modification from multiple NFS clients can lead to unpredictable behavior. Application level coordination is required to do this in a safe way. /storagegateway/faqs/;Can I have multiple writers to my S3 bucket?;No. We recommend a single writer to objects in your S3 bucket. If you directly overwrite or update an object previously written by File Gateway, it results in undefined behavior when the object is accessed through the file share. Concurrent modification of the same object (e.g. via the S3 API and the Amazon S3 File Gateway) can lead to unpredictable results and we recommend against this configuration. /storagegateway/faqs/;Can I have two gateways writing independent data to the same bucket?;We do not recommend configuring multiple writers to a single bucket because it can lead to unpredictable results. You could enforce unique object names or prefixes through your application workflow. S3 File Gateway will emit Health Notifications when conflicts occur in such a setup. /storagegateway/faqs/;Can I have multiple gateways reading data from the same bucket?;Yes, you can have multiple readers on a bucket managed through an Amazon S3 File Gateway. You can configure a file share as read-only, and allow multiple gateways to read objects from the same bucket. Additionally, you can refresh the inventory of objects that your gateway knows about using the Storage Gateway Console, the automated periodic cache refresh process, or the RefreshCache API. /storagegateway/faqs/;Can I monitor my file share using Amazon CloudWatch?;Yes, you can monitor usage of your file share using Amazon CloudWatch metrics and get notified on completion of file operations through CloudWatch Events. To learn more, visit Monitoring your File Share. /storagegateway/faqs/;How do I know when my file is uploaded?;When you write files to your file share with Amazon S3 File Gateway, the data is stored locally first and then asynchronously uploaded to your S3 bucket. You can request notifications through AWS CloudWatch Events when the upload of an individual file completes. These notifications can be used to trigger additional workflows, such as invoking an AWS Lambda function or Amazon EC2 Systems Manager Automation, which is dependent upon the data that is now available in S3. To learn more, please refer to the documentation for File Upload Notification. 
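The file upload notifications described above are delivered through Amazon CloudWatch Events (EventBridge), so they can be routed to targets such as a Lambda function. The sketch below is deliberately loose: it matches only on the storagegateway event source, and the rule name and Lambda ARN are placeholders; the exact detail types to filter on are listed in the File Upload Notification documentation, and the Lambda function also needs a resource-based permission allowing EventBridge to invoke it.

```python
# Sketch: routing Storage Gateway notifications to a Lambda function via
# CloudWatch Events / EventBridge. Pattern, rule name, and target ARN are
# assumptions to refine against the File Upload Notification documentation.
import json
import boto3

events = boto3.client("events", region_name="us-east-1")  # example region

events.put_rule(
    Name="file-gateway-upload-notifications",
    EventPattern=json.dumps({"source": ["aws.storagegateway"]}),  # coarse filter (assumption)
    State="ENABLED",
)
events.put_targets(
    Rule="file-gateway-upload-notifications",
    Targets=[{
        "Id": "process-upload",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:on-upload",  # placeholder
    }],
)
```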
/storagegateway/faqs/;How is a file upload notification different from an S3 event notification?;The file upload notification provides a notification for each individual file that is uploaded to Amazon S3 through S3 File Gateway. S3 event notifications provide notifications that include partial file uploads so there is no way to tell from the S3 event notification that the file upload has completed. /storagegateway/faqs/;How do I know when my working file set is uploaded?;When you write files to your file share with Amazon S3 File Gateway, the data is stored locally first and then asynchronously uploaded to your S3 bucket. You can request notifications through Amazon CloudWatch Events when the upload of a working file set completes. These notifications can be used to trigger additional workflows, such as invoking an AWS Lambda function or Amazon EC2 Systems Manager Automation, which is dependent upon the data that is now available in S3. To learn more, please refer to the documentation for Working File Set Upload Notification. /storagegateway/faqs/;Can I update my Amazon S3 File Gateway’s view of a bucket to see objects created from an object-based workload or another File Gateway?;Yes, you can refresh the inventory of objects that your Amazon S3 File Gateway knows about using the Console, the file system driven cache refresh process, or the RefreshCache API. You will receive notifications through AWS CloudWatch Events when the RefreshCache API operation has completed. These notifications can be used to send emails using Amazon Simple Notification Service (SNS), or trigger local processing using the updated contents. To learn more, please refer to the documentation. /storagegateway/faqs/;Can I use the gateway to update data in a bucket that belongs to another AWS account?;Yes, you can use the gateway for cross-account access to buckets. To learn more, please refer to the documentation for Using File Share for Cross-Account access. /storagegateway/faqs/;Can I use the gateway to access data in Requester Pays S3 buckets?;Yes, when creating your file share you can enable access to Requester Pays S3 buckets. As a requester, you will incur the charges associated with accessing data from Requester Pays buckets. /storagegateway/faqs/;How do I create multiple shares per bucket in a gateway?;You can create multiple file shares for a single S3 bucket by specifying an S3 prefix during file share creation process. /storagegateway/faqs/;How many file shares can I create per gateway?;You can create up to 10 shares for an S3 bucket in a single gateway. We do not limit the number of file shares per bucket across multiple gateways but each gateway is limited to 10 shares. However, we recommend having a single writer to the bucket, either an Amazon S3 File Gateway or client accessing S3 directly. /storagegateway/faqs/;Can I change the name of a file share?;Yes, you can change the name of a file share. /storagegateway/faqs/;What is the maximum size of an individual file?;"The maximum size of an individual file is 5 TB, which is the maximum size of an individual object in S3. If you write a file larger than 5 TB, you will get a ""file too large"" error message and only the first 5 TB of the file will be uploaded." /storagegateway/faqs/;My application checks storage size before copying data. What storage size does the gateway return?;The gateway returns a large number (8 EB) as your total capacity. Amazon S3 does not limit total storage. 
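As mentioned above, a gateway only learns about objects written directly to the bucket (or by another gateway) when its cache is refreshed, either from the console, through the periodic refresh process, or via the RefreshCache API. A minimal boto3 sketch of the API path follows; the file share ARN and folder list are placeholders (assumptions).

```python
# Sketch: refreshing a file share's view of its S3 bucket with the RefreshCache API
# so objects created outside the gateway become visible to NFS/SMB clients.
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")  # example region

response = sgw.refresh_cache(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE",  # placeholder
    FolderList=["/"],     # refresh from the root of the share
    Recursive=True,
)
# The notification ID correlates with the completion event delivered through
# CloudWatch Events, as described above.
print(response["NotificationId"])
```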
/storagegateway/faqs/;Can I use Amazon S3 lifecycle, cross-region replication, and S3 event notification with File Gateway?;Yes. Your bucket policies for lifecycle management, cross-region replication, and S3 event notification, apply directly to objects stored in your bucket through AWS Storage Gateway. /storagegateway/faqs/;Can I use Amazon S3 File Gateway with my backup application?;Amazon S3 File Gateway supports SMB versions 2 and 3 as well as NFS versions 3, 4.0, and 4.1. We are continuing to do ongoing testing with common backup apps. Please let us know via AWS Support or through your AWS account team of any specific apps with which you'd like to see compatibility tested. /storagegateway/faqs/;Can I use Amazon S3 File Gateway to write files to EFS?;No. Amazon S3 File Gateway allows you to store files as objects in S3. /storagegateway/faqs/;When should I use Amazon S3 File Gateway vs. the S3 API?;You can use Amazon S3 File Gateway when you want to access objects in S3 as files using standard filesystem operations. Amazon S3 File Gateway additionally provides low-latency local access and efficient data transfer. You can use the S3 API when your application doesn’t require file system operations and can manage data transfer directly. /storagegateway/faqs/;How does Amazon S3 File Gateway manage the local cache? What data gets stored locally?;Local disk storage on the gateway is used to temporarily hold changed data that needs to be transferred to AWS, and to locally cache data for low-latency read access. File Gateway automatically manages the cache maintaining the most recently accessed data based on client read and write operations. Data is evicted from the cache only when space is needed to store more recently used data. /storagegateway/faqs/;What guidance should I use to provision the size of the gateway’s cache disk? What happens if I provision a smaller cache disk?;You should provision your cache based on: 1/ The size of your working dataset to which you need low-latency access, so you can reduce read latencies by decreasing the frequency with which data is requested from S3, and 2/ The size of files written to the gateway by your applications. /storagegateway/faqs/;When does data in the cache get evicted?;Data written to the cache from your applications or through retrieval from Amazon S3 is evicted from the cache only when space is needed to store more recently accessed data. /storagegateway/faqs/;Does Amazon S3 File Gateway perform data reduction (deduplication or compression)?;No. Files are mapped to objects one-to-one in your bucket without modification, enabling you to access your data directly in S3 without needing to use the gateway or deploy additional software to rehydrate your data. /storagegateway/faqs/;Can I use Amazon S3 File Gateway with Amazon S3 Transfer Acceleration?;File Gateway will not use the accelerated endpoints even if your bucket is configured for S3 Transfer Acceleration. /storagegateway/faqs/;What sort of encryption does Amazon S3 File Gateway use to protect my data?;All data transferred between the gateway and AWS storage is encrypted using SSL. By default, all data stored in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3). For each file share you can optionally configure to have your objects encrypted with AWS KMS-Managed Keys using SSE-KMS. To learn more, please see “Encrypting Your Data Using AWS Key Management System,” in the Storage Gateway User Guide, which includes critical details about usage of the feature. 
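Since each file share can optionally encrypt its objects with SSE-KMS, an existing share can be switched to a customer-managed key through the service API. A brief boto3 sketch, with the file share ARN and KMS key ARN as placeholders (assumptions); see the "Encrypting Your Data Using AWS Key Management System" section of the user guide for the usage caveats it mentions.

```python
# Sketch: enabling SSE-KMS on an existing NFS file share. The ARNs are placeholders.
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")  # example region

sgw.update_nfs_file_share(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE",
    KMSEncrypted=True,
    KMSKey="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```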
/storagegateway/faqs/;What is Amazon FSx File Gateway?;Amazon FSx File Gateway optimizes on-premises access to Windows file shares on Amazon FSx, making it easy for users to access FSx for Windows File Server data with low latency and conserving shared bandwidth. Users benefit from a local cache of frequently used data that they can access, enabling faster performance and reduced data transfer traffic. File system operations, such as reading and writing files, are all performed against the local cache, while Amazon FSx File Gateway synchronizes changed data to FSx for Windows File Server in the background. With these capabilities, you can consolidate all of your on-premises file share data in AWS on FSx for Windows File Server and benefit from protected, resilient, fully managed file systems. /storagegateway/faqs/;Why should I use Amazon FSx File Gateway?;Many on-premises desktop applications are latency-sensitive, which may cause delays to your end users and slow performance when they are directly accessing files in AWS from remote locations. Additionally, allowing large numbers of users to directly access data in the cloud can cause congestion on your shared bandwidth resources such as AWS Direct Connect links. Amazon FSx File Gateway allows you to use Amazon FSx for Windows File Server for these workloads, and help replace your on-premises storage with fully managed, scalable, and highly reliable file storage in AWS without impacting your applications or network. /storagegateway/faqs/;How does Amazon FSx File Gateway solve these problems for on-premises applications?;Amazon FSx File Gateway provides an SMB file protocol server for clients to connect to, and an on-premises cache of the frequently used data that they can access with the same low latency as they would experience inside AWS. File system operations, such as reading and writing files, are all performed against the local cache, while Amazon FSx File Gateway synchronizes changed data to Amazon FSx for Windows File Server in the background. Amazon FSx File Gateway also helps minimize the amount of data transfer, while optimizing the usage of network bandwidth to AWS. /storagegateway/faqs/;How do I use Amazon FSx File Gateway?;To use Amazon FSx File Gateway, you need to have at least one running Amazon FSx file system, and ensure that you have on-premises access to Amazon FSx for Windows File Server either through a VPN or through an AWS Direct Connect connection. To get started with FSx for Windows File Server, view the documentation instructions here. You then begin either by downloading and deploying an Amazon FSx File Gateway VMware virtual appliance, or an AWS Storage Gateway hardware appliance into your on-premises environment. Once your Amazon FSx File Gateway is installed and you can access FSx for Windows File Server, you can use the AWS Management Console to attach an FSx for Windows File Server file system. The AWS Management Console will then walk you through all the steps needed to make file shares accessible on premises. /storagegateway/faqs/;What regions is Amazon FSx File Gateway available in?;Amazon FSx File Gateway can be used to access Windows file systems in all AWS regions where FSx for Windows File Server is offered. /storagegateway/faqs/;How much does Amazon FSx File Gateway cost?;You are billed hourly for Amazon FSx File Gateway. For pricing information, please visit the AWS Storage Gateway pricing page. 
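Attaching an FSx for Windows File Server file system to an activated Amazon FSx File Gateway can also be scripted. A rough boto3 sketch, assuming the gateway is already activated and joined to your domain; the ARNs and the Active Directory service account below are placeholders:

import uuid
import boto3

sgw = boto3.client("storagegateway")

# Associate an existing FSx for Windows File Server file system with the gateway.
# ARNs and the service account are placeholders (assumptions).
assoc = sgw.associate_file_system(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    LocationARN="arn:aws:fsx:us-east-1:111122223333:file-system/fs-EXAMPLE",
    UserName="gateway-svc@corp.example.com",   # AD service account with access to the file system
    Password="EXAMPLE-PASSWORD",
)
print(assoc["FileSystemAssociationARN"])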
/storagegateway/faqs/;What protocols does Amazon FSx File Gateway support?;Amazon FSx File Gateway supports versions 2.x and 3.x of the Server Message Block (SMB) protocol. SMB is supported by Microsoft Windows, MacOS, and the Linux OS. /storagegateway/faqs/;What is the relationship between files I see in Amazon FSx File Gateway and files I see in Amazon FSx for Windows File Server?;Amazon FSx File Gateway maps local file shares and their contents to file shares stored remotely in Amazon FSx for Windows File Server. There is a 1:1 correspondence between the remote and locally visible files and their shares. /storagegateway/faqs/;Does Amazon FSx File Gateway allow me to access the same file shares in AWS?;"Yes. You may access your file shares from both Amazon FSx File Gateway as well as directly from Amazon FSx in AWS; however, you should ensure that files can only be written from a single location at a time. In this release, Amazon FSx File Gateway will not prevent writes from multiple locations to overlap in a way that creates conflicts." /storagegateway/faqs/;How does Amazon FSx File Gateway allow me to manage my Amazon FSx for Windows File Server?;You can manage Amazon FSx for Windows File Server via a remote management interface using all of the tools provided by FSx for Windows File Server. /storagegateway/faqs/;Can Amazon FSx File Gateway be connected to more than one Amazon FSx for Windows file system?;Yes. You are allowed to attach a gateway to shares on up to 5 file systems as long as they are all members of the same Active Directory domain. Amazon FSx File Gateway will only join a single Active Directory Domain. /storagegateway/faqs/;What deployment options are supported?;You can deploy a virtual machine containing the Amazon FSx File Gateway software on VMware ESXi, Microsoft Hyper-V, or Linux KVM, or you can deploy Storage Gateway as a hardware appliance. /storagegateway/faqs/;How do I use my Active Directory to provide credentials?;Amazon FSx File Gateway becomes a member of the Active Directory domain whether the AD infrastructure is hosted in AWS Directory Service, or if it is managed on-premises. Once Amazon FSx File Gateway is a member of the domain, it has access to all users and policies that are set in that domain for the purposes of enforcing security. Amazon FSx File Gateway then will behave identically to any Windows Server and enforce all applicable file access policies based on what is configured in Active Directory. /storagegateway/faqs/;Is Amazon FSx File Gateway compatible with my existing Windows Access Controls and Active Directory credentials?;Amazon FSx File Gateway uses native Windows Access Controls and is compatible with any existing static access lists that work with Microsoft Windows. The maximum size of an ACL is 64KB or approximately 1820 Access Control Entries. This is identical to Windows Server hosts. Access controls are set and stored on FSx Windows File Server, so you only need to create them once and they will be reflected in all attached File Gateways. /storagegateway/faqs/;Is data encrypted in transit?;Yes. Amazon FSx File Gateway supports SMB encryption up to the latest SMB v3.1.1 specification, including AES 128 CCM and AES 128 GCM. Compatible clients will connect using encryption automatically. Additionally, Amazon FSx File Gateway uses SMB encryption when it communicates with FSx for Windows File Server in AWS. 
You must either configure a VPN or a Direct Connect link to AWS, and set appropriate policies to allow SMB traffic and management traffic to pass through to AWS. /storagegateway/faqs/;How does Amazon FSx File Gateway provide high availability?;Just like Amazon S3 File Gateway, Amazon FSx File Gateway achieves high availability on VMware by running a series of continuous health checks against the operation of the gateway that connect to the VMware monitoring service. During a hardware, software, or network failure, VMware will trigger a gateway restart on a new host or on its existing host if the host is still operational. At a maximum, users and applications will experience up to 60 seconds of downtime during a restart. After a restart, connections to the gateway are automatically re-established, never needing manual intervention. On re-initialization the gateway will send metrics back to the cloud to give customers a full view of the availability event. /storagegateway/faqs/;What types of failures are covered by Amazon FSx File Gateway with high availability?;Amazon FSx File Gateway, with VMware HA enabled and application monitoring configured, will detect and recover from hardware failures, hypervisor failures, network failures, as well as software issues that lead to connection timeouts or file-share unavailability. /storagegateway/faqs/;How many sessions and file shares does Amazon FSx File Gateway support?;Amazon FSx File Gateway supports up to 50 shares and 500 active client sessions connected to Amazon FSx File Gateway instances in a single instance configuration. /storagegateway/faqs/;How do I get started with AWS Snowball to migrate my tape data?;To get started, in the AWS Snow Family console, order a Snowball Edge Storage Optimized device with Tape Gateway. When you receive the device from AWS, unlock it, and connect to your local network. Then start Tape Gateway, which looks like a physical tape library. Connect to AWS and copy data from physical tapes to virtual tapes on Tape Gateway using your existing backup application. After you complete your data copy, ship the Snowball Edge device back to AWS. Your data will be stored in either S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. You can view your virtual tapes stored on AWS through the AWS Storage Gateway console and access data on them through a Tape Gateway that runs on premises as a virtual machine or hardware appliance or on an Amazon Elastic Compute Cloud (Amazon EC2) instance on AWS. /storagegateway/faqs/;How much storage is available on a Snowball Edge Storage Optimized device that I can use with Tape Gateway?;The Snowball Edge Storage Optimized device provides 80 terabytes of usable block storage or object storage and can migrate that amount of tape data to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. /storagegateway/faqs/;When do I use Tape Gateway with a Snowball Edge Storage Optimized device and when do I use Tape Gateway with a virtual or a hardware appliance?;You use a Snowball Edge Storage Optimized device with Tape Gateway in constrained network bandwidth environments to migrate data stored in your tape archives to AWS. After you complete your data copy to the device, you send it back to AWS. With Tape Gateway on Snowball, your data is migrated to AWS offline. You use Tape Gateway on a virtual or a hardware appliance when you want to copy new backups and archives to AWS and don’t have network constraints. 
With Tape Gateway on a virtual or hardware appliance, your data is transferred to AWS using the network and you keep the virtual or hardware appliance permanently in your data center. /storagegateway/faqs/;Can I use Snowball with Tape Gateway as an on-premises virtual tape library (VTL) instead of using it for offline data migration?;No, a Snowball Edge Storage Optimized device with Tape Gateway is not designed and built for meeting your on-premises VTL needs—only for meeting your offline data migration needs. After your backup application exports virtual tapes, your virtual tapes on Snowball with Tape Gateway can’t be accessed until they are imported into AWS. For on-premises VTL needs, use a Tape Gateway that runs on a virtual machine, on a hardware appliance, or on an Amazon EC2 instance. /storagegateway/faqs/;What are the benefits of storing virtual tapes in AWS compared to warehousing tapes offsite?;You get 11 9s of data durability, fixity checks by AWS on a regular basis, data encryption, right data when you restore, and cost savings, when storing virtual tapes in AWS using Tape Gateway with S3 Glacier Deep Archive compared to warehousing physical tapes offsite. First, all virtual tapes stored in S3 Glacier Deep Archive are replicated and stored across at least three geographically-dispersed Availability Zones, protected by 11 9s of durability. Second, AWS performs fixity checks on a regular basis to confirm your data can be read and no errors have been introduced. Third, all tapes stored in S3 Glacier Deep Archive are protected by S3 Server Side Encryption using default keys or your KMS keys. In addition, you also avoid physical security risk associated with tape portability. Fourth, compared to the experience of warehousing tapes offsite where you may receive an incorrect or broken tape during restore, with Tape Gateway, you always get correct data. Finally, you can save in monthly storage costs when storing your data in S3 Glacier Deep Archive compared to warehousing tapes offsite. /storagegateway/faqs/;What Amazon S3 storage classes does Tape Gateway support?;Tape Gateway supports S3 Standard, S3 Glacier, and S3 Glacier Deep Archive storage classes. Data on your virtual tapes is stored in a virtual tape library in Amazon S3 when the backup application is writing data to tapes. After you eject tapes from the backup application, your tapes are archived to S3 Glacier or S3 Glacier Deep Archive. /storagegateway/faqs/;How much data can I store on a virtual tape?;The minimum size and maximum size of a virtual tape you can create on a Tape Gateway is 100 GiB and 15 TiB, respectively. Please note, you only pay for the amount of data stored on each tape, and not for the size of the tape. /storagegateway/faqs/;How many tapes can the virtual tape library (VTL) hold?;"A single Tape Gateway can have up to 1,500 virtual tapes in the VTL with a maximum aggregate capacity of 1 PB; however there is no limit to the amount of data or number of virtual tapes you can archive. You can also deploy additional Tape Gateways to scale storage for virtual tapes that are not archived. For more information, please see our documentation on Storage Gateway limits." /storagegateway/faqs/;How much data can I store in tape archives?;There is no limit to the amount or size of virtual tapes that you can archive. /storagegateway/faqs/;Which S3 storage classes can I retrieve my archived virtual tape to?;You can retrieve a virtual tape archived in S3 Glacier or S3 Glacier Deep Archive to S3. 
A tape archived in S3 Glacier is retrieved to S3 using standard retrieval method typically within 3-5 hours. A tape archived in S3 Glacier Deep Archive is retrieved to S3 using standard retrieval method typically within 12 hours. /storagegateway/faqs/;How do I access my data on virtual tapes?;The virtual tape containing your data must be stored in a virtual tape library before it can be accessed. Access to virtual tapes in your virtual tape library is instantaneous. If the virtual tape containing your data is archived, you can retrieve the virtual tape using the AWS Management Console or API. First select the virtual tape, then choose the virtual tape library into which you want the virtual tape to be loaded. You can retrieve a tape archived in S3 Glacier and S3 Glacier Deep Archive to S3, typically within 3-5 hours and 12 hours, respectively. Once the virtual tape is available in the virtual tape library, you can use your backup application to make use of the virtual tape to restore data. /storagegateway/faqs/;Will I be able to access the virtual tapes in my virtual tape library using Amazon S3 or Amazon S3 Glacier APIs?;No. You cannot access virtual tape data using Amazon S3 or Amazon S3 Glacier APIs. However, you can use the Tape Gateway APIs to manage your virtual tape library and your virtual tape shelf. /storagegateway/faqs/;How do I use Tape Gateway with S3 Glacier Deep Archive storage class?;When creating new tapes through the Storage Gateway console or API, you can set the archival storage target to S3 Glacier Deep Archive. When your backup software ejects the tapes, they will be archived to S3 Glacier Deep Archive. You can retrieve a virtual tape archived in S3 Glacier Deep Archive to S3 using standard retrieval method typically within 12 hours. /storagegateway/faqs/;Can I move my existing virtual tapes in S3 Glacier to S3 Glacier Deep Archive?;Yes. Tape Gateway supports moving your tapes in S3 Glacier to S3 Glacier Deep Archive. You can assign the tape placed in Glacier Pool to Deep Archive Pool using AWS Storage Gateway Console or API. Tape Gateway will then move the virtual tape to Deep Archive Pool associated with the S3 Glacier Deep Archive storage class. You will incur a tape move charge for moving a tape from S3 Glacier to S3 Glacier Deep Archive and if applicable, an early deletion fee for S3 Glacier, if you move a tape from S3 Glacier to S3 Glacier Deep Archive prior to 90 days. /storagegateway/faqs/;Can I move a tape in S3 Glacier Deep Archive to S3 Glacier?;No, you cannot move a tape from S3 Glacier Deep Archive to S3 Glacier. You can retrieve a tape from S3 Glacier Deep Archive to S3 or delete a tape from S3 Glacier Deep Archive. /storagegateway/faqs/;What backup applications can I use with Tape Gateway?;The VTL interface is compatible with backup and archival applications that use the industry-standard iSCSI-based tape library interface. For a full list of the supported backup applications see the Storage Gateway overview page. /storagegateway/faqs/;What sort of encryption does Tape Gateway use to protect my data?;All data transferred between the gateway and AWS storage is encrypted using SSL. By default, all data stored by Tape Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3). /storagegateway/faqs/;How much volume data can I manage per gateway? What is the maximum size of a volume?;Each Volume Gateway can support up to 32 volumes. 
In cached mode, each volume can be up to 32 TB for a maximum of 1 PB of data per gateway (32 volumes, each 32 TB in size). In stored mode, each volume can be up to 16 TB for a maximum of 512 TB of data per gateway (32 volumes, each 16 TB in size). For more information, please refer to our documentation on Storage Gateway limits. /storagegateway/faqs/;When I look in Amazon S3 why can’t I see my volume data?;Your volumes are stored in an Amazon S3 bucket maintained by the AWS Storage Gateway service. Your volumes are accessible for I/O operations through AWS Storage Gateway. You cannot directly access them using Amazon S3 API actions. You can take point-in-time snapshots of gateway volumes that are made available in the form of Amazon EBS snapshots, which can be turned into either Storage Gateway Volumes or EBS Volumes. Use the File Gateway to work with your data natively in S3. /storagegateway/faqs/;What sort of encryption does Volume Gateway use to protect my data?;All data transferred between the gateway and AWS storage is encrypted using SSL. By default, all data stored by Volume Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3). /storagegateway/faqs/;Can I create an EBS Snapshot from a KMS-encrypted volume?;Yes. You can create an EBS snapshot from an AWS KMS-encrypted volume using the API. The EBS snapshot will be encrypted using the same key as the one used for volume encryption. /storagegateway/faqs/;Can I create a volume from a KMS-encrypted EBS snapshot?;Yes. You can create an encrypted volume from a KMS-encrypted EBS snapshot using the API. The encrypted volume can use the same key that was used to encrypt the EBS snapshot, or you can specify a different encryption key for encrypting the volume. /storagegateway/faqs/;What data will my snapshot contain? How do I know when to take a snapshot to ensure my data is backed up?;Snapshots represent a point-in-time copy of the volume at the time the snapshot is requested. They contain all of the information needed to restore your data (from the time the snapshot was taken) to a new volume. Data written to the volume by your application prior to taking the snapshot, but not yet uploaded to AWS, will be included in the snapshot. /storagegateway/faqs/;How do I restore a snapshot to a gateway?;Each snapshot is given a unique identifier that you can view using the AWS Management Console. You can create AWS Storage Gateway or Amazon EBS volumes based on any of your existing snapshots by specifying this unique identifier. /storagegateway/faqs/;Do the AWS Storage Gateway’s volumes need to be un-mounted in order to take a snapshot? Does the snapshot need to complete before the volume can be used again?;No, taking snapshots does not require you to un-mount your volumes, nor does it impact your application’s performance. However, snapshots only capture data that has been written to your AWS Storage Gateway volume, which may exclude any data that has been locally buffered by your application or OS. /storagegateway/faqs/;Can I schedule snapshots of my AWS Storage Gateway volumes?;Yes, you can create a snapshot schedule for each of your volumes. You can modify both the time the snapshot occurs each day, as well as the frequency (every 1, 2, 4, 8, 12, or 24 hours). /storagegateway/faqs/;How long does it take to complete a snapshot?;The time it takes to complete a snapshot is largely dependent upon the size of your volume and the speed of your Internet connection to AWS. 
The AWS Storage Gateway compresses all data prior to upload, reducing the time to take a snapshot. /storagegateway/faqs/;Will I be able to access my snapshot data using Amazon S3’s APIs?;No, snapshots are only accessible from the AWS Storage Gateway and Amazon EBS and cannot be directly accessed using Amazon S3 APIs. /storagegateway/faqs/;What are the snapshot limits per gateway?;There are no limits to the number of snapshots or the amount of snapshot data a single gateway can produce. /storagegateway/faqs/;How do I protect volumes on Volume Gateway using AWS Backup?;You can use AWS Backup to either take a one-time backup or define a backup schedule for Volume Gateway volumes. The volume backups are stored in Amazon S3 as Amazon EBS snapshots and visible in the AWS Backup console or Amazon EBS console. The volume backups created by AWS Backup can manually or automatically be deleted from the AWS Backup console. /storagegateway/faqs/;How do I use AWS Backup to manage backup and retention of my Volume Gateway volumes?;You can start from either the Storage Gateway console or the AWS Backup console to manage your backups. If you start from the Storage Gateway console, you have the ability to navigate to the AWS Backup console to complete your backup plan configuration or initiate an on-demand backup. Alternatively, you can start from the AWS Backup console to configure your backup plan or initiate an on-demand backup of Volume Gateway volumes. /storagegateway/faqs/;Does anything change with how I have been using Volume Gateway volumes today?;No. All existing Volume Gateway snapshot functionality and your existing Amazon EBS Snapshots remain available and unchanged. You can continue to use the Storage Gateway console to create volumes from your EBS Snapshots and use the Amazon EBS console to view or delete your snapshots. /storagegateway/faqs/;If I use AWS Backup, can I also continue to use Volume Gateway snapshot schedules and existing snapshots?;Yes. You can continue to use Volume Gateway’s existing snapshot capabilities to create Amazon EBS snapshots and use your previously created snapshots for restore purposes. AWS Backup’s backup schedule operates independently from the Volume Gateway scheduled snapshots, and provides you an additional way to centrally manage all your backup and retention policies. /storagegateway/faqs/;If I have a KMS-encrypted volume on Volume Gateway, will AWS Backup be able to back up that volume?;Yes. AWS Backup will back up KMS-encrypted volumes on Volume Gateway with the same key as the one used for volume encryption. /storagegateway/faqs/;Can I use AWS Backup to create a backup of my Volume Gateway volume in a different region (e.g. cross region)?;AWS Backup supports backup of Volume Gateway volumes within the same region in which AWS Backup operates. /storagegateway/faqs/;What is the Storage Gateway Hardware Appliance?;AWS Storage Gateway is available as a hardware appliance, which has Storage Gateway software pre-installed on a validated server configuration. You manage the appliance from the AWS Console or API. /storagegateway/faqs/;Why might I need a hardware appliance?;The hardware appliance further simplifies procurement, deployment, and management of AWS Storage Gateway on-premises for IT environments such as remote offices and departments that lack existing virtual server infrastructure, adequate disk and memory resources, or staff with hypervisor management skills. 
It avoids having to procure additional infrastructure necessary for a virtual environment in order to operate the local Storage Gateway VM appliance. /storagegateway/faqs/;What are the specifications of the hardware appliance?;The hardware appliance is based on validated server configurations. Please refer to the Storage Gateway Hardware Appliance product page for specifications. /storagegateway/faqs/;Where is the hardware appliance available? With which AWS Regions does it work?;The hardware appliance is available for shipping to all international destinations allowed for exporting by the US government. It is supported in 16 AWS Regions including US East (Northern Virginia, Ohio), US West (Northern California, Oregon), Canada (Central), South America (São Paulo), Europe (Ireland, Frankfurt, London, Paris, Stockholm), and Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo). /storagegateway/faqs/;Where do I buy the hardware appliance?;The AWS Storage Gateway Hardware Appliance is available exclusively through resellers. Please contact your preferred reseller for purchasing information and to request a quote. Customers in the United States and Canada can also purchase the appliance directly from CDW. /storagegateway/faqs/;Who owns the hardware appliance?;After purchase, you own the hardware appliance. /storagegateway/faqs/;How do I use the hardware appliance?;Once you receive the hardware appliance, you configure your IP address through the local hardware console, and use this IP address in the AWS Storage Gateway console to activate your appliance. This associates your hardware appliance with your AWS account. Once the hardware appliance is activated, you select your desired gateway type from the console, either file, volume (cached), or tape. The selected type of gateway is then enabled on the appliance. Once activated, you manage and use your new Storage Gateway Hardware Appliance with the AWS Console, CLI, or SDK, similar to how you would with the virtual appliance today. For more information, please see the hardware appliance documentation. /storagegateway/faqs/;Can I run multiple gateways on a single hardware appliance?;No. Currently, the hardware appliance supports running only one gateway at a time. /storagegateway/faqs/;Can I change the type of gateway once it is installed on a hardware appliance?;Yes. To change the gateway type after it is installed on a hardware appliance, you choose Remove Gateway from the Storage Gateway console, which deletes the gateway and all associated resources. At that point, you are free to launch a new gateway on the hardware appliance. /storagegateway/faqs/;How can I purchase and use additional storage on the Storage Gateway Hardware Appliance?;If you ordered the 5 TB hardware appliance model, you can increase the usable local cache to 12 TB by purchasing a 5-pack SSD upgrade kit. You can purchase the SSD upgrade kit by following the same process for ordering the hardware appliance. To expand your storage, simply insert the SSDs into the pre-configured appliance. The SSDs are hot pluggable, and the appliance will automatically recognize the extra storage upon adding SSDs to the appliance. View the documentation for instructions. /storagegateway/faqs/;Can I add more storage to a Storage Gateway Hardware Appliance after it has been activated?;If you have already activated the appliance and associated it with your AWS account, you will need to factory reset it before adding more storage. 
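Activation and gateway-type selection can likewise be done through the API instead of the console. A minimal boto3 sketch, assuming you have already retrieved an activation key from the appliance's local console (all values below are placeholders):

import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# The activation key comes from the appliance's local hardware console (placeholder below).
gateway = sgw.activate_gateway(
    ActivationKey="ABCDE-12345-FGHIJ-67890-KLMNO",  # placeholder
    GatewayName="branch-office-appliance",
    GatewayTimezone="GMT-5:00",
    GatewayRegion="us-east-1",
    GatewayType="FILE_S3",   # or "CACHED" for Volume Gateway, "VTL" for Tape Gateway
)
print(gateway["GatewayARN"])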
/storagegateway/faqs/;Can I add any SSD or hard drive to increase storage capacity for my Storage Gateway Hardware Appliance?;No. Only add the SSDs that are available from the manufacturer of the appliance. These SSDs are qualified by AWS for use in the Storage Gateway Hardware Appliance. /storagegateway/faqs/;Does the Storage Gateway Hardware Appliance support RAID?;Yes. The hardware appliance uses software-based ZFS RAID and provides protection against storage drive failure. The base appliance offering 5 TB of usable storage tolerates failure of 1 SSD and the 12 TB usable storage configuration tolerates failure of 2 SSDs. /storagegateway/faqs/;How does Storage Gateway provide high availability?;Storage Gateway achieves high availability by running a series of continuous health-checks against the operation of the gateway that connect to the VMware monitoring service. During a hardware, software, or network failure, VMware will trigger a gateway restart on a new host or on its existing host if the host is still operational. At a maximum, users and applications will experience up to 60 seconds of downtime during a restart. After a restart, connections to the gateway are automatically re-established, never needing manual intervention. On re-initialization the gateway will send metrics back to the cloud to give customers a full view of the availability event. /storagegateway/faqs/;What environments are enabled for Storage Gateway high availability?;Storage Gateway high availability can currently be enabled in clustered VMware vSphere environments that have VMware HA enabled and have shared volume storage available. /storagegateway/faqs/;What does Storage Gateway with high availability cost?;There is no additional cost for running Storage Gateway with the high availability integration enabled. /storagegateway/faqs/;What types of failures are covered by Storage Gateway with high availability?;Storage Gateway with VMware HA enabled and application monitoring configured will detect and recover from hardware failures, hypervisor failures, network failures, as well as software issues that lead to connection timeouts or file-share, volume, or virtual tape library unavailability. /storagegateway/faqs/;Will NFS and SMB sessions be maintained during a gateway restart?;Yes. /storagegateway/faqs/;Will gateway reads or writes fail during a gateway restart?;NFS clients connecting to File Gateways may hang for up to 60 seconds on a read or write operation while the gateway restarts and then will retry, given customers use the recommended mount settings. SMB clients may reject a file read or write during a restart depending on client settings. All iSCSI reads and writes for Volume Gateway and Tape Gateway will hang during a gateway restart and then automatically retry. /storagegateway/faqs/;Will Storage Gateway HA still have the ability to restart if its connection to AWS is broken?;Yes, gateways will be reinitialized using the same underlying shared storage, preserving local cache and upload queues. /storagegateway/faqs/;Will I lose data during a gateway restart?;No, gateways will be reinitialized using the same underlying shared storage, preserving local cache and upload queues. /storagegateway/faqs/;Do I need to make any changes to my VMware environment to take advantage of the HA feature?;If the gateway is deployed to VMware with VMware HA enabled you will be able to configure the restart sensitivity of the Storage Gateway VM in the VMware vSphere control center. 
The Storage Gateway VM heartbeat will be available giving you the ability to automatically restart the gateway on a specific timeout. /storagegateway/faqs/;What does Storage Gateway HA give me that I don't already have if I operate VMware HA?;VMware HA monitors underlying infrastructure, such as storage and networking. Storage Gateway provides a range of health checks such as file system availability, SMB endpoint availability, and NFS endpoint availability that monitor all of the critical operations of the gateway, ensuring the whole service and not just the underlying infrastructure is continuously available to your users and applications. /storagegateway/faqs/;Will this be available for VMware Cloud on AWS?;Yes. Storage Gateway High Availability can be used on VMware Cloud with no additional requirements. VMware Cloud on AWS has VMware HA enabled by default and shared volumes are available. /storagegateway/faqs/;How will I know if a gateway is capable of high availability and operating in HA-mode?;When setting up a new gateway for VMware, you will be given the option of testing HA. You may also test whether a deployed gateway is HA-capable by choosing the “Test VMware HA” action in the console. /storagegateway/faqs/;What operational visibility will I have during a gateway restart?;The AWS Storage Gateway console will show availability events in log tables and interruptions in performance graphs during a gateway restart. /storagegateway/faqs/;Will I see an availability event in CloudWatch when a gateway restart occurs?;Yes, if you have configured the integration with CloudWatch, availability events triggered from the gateway will be available through CloudWatch. /storagegateway/faqs/;How will I know when a gateway returns to operation?;If you have configured the integration with CloudWatch, a CloudWatch event will be triggered on re-initialization. Additionally, performance graphs will show the gateway’s operational metrics including number of active sessions. /storagegateway/faqs/;Will I be able to set a service timeout that triggers a gateway restart?;Yes, administrators will be able to set a timeout in the vSphere console that will restart the service if the gateway is unreachable for the specified number of seconds. /storagegateway/faqs/;What encryption does AWS Storage Gateway use to protect my data?;All data transferred between any type of gateway appliance and AWS storage is encrypted using SSL. By default, all data stored by AWS Storage Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3). Also, you can optionally configure different gateway types to encrypt stored data with AWS Key Management Service (KMS) via the Storage Gateway API. See below for specifics on KMS support by File Gateway, Tape Gateway, and Volume Gateway. /storagegateway/faqs/;Is AWS Storage Gateway HIPAA eligible?;Yes. AWS Storage Gateway is HIPAA eligible. If you have an executed Business Associate Agreement (BAA) with AWS, you can use Storage Gateway to store, back up, and archive protected health information (PHI) on scalable, cost-effective, and secure AWS storage services, including Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, Amazon FSx for Windows File Server, and Amazon EBS, which are also HIPAA eligible. /storagegateway/faqs/;Is AWS Storage Gateway PCI compliant?;Yes, AWS Storage Gateway is compliant with the Payment Card Industry Data Security Standard (PCI DSS) based on recent assessments. 
Existing customers can download the Attestation of Compliance (AOC) and PCI Responsibility Summary reports in the AWS Management Console with AWS Artifact. Prospective customers can request the reports by working with the AWS sales team. /storagegateway/faqs/;Is AWS Storage Gateway FedRAMP compliant?;Yes, AWS Storage Gateway is FedRAMP compliant with a High authorization level in the AWS GovCloud (US) Regions, and a Moderate authorization level in the AWS US Commercial Regions. More information can be found on the AWS FedRAMP compliance page. /storagegateway/faqs/;Does AWS Storage Gateway support FIPS 140-2 compliant endpoints?;The S3 File Gateway, Amazon FSx File Gateway, Volume Gateway, and Tape Gateway support FIPS 140-2 compliant endpoints. /storagegateway/faqs/;Which Regions support AWS Storage Gateway FIPS 140-2 compliant endpoints?;AWS Storage Gateway supports FIPS 140-2 compliant endpoints in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), GovCloud (US-West), and GovCloud (US-East). /storagegateway/faqs/;What are the FIPS endpoints for AWS Storage Gateway?;For a list of the FIPS endpoints available for AWS Storage Gateway, refer to the AWS Storage Gateway endpoints reference guide or the AWS GovCloud (US) user guide. /storagegateway/faqs/;Is AWS Storage Gateway Hardware Appliance FIPS 140-2 compliant?;No, the AWS Storage Gateway Hardware Appliance is not FIPS 140-2 compliant. /storagegateway/faqs/;Does File Gateway provide logging to monitor client file access operations?;Yes, File Gateway audit logs can be used to monitor client operations for folders and files within SMB file shares. /storagegateway/faqs/;Can I monitor client activity for individual file shares?;You can configure File Gateway audit logs to monitor user operations for folders and files at the share level for each SMB share. /storagegateway/faqs/;What types of file shares are supported by File Gateway audit logs?;File Gateway audit logs support SMB shares. /storagegateway/faqs/;What file operations will I see in File Gateway audit logs?;You will see details about the following operations logged for files and directories: open, delete, read, write, rename, change of permissions, and file operation success. User information for each operation, including timestamp, Active Directory domain, user name, and client IP address, is also logged. /storagegateway/faqs/;How do I access File Gateway audit logs?;You can access the File Gateway audit logs in Amazon CloudWatch. Audit logs can also be sent from CloudWatch to the Amazon S3 bucket of your choice. Audit logs can be viewed from Amazon S3 using Amazon Athena and can also be exported to third-party security information and event management (SIEM) applications for analysis within those tools. /storagegateway/faqs/;Does Tape Gateway support Write Once Read Many (WORM) capability?;Yes, when creating new virtual tapes manually or using automatic tape creation configuration on Tape Gateway, you can select the WORM tape type. Data on WORM virtual tapes cannot be erased intentionally or accidentally from the backup application. In addition, Tape Gateway’s Tape Retention Lock capability prevents archived virtual tapes from being deleted for a fixed amount of time, or even indefinitely. 
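The WORM tape type and Tape Retention Lock mentioned above can be configured through the Storage Gateway API. A hedged boto3 sketch, assuming a placeholder gateway ARN and a custom tape pool used as the archival target:

import uuid
import boto3

sgw = boto3.client("storagegateway")

# Custom tape pool with a compliance retention lock (names are placeholders).
pool = sgw.create_tape_pool(
    PoolName="compliance-archive",
    StorageClass="DEEP_ARCHIVE",
    RetentionLockType="COMPLIANCE",
    RetentionLockTimeInDays=365,
)

# Create WORM virtual tapes on the gateway; on eject they are archived into the
# retention-locked pool created above (pool ARN used as PoolId, an assumption).
sgw.create_tapes(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    TapeSizeInBytes=2 * 1024**4,          # 2 TiB per tape (100 GiB to 15 TiB allowed)
    ClientToken=str(uuid.uuid4()),
    NumTapesToCreate=5,
    TapeBarcodePrefix="WORM",
    Worm=True,
    PoolId=pool["PoolARN"],
)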
/storagegateway/faqs/;Can I use AWS Storage Gateway with AWS Direct Connect?;Yes, you can use AWS Direct Connect to increase throughput and reduce your network costs by establishing a dedicated network connection between your on-premises gateway and AWS. Note that AWS Storage Gateway efficiently uses your internet bandwidth to help speed up the upload of your on-premises application data to AWS. /storagegateway/faqs/;Can I route my AWS Storage Gateway internet traffic through a local proxy server?;Yes. Volume and Tape Gateways support configuration of a Socket Secure version 5 (SOCKS5) proxy between your on-premises gateway and AWS. File Gateway supports configuration of a HyperText Transfer Protocol (HTTP) proxy. /storagegateway/faqs/;Can I deploy a Storage Gateway on my private non-routable network? Does Storage Gateway support AWS PrivateLink?;Yes. You can deploy a Storage Gateway on a private, non-routable network if that network is connected to your Amazon VPC via DX or VPN. Storage Gateway traffic will be routed via VPC endpoints powered by AWS PrivateLink, a technology that enables private connectivity between AWS services using Elastic Network Interfaces (ENI) with private IPs in your VPCs. To learn more about PrivateLink, visit the PrivateLink documentation. To set up AWS PrivateLink for Storage Gateway, visit the AWS PrivateLink for Storage Gateway documentation. /storagegateway/faqs/;Does Storage Gateway support AWS PrivateLink for all types of gateways?;Yes, the service supports PrivateLink for all gateway types (File/Volume/Tape). /storagegateway/faqs/;What is the cost for using VPC endpoints with Storage Gateway?;You will be billed for each hour that your VPC endpoint remains provisioned. Data processing charges also apply for each gigabyte processed through the VPC endpoint regardless of the traffic’s source or destination. /storagegateway/faqs/;How do I activate gateways that are connected to AWS via AWS PrivateLink?;PrivateLink-enabled gateways can be activated through the AWS Console if your web browser has access to both the internet and your private network, or via the CLI in the Region in which they are based. /storagegateway/faqs/;How can I use PrivateLink with File Gateway?;To use File Gateway on-premises with PrivateLink and private virtual interfaces (VIFs) to access your Amazon S3 buckets, you will need to set up an Amazon EC2 based proxy server. In order to access Amazon S3 over a private network, you need to use S3’s gateway endpoints, and these endpoints are not directly accessible from on-premises environments. The proxy server will provide access through the VPC endpoint for S3, making it accessible to an on-premises File Gateway. We recommend using an EC2 instance family that is optimized for network bandwidth. /storagegateway/faqs/;Can a File Gateway use a VPC endpoint in one region and access an S3 bucket in another region?;No. /storagegateway/faqs/;How can I use PrivateLink with Volume Gateways and Tape Gateways?;Volume and Tape Gateways connect directly to AWS services through the Storage Gateway VPC endpoint without the need for a proxy to S3. /storagegateway/faqs/;Can I use AWS PrivateLink with my Storage Gateway Hardware Appliance?;Yes, but the appliance must be activated before it is moved to the private network. /storagegateway/faqs/;What performance can I expect?;The AWS Storage Gateway sits between your applications and Amazon storage services. 
The performance you experience depends on the host platform (hardware appliance, virtual machine, Amazon EC2 instance) you are using to run Storage Gateway software, along with a number of other factors. These include the network bandwidth between your iSCSI initiator or NFS client and gateway, the speed and configuration of your underlying local disks, the configuration of your VM, the amount of local storage allocated to your gateway, and the bandwidth between your gateway and Amazon storage. Our technical documentation provides guidance on how to optimize your AWS Storage Gateway environment for best performance. /storagegateway/faqs/;What are the minimum hardware and software requirements for the AWS Storage Gateway?;For running AWS Storage Gateway on a virtual machine or an Amazon EC2 instance, see the requirements section in the AWS Storage Gateway User Guide. AWS Storage Gateway is also available as a Hardware Appliance with pre-validated specifications. /storagegateway/faqs/;What type of data reduction does AWS Storage Gateway perform?;Volume and Tape Gateways perform compression of data in-transit and at-rest which can reduce both data transfer and storage charges. The AWS Storage Gateway only uploads data that has changed, minimizing the amount of data sent over the Internet. /storagegateway/faqs/;Does AWS Storage Gateway support network bandwidth throttling?;Yes, you can throttle network bandwidth used by the gateway to synchronize data with AWS based on a schedule for Volume and Tape Gateways. You can specify day of the week, time, and bandwidth rates for inbound and outbound traffic. /storagegateway/faqs/;How do I monitor my gateway?;"You can use Amazon CloudWatch to monitor the performance metrics and alarms for your gateway, giving you insight into storage, bandwidth, throughput, and latency. These metrics and alarms are accessible directly from CloudWatch; or by following links in the AWS Storage Gateway Console, which take you directly to the CloudWatch metrics or alarms for the resource being viewed. Please refer to the CloudWatch details and pricing pages for additional information." /storagegateway/faqs/;How can I measure the cache performance of my gateway?;You can use Amazon CloudWatch metrics including CachePercentDirty, CacheHitPercent, CacheFree, and CachePercentUsed. These can be viewed by following the Monitoring link on the gateway details tab in the AWS Storage Gateway Console. /storagegateway/faqs/;How can I measure the bandwidth used by my gateway?;You can use Amazon CloudWatch metrics including CloudBytesUploaded and CloudBytesDownloaded. /storagegateway/faqs/;How can I create CloudWatch Alarms for my gateway?;You can create recommended Amazon CloudWatch alarms when creating a new gateway or after creating a new gateway from the AWS Storage Gateway console. You can also create alarms for your gateway in the Amazon CloudWatch console. /storagegateway/faqs/;How does the AWS Storage Gateway manage updates?;AWS Storage Gateway periodically deploys important updates and software patches to your gateway virtual machine (VM). You can configure a weekly maintenance schedule allowing you to control when these updates will be applied to your gateway. Alternatively, you can apply updates manually when they are made available, either through the AWS Storage Gateway Console or API. Updates should take only a few minutes to complete. For more information, please visit the Managing Gateway Updates section of our documentation. 
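The cache metrics and bandwidth throttling described above can be queried and configured programmatically. A minimal boto3 sketch, assuming placeholder gateway identifiers; the CloudWatch dimensions must match your own gateway for data to come back:

from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch")
sgw = boto3.client("storagegateway")

gateway_arn = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE"  # placeholder

# Pull the last 24 hours of cache hit rate. Storage Gateway metrics live in the
# AWS/StorageGateway namespace; GatewayId/GatewayName below are placeholders.
stats = cw.get_metric_statistics(
    Namespace="AWS/StorageGateway",
    MetricName="CacheHitPercent",
    Dimensions=[{"Name": "GatewayId", "Value": "sgw-EXAMPLE"},
                {"Name": "GatewayName", "Value": "my-file-gateway"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)
print(len(stats["Datapoints"]), "hourly datapoints")

# Cap the upload bandwidth the gateway uses to synchronize data with AWS
# (supported for Volume and Tape Gateways).
sgw.update_bandwidth_rate_limit(
    GatewayARN=gateway_arn,
    AverageUploadRateLimitInBitsPerSec=100 * 1000 * 1000,  # roughly 100 Mbit/s
)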
/storagegateway/faqs/;How will I be billed for my use of AWS Storage Gateway?;There are 3 elements to how you will be billed for AWS Storage Gateway: Storage, requests, and data transfer. For detailed pricing information, please visit the AWS Storage Gateway Pricing page. /storagegateway/faqs/;How will I be charged for file storage when using a File Gateway?;File Gateways store data directly in Amazon S3. You are billed by Amazon S3 for the objects stored and requests made by your File Gateway. For more information, please visit the Amazon S3 Pricing page. /storagegateway/faqs/;How will I be charged for volume or virtual tape storage when using a volume or Tape Gateway?;You are billed for the amount of volume and virtual tape data you store in AWS. This fee is prorated daily and prices vary by region. You are only billed for the portion of volume or virtual tape capacity that you use, not for the provisioned size of the resource. All volume and virtual tape data is compressed before it is transferred to AWS by the gateway, which can reduce your storage charges. For detailed pricing information, please visit the AWS Storage Gateway Pricing page. /storagegateway/faqs/;How will I be charged for EBS snapshots taken from my AWS Storage Gateway volumes?;EBS snapshots taken from your Storage Gateway volumes are stored and billed by Amazon EBS. When taking a new snapshot only the data that has changed since your last snapshot is stored to reduce your storage charges. For more information, please visit the Amazon EBS Pricing page. /storagegateway/faqs/;How will I be charged for reading and writing data?;When your gateway writes data to AWS you will be charged at a flat rate of $0.01 per GB of data written to AWS up to a monthly maximum of no more than $125 per gateway. There is no charge for reading data from AWS. Since the gateway performs caching, bandwidth optimization, and, for Volume and Tape Gateways, compression, the amount of data written to AWS may be less than the amount of data written to the gateway by your application. You can monitor the amount of data written by your gateway to AWS through the provided Amazon CloudWatch metrics and you can configure bandwidth limits on your gateway to manage your costs. /storagegateway/faqs/;How will I be charged when retrieving data on an archived virtual tape?;You are charged, when retrieving a virtual tape that has been archived in S3 Glacier, at a flat rate of $0.01 per GB of data stored on the tape. For example, retrieving 5 tapes that contain 100 GB each would cost 5 x 100GB x $0.01 = $5.00. /storagegateway/faqs/;How will I be charged for deleting an archived virtual tape?;If a virtual tape is deleted within three months of being archived in S3 Glacier or within six months of being archived in S3 Glacier Deep Archive, you will be charged an early deletion fee. If the virtual tape has been stored for three months or longer in S3 Glacier or for six months or longer in S3 Glacier Deep Archive, there is no charge for deletion. /storagegateway/faqs/;How am I charged for virtual tapes I store in S3 Glacier Deep Archive?;Virtual tapes stored in S3 Glacier Deep Archive will be charged S3 Glacier Deep Archive storage class rate. You can visit Storage Gateway pricing webpage to review Tape Gateway pricing. 
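As a quick illustration of the write-charge math above, a small Python sketch using hypothetical figures:

# $0.01 per GB written to AWS, capped at $125 per gateway per month (rates from the FAQ above).
gb_written_this_month = 9_500          # hypothetical amount uploaded by one gateway
charge = min(gb_written_this_month * 0.01, 125.0)
print(f"Estimated data-write charge: ${charge:.2f}")   # -> $95.00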
/storagegateway/faqs/;How will the virtual tapes I store in Deep Archive Pool, associated with S3 Glacier Deep Archive storage class, show up on my AWS bill and in the AWS Cost Management tool?;The usage and cost for virtual tapes you store in Deep Archive Pool will show up as an independent service line item on your monthly AWS bill under AWS Storage Gateway Deep Archive, separate from your AWS Storage Gateway usage and costs. However, if you are using the AWS Cost Management tool, usage and cost for virtual tapes you store in Deep Archive Pool will be included under AWS Storage Gateway in your detailed monthly spend reports, and not broken out as a separate service line item. /storagegateway/faqs/;How will I be charged for moving a virtual tape archived in S3 Glacier to S3 Glacier Deep Archive?;For the AWS US East (N. Virginia) Region, when moving a virtual tape that has been archived in S3 Glacier to S3 Glacier Deep Archive, you are charged at a rate of $0.032 per GB of data stored on the tape. For example, moving a 100 GB tape archived in S3 Glacier to S3 Glacier Deep Archive will cost 100 GB x $0.032/GB = $3.20. If you move a tape that’s archived for less than 90 days in S3 Glacier to S3 Glacier Deep Archive, you are also charged an early deletion fee for tape storage in S3 Glacier. /storagegateway/faqs/;How will I be charged for network data transfer to and from AWS when using AWS Storage Gateway?;You are billed for Internet data transfer for each GB downloaded from AWS to your gateway. All data transfer for uploading to AWS is free. /storagegateway/faqs/;How can I tell how much storage I am going to be billed for?;The Billing and Cost Management console shows an estimate of month-to-date usage for each service, including AWS Storage Gateway volumes and virtual tapes. For a breakdown of usage by individual volume or virtual tape, Detailed Billing Reports enable you to see usage for each resource on a daily basis. /storagegateway/faqs/;When using File Gateway, will I incur S3 request charges?;You will pay for the S3 requests made by File Gateway on your behalf to store and retrieve your files in S3 as objects. The gateway caches data up to the capacity of the local disks you allocate, which can help reduce costs for data retrieval. /storagegateway/faqs/;Will I incur CloudWatch charges when using File Gateway audit logs?;You will be charged standard rates for Amazon CloudWatch Logs, Amazon CloudWatch Events, and Amazon CloudWatch Metrics if you configure File Gateway audit logs. /storagegateway/faqs/;When does each monthly billing cycle begin?;The billing system follows Coordinated Universal Time (UTC). The calendar month begins at midnight UTC on the first day of every month. /storagegateway/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of the Asia Pacific (Tokyo) Region is subject to Japanese Consumption Tax. /storagegateway/faqs/;How much does the hardware appliance cost?;Please refer to the Storage Gateway pricing page for the current pricing. You may also request a sales quote from the AWS Storage Gateway console. /storagegateway/faqs/;How do I pay for the hardware appliance?;You purchase the hardware appliance through a streamlined procurement process that is integrated in the AWS Console. You will need to submit a purchase order after receiving a sales quote, or you can arrange for pre-payment. 
/storagegateway/faqs/;Can I lease or rent the hardware appliance?;No. You pay the full price at the time of purchase. /storagegateway/faqs/;Does AWS Premium Support cover the AWS Storage Gateway?;Yes, AWS Premium Support covers issues related to your use of the AWS Storage Gateway. Please see the AWS Premium Support detail page for further information and pricing. /storagegateway/faqs/;What other support options are available?;You can tap into the breadth of existing AWS community knowledge through the AWS Storage Gateway discussion forum. /storagegateway/faqs/;Who do I call for support related to the hardware appliance?;You contact AWS Support, who provides AWS Storage Gateway software and service support. AWS Support also coordinates all hardware-related cases with the hardware manufacturer's support team. We recommend that you purchase AWS Premium Support. /storagegateway/faqs/;Where do I find the service tag for the hardware appliance (also known as the serial number)?;The service tag for the hardware appliance can be found in the Hardware view of the AWS Storage Gateway console. /storagegateway/faqs/;What if there is a hardware problem with the hardware appliance?;AWS Support works with the hardware manufacturer for hardware support. Hardware support is included with your appliance purchase and includes 36 months of 24/7 phone support and next-business-day, on-site service for parts replacement. /storagegateway/faqs/;What are the warranty terms of the hardware appliance?;The hardware appliance comes with a 3-year warranty and next-business-day onsite service for parts replacement provided by the hardware manufacturer. You can find warranty information here. /aws-transfer-family/faqs/;What is AWS Transfer Family?;The AWS Transfer Family offers fully managed support for the transfer of files over SFTP, AS2, FTPS, and FTP directly into and out of Amazon S3 or Amazon EFS. You can seamlessly migrate, automate, and monitor your file transfer workflows by maintaining existing client-side configurations for authentication, access, and firewalls — so nothing changes for your customers, partners, and internal teams, or their applications. /aws-transfer-family/faqs/;What is SFTP?;SFTP stands for Secure Shell (SSH) File Transfer Protocol, a network protocol used for secure transfer of data over the internet. The protocol supports the full security and authentication functionality of SSH, and is widely used to exchange data between business partners in a variety of industries including financial services, healthcare, media and entertainment, retail, advertising, and more. /aws-transfer-family/faqs/;What is FTP?;FTP stands for File Transfer Protocol, a network protocol used for the transfer of data. FTP uses a separate channel for control and data transfers. The control channel remains open until it is terminated or times out due to inactivity, while the data channel is active for the duration of the transfer. FTP uses cleartext and does not support encryption of traffic. /aws-transfer-family/faqs/;What is FTPS?;FTPS stands for File Transfer Protocol over SSL, and is an extension to FTP. It uses Transport Layer Security (TLS) and Secure Sockets Layer (SSL) cryptographic protocols to encrypt traffic. FTPS allows encryption of both the control and data channel connections either concurrently or independently. 
/aws-transfer-family/faqs/;What is AS2?;AS2 stands for Applicability Statement 2, a network protocol used for the secure and reliable transfer of business-to-business data over the public internet over HTTP/HTTPS (or any TCP/IP network). /aws-transfer-family/faqs/;Why should I use the AWS Transfer Family?;AWS Transfer Family supports multiple protocols for business-to-business (B2B) file transfers so data can easily and securely be exchanged across stakeholders, third-party vendors, business partners, or customers. Without using Transfer Family, you have to host and manage your own file transfer service which requires you to invest in operating and managing infrastructure, patching servers, monitoring for uptime and availability, and building one-off mechanisms to provision users and audit their activity. The AWS Transfer Family solves these challenges by providing fully managed support for SFTP, FTPS, and FTP that can reduce your operational burden, while preserving your existing transfer workflows for your end users. AWS Transfer Family managed file-processing workflows enables you to create, automate, and monitor your file transfer and data processing without maintaining your own code or infrastructure. The service stores transferred data as objects in your Amazon S3 bucket or as files in your Amazon EFS file system, so you can extract value from them in your data lake, or for your Customer Relationship Management (CRM) or Enterprise Resource Planning (ERP) workflows, or for archiving in AWS. /aws-transfer-family/faqs/;What are the benefits of using the AWS Transfer Family?;The AWS Transfer Family provides you with a fully managed, highly available file transfer service with auto-scaling capabilities, eliminating the need for you to manage file transfer related infrastructure. Your end users’ workflows remain unchanged, while data uploaded and downloaded over the chosen protocols is stored in your Amazon S3 bucket or Amazon EFS file system. With the data in AWS, you can now easily use it with the broad array of AWS services for data processing, content management, analytics, machine learning, and archival, in an environment that can meet your compliance requirements. /aws-transfer-family/faqs/;How do I get started with AWS Transfer for SFTP, FTPS, and FTP?;In 3 simple steps, you get an always-on server endpoint enabled for SFTP, FTPS, and/or FTP. First, you select the protocol(s) you want to enable your end users to connect to your endpoint. Next, you configure user access using AWS Transfer Family built-in authentication manager (service managed), Microsoft Active Directory (AD), or by integrating your own or a third party identity provider such as Okta or Microsoft AzureAD (“BYO” authentication). Finally, select the server to access S3 buckets or EFS file systems. Once the protocol(s), identity provider, and the access to file systems are enabled, your users can continue to use their existing SFTP, FTPS, or FTP clients and configurations, while the data accessed is stored in the chosen file systems. /aws-transfer-family/faqs/;How do I get started with AWS Transfer for AS2?;You can start using AS2 to exchange messages with your trading partners in three simple steps: First, import your certificates and private keys and your trading partners’ certificate and certificate chain. Next, create profiles using yours and your partner’s AS2 IDs. Finally, pair up your own and your partner’s profile information using an agreement for receiving data and connector for sending data. 
At this point you are ready to exchange messages with your trading partner’s AS2 server. /aws-transfer-family/faqs/;What is the difference between SFTP and FTPS? Which should I use when?;FTPS and SFTP can both be used for secure transfers. Since they are different protocols, they use different clients and technologies to offer a secure tunnel for transmission of commands and data. SFTP is a newer protocol and uses a single channel for commands and data, requiring fewer port openings than FTPS. /aws-transfer-family/faqs/;What is the difference between the SFTP, FTPS, and AS2 protocols? When should I use the AS2 protocol?;SFTP, FTPS, and AS2 can all be used for secure transfers. Since they are different protocols, they use different clients and technologies to offer secure transmission of data. Aside from support for encrypted and signed messages, AS2’s built-in mechanism for Message Disposition Notification (MDN) alerts the sender that the message has been successfully received and decrypted by the recipient. This provides proof to the sender that their message was delivered without being tampered with in transit. Use of AS2 is prevalent in retail, e-commerce, payments, and supply chain workflows for interacting with business partners who are also able to use AS2, so that messages are securely transmitted and delivered. AS2 provides you with options to ensure identity of the sender and receiver, integrity of the message, and confirm whether the message was successfully delivered and decrypted by the receiver. /aws-transfer-family/faqs/;Can my users continue to use their existing file transfer clients and applications?;Yes, any existing file transfer client application will continue to work as long as you have enabled your endpoint for the chosen protocols. Examples of commonly used SFTP/FTPS/FTP clients include WinSCP, FileZilla, CyberDuck, lftp, and OpenSSH clients. /aws-transfer-family/faqs/;Can I use CloudFormation to automate deployment of my servers and users?;Yes, you can deploy CloudFormation templates to automate creation of your servers and users or for integrating an identity provider. Refer to the usage guide for using AWS Transfer resources in CloudFormation templates. /aws-transfer-family/faqs/;Can my users use SCP or HTTPS to transfer files using this service?;No, your users will need to use SFTP, AS2, FTPS, or FTP to transfer files. Most file transfer clients offer one of these protocols as an option that will need to be selected during authentication. Please let us know via AWS Support or through your AWS account team of any specific protocols you would like to see supported. /aws-transfer-family/faqs/;Can I use my corporate domain name (sftp.mycompanyname.com) to access my endpoint?;Yes. If you already have a domain name, you can use Amazon Route 53 or any DNS service to route your users’ traffic from your registered domain to the server endpoint in AWS. Refer to the documentation on how the AWS Transfer Family uses Amazon Route 53 for custom domain names (applicable to internet facing endpoints only). /aws-transfer-family/faqs/;Can I still use the service if I don’t have a domain name?;Yes, if you don’t have a domain name, your users can access your endpoint using the hostname provided by the service. Alternatively, you can register a new domain using the Amazon Route 53 console or API, and route traffic from this domain to the service supplied endpoint hostname.
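As a hedged illustration of the custom-domain answer above, the sketch below routes a registered domain to the service-supplied endpoint hostname using an Amazon Route 53 CNAME record via boto3; the hosted zone ID, domain name, and server hostname are placeholders.

# Minimal sketch: point sftp.mycompanyname.com at the Transfer Family server hostname.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone for the domain
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "sftp.mycompanyname.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    # placeholder service-supplied endpoint hostname
                    {"Value": "s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com"}
                ],
            },
        }]
    },
)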
/aws-transfer-family/faqs/;Can I use my domain that already has a public zone?;Yes, you will need to CNAME the domain to the service supplied endpoint hostname. /aws-transfer-family/faqs/;Can I set up my server to be accessible to resources only within my VPC?;Yes. When you create a server or update an existing one, you have the option to specify whether you want the endpoint to be accessible over the public internet or hosted within your VPC. By using a VPC hosted endpoint for your server, you can restrict it to be accessible only to clients within the same VPC, other VPCs you specify, or in on-premises environments using networking technologies that extend your VPC such as AWS Direct Connect, AWS VPN, or VPC peering. You can further restrict access to resources in specific subnets within your VPC using subnet Network Access Control Lists (NACLs) or Security Groups. Refer to the documentation on creating your server endpoint inside your VPC using AWS PrivateLink for details. /aws-transfer-family/faqs/;Can I use FTP with an internet facing endpoint?;No, when you enable FTP, you will only be able to use the VPC hosted endpoint’s internal access option. If traffic needs to traverse the public network, secure protocols such as SFTP or FTPS should be used. /aws-transfer-family/faqs/;What if I need to use FTP for transfers over the public internet?;The service doesn’t allow you to use FTP over public networks because, when you create a server enabled for FTP, the server endpoint is only accessible to resources within your VPC. If you need to use FTP for exchanging data over the public internet, you can front your server’s VPC endpoint with an internet-facing Network Load Balancer (NLB). To support FTP clients that may not work with this configuration, use your server in PASV mode. /aws-transfer-family/faqs/;Can I use FTP without a VPC?;No. A VPC is required to host FTP server endpoints. Please refer to the documentation for CloudFormation templates to automate creation of VPC resources to host the endpoint during server creation. /aws-transfer-family/faqs/;Can my end users use fixed IP addresses to allowlist access to my server’s endpoint in their firewalls?;Yes. You can enable fixed IPs for your server endpoint by selecting the VPC hosted endpoint for your server and choosing the internet-facing option. This will allow you to attach Elastic IPs (including BYO IPs) directly to the endpoint, which are assigned as the endpoint’s IP addresses. Refer to the section on creating an internet facing endpoint in the documentation: Creating your server endpoint inside your VPC. /aws-transfer-family/faqs/;Can I restrict incoming traffic by end users’ source IP addresses?;Yes. You have three options to restrict incoming traffic by users’ source IP address. If you are hosting your server endpoint within a VPC, refer to this blog post on using Security Groups to allow list source IP addresses, or use the AWS Network Firewall service. If you are using a Transfer server with a public endpoint type and API Gateway to integrate your identity management system, you can also use AWS WAF to allow, block, or rate limit access by your end users’ source IP address. /aws-transfer-family/faqs/;Can I host my server’s endpoint in a shared VPC environment?;Yes. You can deploy your server endpoint in shared VPC environments, which are typically used when segmenting your AWS environment using tools such as AWS Landing Zone for security, cost monitoring, and scalability.
Refer to this blog post on using VPC hosted endpoints in shared VPC environments with AWS Transfer Family. /aws-transfer-family/faqs/;How do I access files stored in an external SFTP or FTPS site?;Refer to this blog on using AWS Fargate to connect to an external SFTP/FTPS site and access your data using AWS Transfer Family. If you are looking for a fully managed solution for connecting to external sites, reach out to us via AWS Support or through your AWS account team. /aws-transfer-family/faqs/;Can I select which cryptographic algorithms can be used when my end users’ clients connect to my server endpoint?;Yes. Based on your security and compliance requirements, you can select one of three security policies to control the cryptographic algorithms that will be advertised by your server endpoints: Transfer-Security-Policy-2018-11 (default), Transfer-Security-Policy-2020-06 (restrictive – no SHA-1 algorithms), and Transfer-FIPS-2020-06 (FIPS compliant algorithms). When your end users’ file transfer clients attempt to connect to your server, only the algorithms specified in the policy will be used to negotiate the connection. Refer to the documentation on pre-defined security policies. /aws-transfer-family/faqs/;Can my end users use fixed IP addresses to access my server whose endpoint type is PUBLIC?;No. Fixed IP addresses that are usually used for firewall whitelisting purposes are currently not supported on the PUBLIC endpoint type. Use VPC hosted endpoints to assign static IP addresses for your endpoint. /aws-transfer-family/faqs/;What IP ranges would my end users need to allow list to access my SFTP server’s endpoint type that is PUBLIC?;If you are using the PUBLIC endpoint type, your users will need to allow list the AWS IP address ranges published here. Refer to the documentation for details on staying up to date with AWS IP Address Ranges. /aws-transfer-family/faqs/;Will my AWS Transfer for SFTP server's host key ever change after I create the server?;No. The server’s host key that is assigned when you create the server remains the same, unless you add a new host key and manually delete the original. /aws-transfer-family/faqs/;What types of SFTP server host keys are supported?;RSA, ED25519, and ECDSA key types are supported for SFTP server host keys. /aws-transfer-family/faqs/;Can I import keys from my current SFTP server so my users do not have to verify the authenticity of my server again?;Yes. You can import a host key when creating a server or import multiple host keys when updating a server. Refer to the documentation on managing host keys for your SFTP-enabled server. /aws-transfer-family/faqs/;How many host keys can I associate with an SFTP server?;You can associate up to 10 host keys per SFTP server. However, only one host key per key type can be used by your end users’ clients to verify the authenticity of your SFTP server in a single session. /aws-transfer-family/faqs/;How can I identify my multiple host keys?;Multiple host keys can be identified using descriptions and tags, which can be added or edited when creating or updating a host key. Each host key also has a unique host key ID as well as an Amazon Resource Name (ARN) that can be used to identify and track the host key. /aws-transfer-family/faqs/;Can multiple host keys be used to verify the authenticity of my SFTP server?;Yes. The oldest host key of each key type can be used to verify the authenticity of an SFTP server.
By adding RSA, ED25519, and ECDSA host keys, 3 separate host keys can be used to identify your SFTP server. /aws-transfer-family/faqs/;Which host keys are used to verify authenticity of my SFTP server?;The oldest host key of each key type is used to verify authenticity of your SFTP server. /aws-transfer-family/faqs/;Can I rotate my SFTP server host keys to ensure secure connections?;Yes. You can rotate your SFTP server host keys at any time by adding and removing host keys. Refer to the documentation on managing host keys for your SFTP-enabled server. /aws-transfer-family/faqs/;How do my end users’ FTPS clients verify the identity of my FTPS server?;When you enable FTPS access, you will need to supply a certificate from AWS Certificate Manager (ACM). This certificate is used by your end user clients to verify the identity of your FTPS server. Refer to the ACM documentation on requesting new certificates or importing existing certificates into ACM. /aws-transfer-family/faqs/;Do you support active and passive modes of FTPS and FTP?;We only support passive mode, which allows your end users’ clients to initiate connections with your server. Passive mode requires fewer port openings on the client side, making your server endpoint more compatible with end users behind protected firewalls. /aws-transfer-family/faqs/;Do you support Explicit and Implicit FTPS modes?;We only support explicit FTPS mode. /aws-transfer-family/faqs/;Can I transfer files over FTPS/FTP protocols if I have a firewall or a router configured between the client and the server?;Yes. File transfers traversing a firewall or a router are supported by default using extended passive connection mode (EPSV). If you are using an FTPS/FTP client that does not support EPSV mode, visit this blog post to configure your server in PASV mode to expand your server’s compatibility to a broad range of clients. /aws-transfer-family/faqs/;Can I customize the login banners for users connecting to my Transfer Family server?;Yes. You can configure your Transfer Family server to display customized banners such as organization policies or terms and conditions to your users. You can also display a customized Message of the Day (MOTD) to users who have successfully authenticated. To learn more, visit the documentation. /aws-transfer-family/faqs/;Can I enable multiple protocols on the same endpoint?;Yes. During setup, you can select the protocol(s) you want to enable for clients to connect to your endpoint. The server hostname, IP address, and identity provider are shared across the selected protocols. Similarly, you can also enable additional protocol support on existing AWS Transfer Family endpoints, as long as the endpoint configuration meets the requirements for all the protocols you intend to use. /aws-transfer-family/faqs/;When should I create separate server endpoints for each protocol vs enable the same endpoint for multiple protocols?;When you need to use FTP (only supported for access within a VPC) and also need to support SFTP, AS2, or FTPS over the internet, you will need a separate server endpoint for FTP. You can use the same endpoint for multiple protocols when you want to use the same endpoint hostname and IP address for clients connecting over multiple protocols. Additionally, if you want to share the same credentials for SFTP and FTPS, you can set up and use a single identity provider for authenticating clients connecting over either protocol.
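The following is a small, assumption-laden sketch of enabling an additional protocol (FTPS) on an existing server via boto3, as described in the multi-protocol answer above; the server ID and ACM certificate ARN are placeholders.

# Minimal sketch: add FTPS alongside SFTP on an existing Transfer Family server.
import boto3

transfer = boto3.client("transfer")

transfer.update_server(
    ServerId="s-1234567890abcdef0",  # placeholder server ID
    Protocols=["SFTP", "FTPS"],      # the full protocol list after the change
    # FTPS requires a certificate from ACM; placeholder ARN below
    Certificate="arn:aws:acm:us-east-1:123456789012:certificate/abcd1234-example",
)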
/aws-transfer-family/faqs/;Can I set up the same end user to access the endpoint over multiple protocols?;Yes, you can provide the same user access over multiple protocols, as long as the credentials specific to the protocol have been set up in your identity provider. If you have enabled FTP, we recommend maintaining separate credentials for FTP. Refer to the documentation for setting up separate credentials for FTP. /aws-transfer-family/faqs/;Why should I maintain separate credentials for FTP users?;Unlike SFTP and FTPS, FTP transmits credentials in cleartext. We recommend isolating FTP credentials from SFTP or FTPS so that, if FTP credentials are inadvertently shared or exposed, your workloads using SFTP or FTPS remain secure. /aws-transfer-family/faqs/;What identity provider options are supported by the service?;The service supports three identity provider options: Service Managed, where you store user identities within the service, Microsoft Active Directory, and Custom Identity Providers, which enable you to integrate an identity provider of your choice. Service Managed authentication is supported for server endpoints that are enabled for SFTP only. /aws-transfer-family/faqs/;How can I authenticate my users using Service Managed authentication?;You can use Service Managed authentication to authenticate your SFTP users using SSH keys. /aws-transfer-family/faqs/;How many SSH keys can I upload per SFTP user? Which key types are supported?;You can upload up to 10 SSH keys per user. RSA, ED25519, and ECDSA keys are supported. /aws-transfer-family/faqs/;Is SSH key rotation supported for service managed authentication?;Yes. Refer to the documentation for details on how to set up key rotation for your SFTP users. /aws-transfer-family/faqs/;Can I use the service managed option for password authentication?;No, storing passwords within the service for authentication is currently not supported. If you need password authentication, use Active Directory by selecting a directory in AWS Directory Service, or follow the architecture described in this blog on Enabling Password Authentication using Secrets Manager. /aws-transfer-family/faqs/;Why should I use the Custom authentication mode?;The Custom mode (“BYO” authentication) enables you to leverage an existing identity provider to manage your end users for all protocol types (SFTP, FTPS, and FTP), enabling easy and seamless migration of your users. Credentials can be stored in your corporate directory or an in-house identity datastore, and you can integrate it for end user authentication purposes. Examples of identity providers include Okta, Microsoft AzureAD, or any custom-built identity provider you may be using as a part of an overall provisioning portal. /aws-transfer-family/faqs/;What options do I have to integrate my identity provider with an AWS Transfer Family server?;To integrate your identity provider with an AWS Transfer Family server, you can use an AWS Lambda function or an Amazon API Gateway endpoint. Use Amazon API Gateway if you need a RESTful API to connect to an identity provider or want to leverage AWS WAF for its geo-blocking and rate limiting capabilities. Visit the documentation to learn more about integrating common identity providers such as Amazon Cognito, Okta, and AWS Secrets Manager.
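To illustrate the service-managed identity provider option described above, here is a minimal boto3 sketch that creates an SFTP user authenticated by an SSH public key; the server ID, IAM role ARN, bucket path, and key material are placeholders.

# Minimal sketch: a service-managed SFTP user whose access is governed by an IAM role.
import boto3

transfer = boto3.client("transfer")

transfer.create_user(
    ServerId="s-1234567890abcdef0",                               # placeholder server ID
    UserName="alice",
    Role="arn:aws:iam::123456789012:role/transfer-user-access",   # placeholder access role
    HomeDirectory="/example-bucket/alice",                        # landing directory in S3
    # placeholder public key; up to 10 keys per user can be added
    SshPublicKeyBody="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... alice@example",
)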
/aws-transfer-family/faqs/;How can I get started with integrating my existing identity provider for Custom authentication?;To get started, you can use the AWS CloudFormation template in the usage guide and supply the necessary information for user authentication and access. Visit the website on custom identity providers to learn more. /aws-transfer-family/faqs/;When setting up my users via a custom identity provider, what information is used to enable access to my users?;Your user will need to provide a username and password (or SSH key) which will be used to authenticate, and access to your data is determined by the AWS IAM Role supplied by the AWS Lambda function or API Gateway used to connect to your identity provider. You will also need to provide home directory information, and it is recommended that you lock your users down to the designated home folder for an additional layer of security and usability. Refer to this blog post on how to simplify your end users’ experience when using a custom identity provider with AWS SFTP. /aws-transfer-family/faqs/;Can I apply access controls based on the client source IP?;Yes. The client source IP is passed to your identity provider when you use AWS Lambda or API Gateway to connect a custom identity provider. This enables you to allow, deny, or limit access based on the IP addresses of clients to ensure that your data is accessed only from IP addresses that you have specified as trusted. /aws-transfer-family/faqs/;Are anonymous users supported?;No, anonymous users are currently not supported for any of the protocols. /aws-transfer-family/faqs/;How do I uniquely identify my AS2 trading partner?;Your trading partner is uniquely identified using their AS2 Identifier (AS2 ID). Similarly, your trading partners identify your messages using your AS2 ID. /aws-transfer-family/faqs/;Which existing features of AWS Transfer Family are available for AS2? Which features are not available?;You can use AWS Transfer Family’s existing support for Amazon S3, networking features (VPC endpoints, Security Groups, and Elastic IPs), and access controls (AWS IAM) for AS2, as you could for SFTP, FTPS, and FTP. User authentication, logical directories, custom banners, and Amazon EFS as a storage backend are not supported for AS2. AWS Transfer Family support for AS2 is currently Drummond Pre-Certified and will become Drummond Certified in 2023. Visit this announcement to learn more. /aws-transfer-family/faqs/;What are the steps involved in message transmission using the AS2 protocol?; No. /aws-transfer-family/faqs/;What are the options available for message transmission?; No. /aws-transfer-family/faqs/;Do you support synchronous (Sync) and asynchronous (Async) MDNs? When should I use which option?; No. /aws-transfer-family/faqs/;Can I archive the received MDNs (as the sender who requested them)?; No. /aws-transfer-family/faqs/;How do I notify AWS Transfer Family when a message is ready for delivery to my trading partner’s endpoint?; No. /aws-transfer-family/faqs/;Can I isolate each of my trading partners to use different inbound and outbound locations for messages?; No. /aws-transfer-family/faqs/;Can I use my trading partner's existing keys and certificates with my AWS Transfer Family AS2 endpoint?; No. /aws-transfer-family/faqs/;What is managed workflows for post-upload processing?;AWS Transfer Family managed workflows make it easier for you to create, run, and monitor post-upload processing for file transfers over SFTP, FTPS, and FTP.
Using this feature, you can save time with low code automation to coordinate all the necessary tasks such as copying, tagging, and decrypting of files. You can also add custom steps to scan for PII, viruses/malware, or other errors such as incorrect file format or type, enabling you to quickly detect anomalies and meet your compliance requirements. /aws-transfer-family/faqs/;Why do I need managed workflows?; Managed workflows allow you to easily preprocess data before it is consumed by your downstream applications by orchestrating file-processing tasks such as moving files to user-specific folders, encrypting files in-transit, malware scanning, and tagging. You can deploy workflows using Infrastructure as Code (IaC), enabling you to quickly replicate and standardize common post-upload file processing tasks spanning multiple business units in your organization. You can have granular control by defining managed workflows that are triggered only on fully uploaded files, to ensure data quality is maintained, and by defining managed workflows that are triggered for partially uploaded files, to configure processing for incomplete uploads. Built-in exception handling allows you to quickly react to file-processing outcomes in case of errors or exceptions in the workflow execution, helping you maintain your business and technical SLAs, while offering you control over how to handle failures. Lastly, each workflow step produces detailed logs, which can be audited to trace the data lineage. /aws-transfer-family/faqs/;How do I get started with workflows?;First, set up your workflow to contain actions such as copying and tagging, along with any custom steps you need, in a sequence based on your requirements. Next, map the workflow to a server, so on file arrival, actions specified in this workflow are evaluated and triggered in real time. To learn more, visit the documentation, watch this demo on getting started with managed workflows, or deploy a cloud-native file-transfer platform using this blog post. /aws-transfer-family/faqs/;Can I use the same workflow setup across multiple servers?; The following common actions are available once a transfer server has received a file from the client: /aws-transfer-family/faqs/;What actions can I take on my files using workflows?;Decrypt files using PGP keys. Move or copy data from where it arrives to where it needs to be consumed. Delete the original file after archiving or copying it to a new location. Tag the file based on its contents so it can be indexed and searched by downstream services (S3 only). Apply any custom file processing logic by supplying your own Lambda function as a custom step to your workflow, for example, checking compatibility of the file type, scanning files for malware, detecting Personally Identifiable Information (PII), and metadata extraction before ingesting files to your data analytics. /aws-transfer-family/faqs/;Can I use workflows to automatically decrypt files using PGP?;Yes. You can use a pre-built, fully managed workflow step for PGP decryption of files. For more information, refer to managed workflows documentation. /aws-transfer-family/faqs/;Can I select which file to process at each workflow step?; Workflow executions can be monitored using AWS CloudWatch metrics such as the total number of workflow executions, successful executions, and failed executions. Using the AWS Management Console, you can also search and view real-time status of in-progress workflow executions.
Use CloudWatch logs to get detailed logging of workflow executions. /aws-transfer-family/faqs/;How do I monitor my workflows?; You can use the custom processing step to trigger notifications to EventBridge or Simple Notification Service (SNS) and get notified when file processing is complete. Additionally, you can also use CloudWatch logs from Lambda executions to get notifications. /aws-transfer-family/faqs/;What types of notifications can I receive?; AWS Step Functions is a serverless orchestration service that lets you combine AWS Lambda with other services to define the execution of business applications in simple steps. To perform file-processing steps using AWS Step Functions, you use AWS Lambda functions with Amazon S3’s event triggers to assemble your own workflows. Managed workflows provide a framework to easily orchestrate a linear sequence of processing and differ from existing solutions in the following ways: 1) You can granularly define workflows to be executed only on full file uploads, as well as workflows to be executed only on partial file uploads, 2) workflows can be triggered automatically for S3 as well as EFS (which doesn’t offer post upload events), and 3) customers can get end-to-end visibility into their file transfers and processing in CloudWatch logs. /aws-transfer-family/faqs/;Can I trigger workflow actions based on the exchange of messages over AS2?;No, you cannot currently use managed workflows with AS2. /aws-transfer-family/faqs/;Can I trigger workflow actions on user downloads?; No. Workflows currently process one file per execution. /aws-transfer-family/faqs/;How does AWS Transfer Family communicate with Amazon S3?;The data transfer between AWS Transfer Family servers and Amazon S3 happens over internal AWS networks and doesn’t traverse the public internet. Because of this, you do not need to use AWS PrivateLink for data transferred from the AWS Transfer Family server to Amazon S3. The Transfer Family service doesn’t require AWS PrivateLink endpoints for Amazon S3 to keep traffic off the public internet, and it does not use such endpoints to communicate with storage services. This all assumes that the AWS storage service and the Transfer Family server are in the same region. /aws-transfer-family/faqs/;Why do I need to provide an AWS IAM Role and how is it used?;AWS IAM is used to determine the level of access you want to provide your users. This includes the operations you want to enable on their client and which Amazon S3 buckets they have access to – whether it’s the entire bucket or portions of it. /aws-transfer-family/faqs/;Why do I need to provide home directory information and how is it used?;The home directory you set up for your user determines their login directory. This would be the directory path that your user’s client will place them in as soon as they are successfully authenticated into the server. You will need to ensure that the IAM Role supplied provides user access to the home directory. /aws-transfer-family/faqs/;I have 100s of users who have similar access settings but to different portions of my bucket. Can I set them up using the same IAM Role and policy to enable their access?;Yes. You can assign a single IAM Role for all your users and use logical directory mappings that specify which absolute Amazon S3 bucket paths you want to make visible to your end users and how these paths are presented to them by their clients. Visit this blog on how to 'Simplify Your AWS SFTP/FTPS/FTP Structure with Chroot and Logical Directories'.
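As a non-authoritative sketch of the logical directory mapping approach referenced above, the boto3 call below "chroots" a user into a user-specific S3 prefix while reusing a shared IAM role; the server ID, role ARN, and bucket names are placeholders.

# Minimal sketch: a user who sees "/" but actually lands in their own S3 prefix.
import boto3

transfer = boto3.client("transfer")

transfer.create_user(
    ServerId="s-1234567890abcdef0",                                  # placeholder
    UserName="bob",
    Role="arn:aws:iam::123456789012:role/shared-transfer-role",      # placeholder shared role
    HomeDirectoryType="LOGICAL",
    HomeDirectoryMappings=[
        # the client-visible path "/" maps to the user's own prefix in the bucket
        {"Entry": "/", "Target": "/example-bucket/home/bob"},
    ],
)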
/aws-transfer-family/faqs/;How are files stored in my Amazon S3 bucket transferred using AWS Transfer?;Files transferred over the supported protocols are stored as objects in your Amazon S3 bucket, and there is a one-to-one mapping between files and objects, enabling native access to these objects using AWS services for processing or analytics. /aws-transfer-family/faqs/;How are Amazon S3 objects stored in my bucket presented to my users?;After successful authentication, based on your users’ credentials, the service presents Amazon S3 objects and folders as files and directories to your users’ transfer applications. /aws-transfer-family/faqs/;What file operations are supported? What operations are not supported?;Common commands to create, read, update, and delete files and directories are supported. Files are stored as individual objects in your Amazon S3 bucket. Directories are managed as folder objects in S3, using the same syntax as the S3 console. /aws-transfer-family/faqs/;Can I control which operations my users are allowed to perform?;Yes, you can enable/disable file operations using the AWS IAM role you have mapped to their username. Refer to the documentation on 'Creating IAM Policies and Roles to control your end users’ access'. /aws-transfer-family/faqs/;Can I provide my end users access to more than one Amazon S3 bucket?;Yes. The bucket(s) your user can access is determined by the AWS IAM Role, and the optional scope-down policy you assign for that user. You can only use a single bucket as the home directory for the user. /aws-transfer-family/faqs/;Can I use S3 Access Points with AWS Transfer Family to simplify user access to shared datasets?;Yes. You can use S3 Access Point aliases with AWS Transfer Family to provide granular access to a large set of data without having to manage a single bucket policy. S3 Access Point aliases combined with AWS Transfer Family logical directories enable you to create fine-grained access control for different applications, teams, and departments, while reducing the overhead of managing bucket policies. To learn more and get started, visit the blog post on enhancing data access control with AWS Transfer Family and Amazon S3 Access Points. /aws-transfer-family/faqs/;Can I create a server using AWS Account A and map my users to Amazon S3 buckets owned by AWS Account B?;Yes. You can use the CLI and API to set up cross account access between your server and the buckets you want to use for storing files transferred over the supported protocols. The Console drop down will only list buckets in Account A. Additionally, you’d need to make sure the role being assigned to the user belongs to Account A. /aws-transfer-family/faqs/;Can I automate processing of a file once it has been uploaded to Amazon S3?;Yes, you can use AWS Transfer Family managed workflows to create, automate, and monitor file processing after your files are uploaded to Amazon S3. Using managed workflows, you can pre-process your files before ingesting them to your data analytics and processing systems, without the overhead of managing your own custom code and infrastructure. Visit the documentation to learn about AWS Transfer Family managed workflows. /aws-transfer-family/faqs/;Can I customize rules for processing based on the user uploading the file?;Yes. When your user uploads a file, the username and the server id of the server used for the upload are stored as part of the associated S3 object’s metadata. You can use this information for post upload processing.
Refer to the documentation on information you can use for post upload processing. /aws-transfer-family/faqs/;How do I set up my EFS file system to work with AWS Transfer Family?;Prior to setting up AWS Transfer Family to work with an Amazon EFS file system, you will need to set up ownership of files and folders using the same POSIX identities (user id/group id) you plan to assign to your AWS Transfer Family users. Additionally, if you are accessing file systems in a different account, resource policies must also be configured on your file system to enable cross account access. Refer to this blog post for step-by-step instructions on using AWS Transfer Family with EFS. /aws-transfer-family/faqs/;How does AWS Transfer Family communicate with Amazon EFS?;The data transfer between AWS Transfer Family servers and Amazon EFS happens over internal AWS networks and doesn’t traverse the public internet. Because of this, you do not need to use AWS PrivateLink for data transferred from the AWS Transfer Family server to Amazon EFS. The Transfer Family service doesn’t require AWS PrivateLink endpoints for Amazon EFS to keep traffic off the public internet, and it does not use such endpoints to communicate with storage services. This all assumes that the AWS storage service and the Transfer Family server are in the same region. /aws-transfer-family/faqs/;How do I provide access to my users to upload/download files to/from my file systems?;Amazon EFS uses POSIX IDs which consist of an operating system user id, group id, and secondary group id to control access to a file system. When setting up your user in the AWS Transfer Family console/CLI/API, you will need to specify the username, user’s POSIX configuration, and an IAM role to access the EFS file system. You will also need to specify an EFS file system id and optionally a directory within that file system as your user’s landing directory. When your AWS Transfer Family user authenticates successfully using their file transfer client, they will be placed directly within the specified home directory, or root of the specified EFS file system. Their operating system POSIX id will be applied to all requests made through their file transfer clients. As an EFS administrator, you will need to make sure the files and directories you want your AWS Transfer Family users to access are owned by their corresponding POSIX ids in your EFS file system. Refer to the documentation to learn more on configuring ownership of sub-directories in EFS. /aws-transfer-family/faqs/;How are files transferred over the protocols stored in my Amazon EFS file systems?;Files transferred over the enabled protocols are directly stored in your Amazon EFS file systems and will be accessible via a standard file system interface or from AWS services that can access Amazon EFS file systems. /aws-transfer-family/faqs/;What file operations are supported over the protocols when using Amazon S3 and Amazon EFS?;SFTP/FTPS/FTP commands to create, read, update, and delete files, directories, and symbolic links are supported. Refer to the table below on supported commands for EFS as well as S3. /aws-transfer-family/faqs/;How can I control which files and folders my users have access to and which operations they are allowed to and not allowed to perform?;The IAM policy you supply for your AWS Transfer Family user determines if they have read-only, read-write, or root access to your file system.
Additionally, as a file system administrator, you can set up ownership and grant access to files and directories within your file system using their user id and group id. This applies to users whether they are stored within the service (service managed) or within your identity management system (“BYO Auth”). /aws-transfer-family/faqs/;Can I restrict each of my users to access different directories within my file system and only access files within those directories?;Yes, when you set up your user, you can specify different file systems and directories for each of your users. On successful authentication, EFS will enforce this directory for every file system request made using the enabled protocols. /aws-transfer-family/faqs/;Can I hide the name of the file system from being exposed to my user?;Yes, using AWS Transfer Family logical directory mappings, you can restrict your end users’ view of directories in your file systems by mapping absolute paths to end user visible path names. This also includes being able to “chroot” your user to their designated home directory. /aws-transfer-family/faqs/;Are symbolic links supported?;Yes, if symbolic links are present in directories accessible to your user and your user tries to access them, the links will be resolved to their targets. Symbolic links are not supported when you use logical directory mappings to set up your users' access. /aws-transfer-family/faqs/;Can I provide an individual SFTP/FTPS/FTP user access to more than one file system?;Yes, when you set up an AWS Transfer Family user, you can specify one or more file systems in the IAM policy you supply as part of your user set up in order to grant access to multiple file systems. /aws-transfer-family/faqs/;What operating systems can I use to access my EFS file systems via AWS Transfer Family?;You can use clients and applications built for Microsoft Windows, Linux, macOS, or any operating system that supports SFTP/FTPS/FTP to upload and access files stored in your EFS file systems. Simply configure the server and user with the appropriate permissions to the EFS file system to access it from any of these operating systems. /aws-transfer-family/faqs/;How do I automate and monitor file-processing steps after my file is uploaded to EFS?;You can create AWS Transfer Family managed workflows to automatically trigger file-processing after the file is uploaded to EFS. You can set up workflows that contain tagging, copying, or any custom processing step that you would like to perform on the file based on your business requirement. Visit the AWS Transfer Family managed workflows documentation to learn more. /aws-transfer-family/faqs/;How do I know which user uploaded a file?;For new files, the POSIX user id associated with the user uploading the file will be set as the owner of the file in your EFS file system. Additionally, you can use Amazon CloudWatch to track your users’ activity for file creation, update, delete, and read operations. Visit the documentation to learn more on how to enable Amazon CloudWatch logging. /aws-transfer-family/faqs/;Can I view how much data was uploaded and downloaded over the enabled protocols?;Yes, metrics for data uploaded and downloaded using your server are published to Amazon CloudWatch within the AWS Transfer Family namespace. Visit the documentation to view the available metrics for tracking and monitoring. /aws-transfer-family/faqs/;Can I use AWS Transfer Family to access a file system in another account?;Yes.
You can use the CLI and API to set up cross account access between your AWS Transfer Family resources and EFS file systems. The AWS Transfer Family console will only list file systems in the same account. Additionally, you’d need to make sure the IAM role assigned to the user to access the file system belongs to the same account as your AWS Transfer Family server. /aws-transfer-family/faqs/;What happens if my EFS file system does not have the right policies enabled for cross account access?;If you set up an AWS Transfer Family server to access a cross account EFS file system not enabled for cross account access, your SFTP/FTP/FTPS users will be denied access to the file system. If you have CloudWatch logging enabled on your server, cross account access errors will be logged to your CloudWatch Logs. /aws-transfer-family/faqs/;Can I use AWS Transfer Family to access an EFS file system in a different AWS Region?;No, you can use AWS Transfer Family to access EFS file systems in the same AWS Region only. /aws-transfer-family/faqs/;Can I use AWS Transfer Family with all EFS storage classes?;Yes. You can use AWS Transfer to write files into EFS and configure EFS Lifecycle Management to migrate files that have not been accessed for a set period of time to the Infrequent Access (IA) storage class. /aws-transfer-family/faqs/;Can my applications use SFTP/FTPS/FTP to concurrently read and write data from/to the same file?;Yes, Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of NFS/SFTP/FTPS/FTP clients. /aws-transfer-family/faqs/;Will my EFS burst credits be consumed when I access my file systems using AWS Transfer Family?;Yes. Accessing your EFS file systems using your AWS Transfer Family servers will consume your EFS burst credits regardless of the throughput mode. Refer to the documentation on available performance and throughput modes and view some useful performance tips. /aws-transfer-family/faqs/;Which protocols should I use for securing data while in-transit over a public network?;Either SFTP or FTPS should be used for secure transfers over public networks. Due to the underlying security of the protocols based on SSH and TLS cryptographic algorithms, data and commands are transferred through a secure, encrypted channel. /aws-transfer-family/faqs/;What are my options to encrypt data at rest?;You can choose to encrypt files stored in your bucket using Amazon S3 Server-Side Encryption (SSE-S3) or AWS KMS (SSE-KMS). For files stored in EFS, you can choose an AWS managed or customer managed CMK for encryption of files at rest. Refer to the documentation for more details on options for at-rest encryption of file data and metadata using Amazon EFS. /aws-transfer-family/faqs/;Which compliance programs does AWS Transfer Family support?;AWS Transfer Family is compliant with PCI-DSS, GDPR, FedRAMP, and SOC 1, 2, and 3. The service is also HIPAA eligible. Learn more about services in scope by compliance programs. /aws-transfer-family/faqs/;Is AWS Transfer Family FISMA compliant?;AWS East/West and GovCloud (US) Regions are FISMA compliant. When AWS Transfer Family is authorized for FedRAMP, it will be FISMA compliant within the respective regions. This compliance is demonstrated through FedRAMP Authorization of these two regions to FedRAMP Moderate and FedRAMP High. We demonstrate compliance through annual assessments and documenting compliance with in-scope NIST SP 800-53 controls within our System Security Plans.
Templates are available on Artifact along with our customer responsibility matrix (CRM), which demonstrates at a detailed level our responsibility to meet these NIST controls as required by FedRAMP. Artifact is available through the management console accessible by an AWS account for both East/West and GovCloud. If you have any further questions on this topic, please consult the Console. /aws-transfer-family/faqs/;How does the service ensure integrity of uploaded files?;Files uploaded through the service are verified by comparing the file’s pre- and post-upload MD5 checksum. /aws-transfer-family/faqs/;How can I monitor my end users’ activity?;You can monitor your end users’ activity using Amazon CloudWatch and CloudTrail logs. You can also access CloudWatch graphs for metrics such as number of files and bytes transferred in the AWS Transfer Family Management Console, giving you a single pane of glass to monitor file transfers using a centralized dashboard. Use AWS CloudTrail logs to access a record of all API operations invoked by your server to service your end users’ data requests. Visit the documentation to learn more. /aws-transfer-family/faqs/;What are my options to encrypt/decrypt files for transfer?;You can use AWS Transfer Family managed workflows to automatically decrypt files uploaded to your AWS Transfer Family resource using PGP keys. For more information, refer to managed workflows documentation. If you are looking for PGP encryption support, reach out to us via AWS Support or through your AWS account team. /aws-transfer-family/faqs/;How am I billed for use of the service?;You are billed on an hourly basis for each of the protocols enabled, from the time you create and configure your server endpoint, until the time you delete it. You are also billed based on the amount of data uploaded and downloaded over SFTP, FTPS, or FTP, the number of messages exchanged over AS2, and the amount of data processed using the Decrypt workflow step. Refer to the pricing page for more details. /aws-transfer-family/faqs/;Will my billing be different if I use the same server endpoint for multiple protocols or use different endpoints for each protocol?;No, you are billed on an hourly basis for each of the protocols you have enabled and for the amount of data transferred through each of the protocols, regardless of whether the same endpoint is enabled for multiple protocols or you are using different endpoints for each of the protocols. /aws-transfer-family/faqs/;I have stopped my server. Will I be billed while it is stopped?;Yes, stopping the server, by using the console, or by running the “stop-server” CLI command or the “StopServer” API command, does not impact billing. You are billed on an hourly basis from the time you create your server endpoint and configure access to it over one or more protocols until the time you delete it. /aws-transfer-family/faqs/;How am I billed for using managed workflows?;You are billed for the Decrypt workflow step based on the amount of data you decrypt using PGP keys. There is no other additional charge for using managed workflows. Depending on your workflow configuration, you are also billed for use of Amazon S3, Amazon EFS, AWS Secrets Manager, and AWS Lambda. /rds/aurora/faqs/;What is Amazon Aurora?;Amazon Aurora is a modern relational database service offering performance and high availability at scale, editions fully compatible with open source MySQL and PostgreSQL, and a range of developer tools for building serverless and machine learning (ML)-driven applications.
Aurora features a distributed, fault-tolerant, and self-healing storage system that is decoupled from compute resources and auto-scales up to 128 TiB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon Simple Storage Service (Amazon S3), and replication across three Availability Zones (AZs). Amazon Aurora is also a fully managed service that automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups while providing the security, availability, and reliability of commercial databases at 1/10th the cost. /rds/aurora/faqs/;Is Amazon Aurora MySQL compatible?;Amazon Aurora is drop-in compatible with existing MySQL open-source databases and adds support for new releases regularly. This means you can easily migrate MySQL databases to and from Aurora using standard import/export tools or snapshots. It also means that most of the code, applications, drivers, and tools you already use with MySQL databases today can be used with Aurora with little or no change. This makes it easy to move applications between the two engines. You can see the current Amazon Aurora MySQL release compatibility information in the documentation. /rds/aurora/faqs/;How is Aurora PostgreSQL supported for issues related to PostgreSQL extensions?;Amazon fully supports Aurora PostgreSQL and all extensions available with Aurora. If you need support for Aurora PostgreSQL, please reach out to AWS Support. If you have an active AWS Premium Support account, you can contact AWS Premium Support for Amazon Aurora specific issues. /rds/aurora/faqs/;How much does Amazon Aurora cost?;For provisioned Aurora, you can choose On-Demand Instances and pay for your database by the hour with no long-term commitments or upfront fees, or choose Reserved Instances for additional savings. Alternatively, Aurora Serverless automatically starts up, shuts down, and scales capacity up or down based on your application's needs and you pay only for capacity consumed. Please see our Aurora pricing page for current pricing information. /rds/aurora/faqs/;Amazon Aurora replicates each chunk of my database volume six ways across three Availability Zones. Does that mean that my effective storage price will be three or six times what is shown on the pricing page?;No. Amazon Aurora’s replication is bundled into the price. You are charged based on the storage your database consumes at the database layer, not the storage consumed in Amazon Aurora’s virtualized storage layer. /rds/aurora/faqs/;In which AWS regions is Amazon Aurora available?;You can see region availability for Amazon Aurora here. /rds/aurora/faqs/;How can I migrate from MySQL to Amazon Aurora and vice versa?;If you want to migrate from MySQL to Amazon Aurora (and vice versa), you have several options: You can use the standard mysqldump utility to export data from MySQL and mysqlimport utility to import data to Amazon Aurora, and vice-versa. You can also use Amazon RDS’s DB Snapshot migration feature to migrate an Amazon RDS for MySQL DB Snapshot to Amazon Aurora using the AWS Management Console. Migration to Aurora completes for most customers in under an hour, though the duration depends on format and data set size. For more information see Best Practices for Migrating MySQL Databases to Amazon Aurora. 
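Because Aurora MySQL-Compatible Edition is described above as drop-in compatible with MySQL, a standard MySQL client library can connect to it unchanged; the sketch below uses PyMySQL with a placeholder cluster endpoint and placeholder credentials (not values from the FAQ).

# Minimal sketch: connecting to Aurora MySQL with an unmodified MySQL client library.
import pymysql

conn = pymysql.connect(
    host="mydb.cluster-abcdefghij.us-east-1.rds.amazonaws.com",  # placeholder cluster endpoint
    port=3306,
    user="admin",
    password="example-password",   # placeholder; prefer Secrets Manager or IAM auth in practice
    database="appdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")  # same SQL and driver calls as against MySQL
    print(cur.fetchone())
conn.close()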
/rds/aurora/faqs/;How can I migrate from PostgreSQL to Amazon Aurora and vice versa?;If you want to migrate from PostgreSQL to Amazon Aurora (and vice versa), you have several options: You can use the standard pg_dump utility to export data from PostgreSQL and pg_restore utility to import data to Amazon Aurora, and vice-versa. You can also use Amazon RDS’s DB Snapshot migration feature to migrate an Amazon RDS for PostgreSQL DB Snapshot to Amazon Aurora using the AWS Management Console. Migration to Aurora completes for most customers in under an hour, though the duration depends on format and data set size. To migrate SQL Server databases to Aurora PostgreSQL-Compatible Edition, you can use Babelfish for Aurora PostgreSQL. Your applications will work without any changes. See the Babelfish documentation for more information. /rds/aurora/faqs/;Does Amazon Aurora participate in the AWS Free Tier?;"Not at this time. The AWS Free Tier for Amazon RDS offers benefits for Micro DB Instances; Amazon Aurora does not currently offer Micro DB Instance support. Please see the Aurora pricing page for current pricing information. To try Amazon Aurora, sign in to the AWS Management Console, select RDS under the Database category, and choose Amazon Aurora as your database engine." /rds/aurora/faqs/;What are I/Os in Amazon Aurora and how are they calculated?;I/Os are input/output operations performed by the Aurora database engine against its solid state drive (SSD)-based virtualized storage layer. Every database page read operation counts as one I/O. The Aurora database engine issues reads against the storage layer in order to fetch database pages not present in memory in the cache: If your query traffic can be totally served from memory or the cache, you will not be charged for retrieving any data pages from memory. If your query traffic cannot be served entirely from memory, you will be charged for any data pages that need to be retrieved from storage. Each database page is 16 KB in Aurora MySQL-Compatible Edition and 8 KB in Aurora PostgreSQL-Compatible Edition. Aurora was designed to eliminate unnecessary I/O operations in order to reduce costs and to ensure resources are available for serving read/write traffic. Write I/Os are only consumed when persisting redo log records in Aurora MySQL-Compatible Edition or write ahead log records in Aurora PostgreSQL-Compatible Edition to the storage layer for the purpose of making writes durable. Write I/Os are counted in 4 KB units. For example, a log record that is 1024 bytes will count as one write I/O operation. However, if the log record is larger than 4 KB, more than one write I/O operation will be needed to persist it. Concurrent write operations whose log records are less than 4 KB may be batched together by the Aurora database engine in order to optimize I/O consumption, if they are persisted on the same storage protection groups. Unlike traditional database engines, Aurora never flushes dirty data pages to storage. You can see how many I/O requests your Aurora instance is consuming by checking the AWS Management Console. To find your I/O consumption, go to the Amazon RDS section of the console, look at your list of instances, select your Aurora instances, then look for the “Billed read operations” and “Billed write operations” metrics in the monitoring section. /rds/aurora/faqs/;Do I need to change client drivers to use Amazon Aurora PostgreSQL-Compatible Edition?;No, Amazon Aurora will work with standard PostgreSQL database drivers. 
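As a small worked example of the write I/O accounting described in the I/O question above (each write I/O covers up to 4 KB of log record data), the sketch below computes the billed write I/Os for a single log record; it leaves aside the batching of small concurrent records mentioned in the answer.

# Minimal sketch of the 4 KB write I/O counting rule.
import math

def billed_write_ios(log_record_bytes: int) -> int:
    # one write I/O per 4 KB (4096 bytes) of log record data, rounded up
    return math.ceil(log_record_bytes / 4096)

print(billed_write_ios(1024))   # 1  -> a 1,024-byte record counts as one write I/O
print(billed_write_ios(4096))   # 1
print(billed_write_ios(9000))   # 3  -> a record larger than 4 KB needs multiple write I/Os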
/rds/aurora/faqs/;"What does ""five times the performance of MySQL"" mean?";Amazon Aurora delivers significant increases over MySQL performance by tightly integrating the database engine with an SSD-based virtualized storage layer purpose-built for database workloads, reducing writes to the storage system, minimizing lock contention, and eliminating delays created by database process threads. Our tests with SysBench on r3.8xlarge instances show that Amazon Aurora delivers over 500,000 SELECTs/sec and 100,000 UPDATEs/sec, five times higher than MySQL running the same benchmark on the same hardware. Detailed instructions on this benchmark and how to replicate it yourself are provided in the Amazon Aurora MySQL-Compatible Edition Performance Benchmarking Guide. /rds/aurora/faqs/;"What does ""three times the performance of PostgreSQL"" mean?";Amazon Aurora delivers significant increases over PostgreSQL performance by tightly integrating the database engine with an SSD-based virtualized storage layer purpose-built for database workloads, reducing writes to the storage system, minimizing lock contention, and eliminating delays created by database process threads. Our tests with SysBench on r4.16xlarge instances show that Amazon Aurora delivers SELECTs/sec and UPDATEs/sec over three times higher than PostgreSQL running the same benchmark on the same hardware. Detailed instructions on this benchmark and how to replicate it yourself are provided in the Amazon Aurora PostgreSQL-Compatible Edition Performance Benchmarking Guide. /rds/aurora/faqs/;How do I optimize my database workload for Amazon Aurora MySQL-Compatible Edition?;Amazon Aurora is designed to be compatible with MySQL so that existing MySQL applications and tools can run without requiring modification. However, one area where Amazon Aurora improves upon MySQL is with highly concurrent workloads. In order to maximize your workload’s throughput on Amazon Aurora, we recommend building your applications to drive a large number of concurrent queries and transactions. /rds/aurora/faqs/;How do I optimize my database workload for Amazon Aurora PostgreSQL-Compatible Edition?;Amazon Aurora is designed to be compatible with PostgreSQL so that existing PostgreSQL applications and tools can run without requiring modification. However, one area where Amazon Aurora improves upon PostgreSQL is with highly concurrent workloads. In order to maximize your workload’s throughput on Amazon Aurora, we recommend building your applications to drive a large number of concurrent queries and transactions. /rds/aurora/faqs/;What are the minimum and maximum storage limits of an Amazon Aurora database?;The minimum storage is 10 GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 128 TiB, in 10 GB increments with no impact to database performance. There is no need to provision storage in advance. /rds/aurora/faqs/;How do I scale the compute resources associated with my Amazon Aurora DB Instance?;"There are two ways to scale the compute resources associated with my Amazon Aurora DB Instance – via Aurora Serverless and via manual adjustment. You can use Aurora Serverless, an on-demand, autoscaling configuration for Amazon Aurora to scale database compute resources based on application demand. It enables you to run your database in the cloud without worrying about database capacity management. You can specify the desired database capacity range and your database will scale based on your application’s needs. 
Read more in the Aurora Serverless User Guide. You can also manually scale your compute resources associated with your database by selecting the desired DB instance type in the AWS Management Console. Your requested change will be applied during your specified maintenance window or you can use the ""Apply Immediately"" flag to change the DB instance type immediately. Both of these options will have an availability impact for a few minutes as the scaling operation is performed. Note that any other pending system changes will also be applied." /rds/aurora/faqs/;How do I enable backups for my DB Instance?;Automated continuous backups are always enabled on Amazon Aurora DB Instances. Backups do not impact database performance. /rds/aurora/faqs/;Can I take DB Snapshots and keep them around as long as I want?;Yes, and there is no performance impact when taking snapshots. Note that restoring data from DB Snapshots requires the creation of a new DB Instance. /rds/aurora/faqs/;If my database fails, what is my recovery path?;Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs) and will automatically attempt to recover your database in a healthy AZ with no data loss. In the unlikely event your data is unavailable within Amazon Aurora storage, you can restore from a DB Snapshot or perform a point-in-time restore operation to a new instance. Note that the latest restorable time for a point-in-time restore operation can be up to five minutes in the past. /rds/aurora/faqs/;What happens to my automated backups and DB Snapshots if I delete my DB Instance?;You can choose to create a final DB Snapshot when deleting your DB Instance. If you do, you can use this DB Snapshot to restore the deleted DB Instance at a later date. Amazon Aurora retains this final user-created DB Snapshot along with all other manually created DB Snapshots after the DB Instance is deleted. Only DB Snapshots are retained after the DB Instance is deleted (i.e., automated backups created for point-in-time restore are not kept). /rds/aurora/faqs/;Can I share my snapshots with another AWS account?;Yes. Aurora gives you the ability to create snapshots of your databases, which you can use later to restore a database. You can share a snapshot with a different AWS account, and the owner of the recipient account can use your snapshot to restore a DB that contains your data. You can even choose to make your snapshots public – that is, anybody can restore a DB containing your (public) data. You can use this feature to share data between your various environments (production, dev/test, staging, etc.) that have different AWS accounts, as well as keep backups of all your data secure in a separate account in case your main AWS account is ever compromised. /rds/aurora/faqs/;Will I be billed for shared snapshots?;There is no charge for sharing snapshots between accounts. However, you may be charged for the snapshots themselves, as well as any databases you restore from shared snapshots. Learn more about Aurora pricing. /rds/aurora/faqs/;Can I automatically share snapshots?;We do not support automatic sharing of DB snapshots. To share a snapshot, you must manually create a copy of the snapshot, and then share the copy. /rds/aurora/faqs/;How many accounts can I share snapshots with?;You may share manual snapshots with up to 20 AWS account IDs. If you want to share the snapshot with more than 20 accounts, you can either share the snapshot as public, or contact support for increasing your quota. 
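As a hedged illustration of the snapshot sharing described above, the boto3 sketch below adds another AWS account ID to a manual Aurora cluster snapshot's restore attribute; the snapshot identifier and account ID are placeholders.

# Minimal sketch: share a manual Aurora DB cluster snapshot with another account.
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-aurora-snapshot",  # placeholder snapshot name
    AttributeName="restore",
    ValuesToAdd=["210987654321"],                      # placeholder target AWS account ID
    # ValuesToAdd=["all"] would instead make the snapshot public
)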
/rds/aurora/faqs/;In which regions can I share my Aurora snapshots?;You can share your Aurora snapshots within each AWS region where Aurora is available. /rds/aurora/faqs/;Can I share my Aurora snapshots across different regions?;No. Your shared Aurora snapshots will only be accessible by accounts in the same region as the account that shares them. /rds/aurora/faqs/;Can I share an encrypted Aurora snapshot?;Yes, you can share encrypted Aurora snapshots. /rds/aurora/faqs/;How does Amazon Aurora improve my database’s fault tolerance to disk failures?;Amazon Aurora automatically divides your database volume into 10 GB segments spread across many disks. Each 10 GB chunk of your database volume is replicated six ways, across three AZs. Amazon Aurora is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically. /rds/aurora/faqs/;How does Aurora improve recovery time after a database crash?;Unlike other databases, after a database crash Amazon Aurora does not need to replay the redo log from the last database checkpoint (typically five minutes) and confirm that all changes have been applied before making the database available for operations. This reduces database restart times to less than 60 seconds in most cases. Amazon Aurora moves the buffer cache out of the database process and makes it available immediately at restart time. This prevents you from having to throttle access until the cache is repopulated to avoid brownouts. /rds/aurora/faqs/;What kind of replicas does Aurora support?;Amazon Aurora MySQL-Compatible Edition and Amazon Aurora PostgreSQL-Compatible Edition support Amazon Aurora replicas, which share the same underlying volume as the primary instance in the same AWS region. Updates made by the primary are visible to all Amazon Aurora Replicas. With Amazon Aurora MySQL-Compatible Edition, you can also create cross-region MySQL Read Replicas based on MySQL’s binlog-based replication engine. In MySQL Read Replicas, data from your primary instance is replayed on your replica as transactions. For most use cases, including read scaling and high availability, we recommend using Amazon Aurora Replicas. You have the flexibility to mix and match these two replica types based on your application needs: /rds/aurora/faqs/;Can I have cross-region replicas with Amazon Aurora?;Yes, you can set up cross-region Aurora replicas using either physical or logical replication. Physical replication, called Amazon Aurora Global Database, uses dedicated infrastructure that leaves your databases entirely available to serve your application, and can replicate up to five secondary regions with typical latency of under a second. It's available for both Aurora MySQL-Compatible Edition and Aurora PostgreSQL-Compatible Edition. For low-latency global reads and disaster recovery, we recommend using Amazon Aurora Global Database. Aurora supports native logical replication in each database engine (binlog for MySQL and PostgreSQL replication slots for PostgreSQL), so you can replicate to Aurora and non-Aurora databases, even across Regions. Aurora MySQL-Compatible Edition also offers an easy-to-use logical cross-region read replica feature that supports up to five secondary AWS regions. 
It is based on single threaded MySQL binlog replication, so the replication lag will be influenced by the change/apply rate and delays in network communication between the specific regions selected. /rds/aurora/faqs/;Can I create Aurora Replicas on the cross-region replica cluster?;Yes, you can add up to 15 Aurora Replicas on each cross-region cluster, and they will share the same underlying storage as the cross-region replica. A cross-region replica acts as the primary on the cluster and the Aurora Replicas on the cluster will typically lag behind the primary by tens of milliseconds. /rds/aurora/faqs/;Can I fail over my application from my current primary to the cross-region replica?;Yes, you can promote your cross-region replica to be the new primary from the Amazon RDS console. For logical (binlog) replication, the promotion process typically takes a few minutes depending on your workload. The cross-region replication will stop once you initiate the promotion process. With Amazon Aurora Global Database, you can promote a secondary region to take full read/write workloads in under a minute. /rds/aurora/faqs/;Can I prioritize certain replicas as failover targets over others?;Yes. You can assign a promotion priority tier to each instance on your cluster. When the primary instance fails, Amazon RDS will promote the replica with the highest priority to primary. If two or more Aurora Replicas share the same priority, then Amazon RDS promotes the replica that is largest in size. If two or more Aurora Replicas share the same priority and size, then Amazon RDS promotes an arbitrary replica in the same promotion tier. For more information on failover logic, read the Amazon Aurora User Guide. /rds/aurora/faqs/;Can I modify priority tiers for instances after they have been created?;Yes, you can modify the priority tier for an instance at any time. Simply modifying priority tiers will not trigger a failover. /rds/aurora/faqs/;Can I prevent certain replicas from being promoted to the primary instance?;You can assign lower priority tiers to replicas that you don’t want promoted to the primary instance. However, if the higher priority replicas on the cluster are unhealthy or unavailable for some reason, then Amazon RDS will promote the lower priority replica. /rds/aurora/faqs/;How can I improve upon the availability of a single Amazon Aurora database?;You can add Amazon Aurora Replicas. Aurora Replicas in the same AWS Region share the same underlying storage as the primary instance. Any Aurora Replica can be promoted to primary without any data loss, and therefore can be used to enhance fault tolerance in the event of a primary DB Instance failure. To increase database availability, simply create one to 15 replicas, in any of three AZs, and Amazon RDS will automatically include them in failover primary selection in the event of a database outage. You can use Amazon Aurora Global Database if you want your database to span multiple AWS Regions. This will replicate your data with no impact on database performance and provide disaster recovery from region-wide outages. /rds/aurora/faqs/;What happens during failover and how long does it take?;Failover is handled automatically by Amazon Aurora so your applications can resume database operations as quickly as possible without manual administrative intervention. 
If you have an Aurora Replica in the same or a different AZ when failing over, Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which is promoted to become the new primary. Start-to-finish, failover typically completes within 30 seconds. For improved resiliency and faster failovers, consider using Amazon RDS Proxy which automatically connects to the failover DB instance while preserving application connections. Proxy makes failovers transparent to your applications and reduces failover times by up to 66%. If you are running Aurora Serverless v1 and the DB instance or AZ become unavailable, Aurora will automatically recreate the DB instance in a different AZ. Aurora Serverless v2 works like provisioned for failover and other high availability features. For more information, see Aurora Serverless v2 and high availability.. If you do not have an Aurora Replica (i.e., single instance) and are not running Aurora Serverless, Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance. This replacement of the original instance is done on a best-effort basis and may not succeed, for example, if there is an issue that is broadly affecting the Availability Zone. Your application should retry database connections in the event of connection loss. Disaster recovery across regions is a manual process, where you promote a secondary region to take read/write workloads. /rds/aurora/faqs/;If I have a primary database and an Amazon Aurora Replica actively taking read traffic and a failover occurs, what happens?;Amazon Aurora will automatically detect a problem with your primary instance and trigger a failover. If you are using the Cluster Endpoint, your read/write connections will be automatically redirected to an Amazon Aurora Replica that will be promoted to primary. In addition, the read traffic that your Aurora Replicas were serving will be briefly interrupted. If you are using the Cluster Reader Endpoint to direct your read traffic to the Aurora Replica, the read only connections will be directed to the newly promoted Aurora Replica until the old primary node is recovered as a replica. /rds/aurora/faqs/;How far behind the primary will my replicas be?;Since Amazon Aurora Replicas share the same data volume as the primary instance in the same AWS Region, there is virtually no replication lag. We typically observe lag times in the tens of milliseconds. For cross-region replication, binlog-based logical replication lag can grow indefinitely based on change/apply rate as well as delays in network communication. However, under typical conditions, under a minute of replication lag is common. Cross-region replicas using Amazon Aurora Global Database’s physical replication will have a typical lag of under a second. /rds/aurora/faqs/;Can I set up replication between my Aurora MySQL-Compatible Edition database and an external MySQL database?;Yes, you can set up binlog replication between an Aurora MySQL-Compatible Edition instance and an external MySQL database. The other database can run on Amazon RDS, or as a self-managed database on AWS, or completely outside of AWS. If you're running Aurora MySQL-Compatible Edition 5.7, consider setting up GTID-based binlog replication. This will provide complete consistency so your replication won’t miss transactions or generate conflicts, even after failover or downtime. 
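To make the replica and failover-priority discussion above concrete, here is a minimal boto3 sketch under assumed names ('my-aurora-cluster', 'my-aurora-reader-1'); the engine and instance class should match your cluster.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Add a reader (Aurora Replica) to an existing Aurora MySQL cluster.
    # PromotionTier 0 is the highest failover priority, 15 the lowest.
    rds.create_db_instance(
        DBInstanceIdentifier="my-aurora-reader-1",    # placeholder name
        DBClusterIdentifier="my-aurora-cluster",      # placeholder cluster
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",
        PromotionTier=1,
    )

    # The promotion tier can be changed later; modifying it alone does not trigger a failover.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-aurora-reader-1",
        PromotionTier=0,
        ApplyImmediately=True,
    )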
/rds/aurora/faqs/;What is Amazon Aurora Global Database?;Amazon Aurora Global Database is a feature that allows a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads in each Region with typical latency of less than a second, and provides disaster recovery from region-wide outages. In the unlikely event of a regional degradation or outage, a secondary region can be promoted to full read/write capabilities in less than one minute. This feature is available for both Aurora MySQL-Compatible Edition and Aurora PostgreSQL-Compatible Edition. /rds/aurora/faqs/;How do I create an Amazon Aurora Global Database?;You can create an Aurora Global Database with just a few clicks in the Amazon RDS console. Alternatively, you can use the AWS Software Development Kit (SDK) or AWS Command-Line Interface (CLI). You need to provision at least one instance per region in your Amazon Aurora Global Database. /rds/aurora/faqs/;How many secondary regions can an Amazon Aurora Global Database have?;You can create up to five secondary regions for an Amazon Aurora Global Database. /rds/aurora/faqs/;If I use Amazon Aurora Global Database, can I also use logical replication (binlog) on the primary database?;Yes. If your goal is to analyze database activity, consider using Aurora advanced auditing, general logs, and slow query logs instead, to avoid impacting the performance of your database. /rds/aurora/faqs/;Will Aurora automatically fail over to a secondary region of an Amazon Aurora Global Database?;No. If your primary region becomes unavailable, you can manually remove a secondary region from an Amazon Aurora Global Database and promote it to take full reads and writes. You will also need to point your application to the newly promoted region. /rds/aurora/faqs/;Can I use Amazon Aurora in Amazon Virtual Private Cloud (Amazon VPC)?;Yes, all Amazon Aurora DB Instances must be created in a VPC. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network you might operate in your own datacenter. This gives you complete control over who can access your Amazon Aurora databases. /rds/aurora/faqs/;Does Amazon Aurora encrypt my data in transit and at rest?;Yes. Amazon Aurora uses SSL (AES-256) to secure the connection between the database instance and the application. Amazon Aurora allows you to encrypt your databases using keys you manage through AWS Key Management Service (AWS KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, snapshots, and replicas in the same cluster. Encryption and decryption are handled seamlessly. For more information about the use of AWS KMS with Amazon Aurora, see the Amazon RDS User's Guide. /rds/aurora/faqs/;Can I encrypt an existing unencrypted database?;Currently, encrypting an existing unencrypted Aurora instance is not supported. To use Amazon Aurora encryption for an existing unencrypted database, create a new DB Instance with encryption enabled and migrate your data into it. /rds/aurora/faqs/;How do I access my Amazon Aurora database?;Aurora databases must be accessed through the database port entered on database creation. This provides an additional layer of security for your data. Step-by-step instructions on how to connect to your Amazon Aurora database are provided in the Amazon Aurora Connectivity Guide. 
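As a hedged illustration of the encryption guidance above (creating a new, encrypted cluster to replace an unencrypted one), the boto3 sketch below uses placeholder identifiers and an assumed KMS key alias; migrating the data into the new cluster is a separate step.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create a new Aurora cluster with storage encryption enabled,
    # using a key managed in AWS KMS (the alias is a placeholder).
    rds.create_db_cluster(
        DBClusterIdentifier="my-encrypted-cluster",      # placeholder
        Engine="aurora-postgresql",
        MasterUsername="postgres",
        MasterUserPassword="REPLACE_WITH_SECRET",        # placeholder; prefer a secrets store
        StorageEncrypted=True,
        KmsKeyId="alias/my-aurora-key",                  # placeholder KMS key
    )

    # The cluster also needs at least one DB instance (the writer).
    rds.create_db_instance(
        DBInstanceIdentifier="my-encrypted-writer",      # placeholder
        DBClusterIdentifier="my-encrypted-cluster",
        Engine="aurora-postgresql",
        DBInstanceClass="db.r6g.large",
    )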
/rds/aurora/faqs/;Can I use Amazon Aurora with applications that require HIPAA compliance?;Yes, the MySQL- and PostgreSQL-compatible editions of Aurora are HIPAA-eligible. You can use them to build HIPAA-compliant applications and store healthcare-related information, including protected health information (PHI) under an executed Business Associate Addendum (BAA) with AWS. If you have already entered into a BAA with AWS, no further action is necessary to begin using these services in the account(s) covered by your BAA. For more information about using AWS to build compliant applications, see Healthcare Providers. /rds/aurora/faqs/;Where can I access a list of Common Vulnerabilities and Exposures (CVE) entries for publicly known cybersecurity vulnerabilities for Amazon Aurora releases?;You can currently find a list of CVEs at Amazon Aurora Security Updates. /rds/aurora/faqs/;What is Amazon Aurora Serverless?;Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora. It enables you to run your database in the cloud without managing database capacity. Manually managing database capacity can take up valuable time and can lead to inefficient use of database resources. With Aurora Serverless, you simply create a database, specify the desired database capacity range, and connect your application. Aurora automatically adjusts the capacity within the range you specified based on your application’s needs. You pay on a per-second basis for the database capacity you use when the database is active. Learn more about Aurora Serverless and get started with a few clicks in the Amazon RDS Management Console. /rds/aurora/faqs/;What is the difference between Aurora Serverless v2 and v1?;Aurora Serverless v2 supports every type of database workload, from development and test environments, websites, and applications that have infrequent, intermittent, or unpredictable workloads to the most demanding, business-critical applications that require high scale and high availability. It scales in place by adding more CPU and memory without having to fail over the database to a larger or smaller database instance. As a result, it can scale even when there are long-running transactions, table locks, etc. In addition, it scales database capacity in increments as small as 0.5 Aurora Capacity Units (ACUs) so your database capacity closely matches your application’s needs. Aurora Serverless v1 is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. It automatically starts up, scales compute capacity to match your application's usage, and shuts down when it's not in use. Visit the Aurora User Guide to learn more. /rds/aurora/faqs/;Which Aurora features does Aurora Serverless v2 support?;Aurora Serverless v2 supports all features of provisioned Aurora, including read replicas, Multi-AZ configuration, Global Database, RDS Proxy, and Performance Insights. /rds/aurora/faqs/;Can I start using Aurora Serverless v2 with my existing Aurora DB cluster?;Yes, you can start using Aurora Serverless v2 to manage database compute capacity in your existing Aurora DB cluster. A cluster containing both provisioned instances as well as Aurora Serverless v2 is referred to as a mixed-configuration cluster. You can choose to have any combination of provisioned instances and Aurora Serverless v2 in your cluster. To test Aurora Serverless v2, you add a reader to your Aurora DB cluster and select Serverless v2 as the instance type.
Once the reader is created and available, you can start using it for read-only workloads. Once you confirm that the reader is working as expected, you can initiate a failover to start using Aurora Serverless v2 for both reads and writes. This option provides a minimal downtime experience to get started with Aurora Serverless v2. /rds/aurora/faqs/;Can I migrate from Aurora Serverless v1 to Aurora Serverless v2?;Yes, you can migrate from Aurora Serverless v1 to Aurora Serverless v2. Refer to the Aurora User Guide to learn more. /rds/aurora/faqs/;Which versions of Amazon Aurora are supported for Aurora Serverless?;Aurora Serverless v1 compatibility information can be seen here. Aurora Serverless v2 compatibility information can be seen here. /rds/aurora/faqs/;Can I migrate an existing Aurora DB cluster to Aurora Serverless?;Yes, you can restore a snapshot taken from an existing Aurora provisioned cluster into an Aurora Serverless DB Cluster (and vice versa). /rds/aurora/faqs/;How do I connect to an Aurora Serverless DB cluster?;You access an Aurora Serverless DB cluster from within a client application running in the same VPC. You can't give a public IP address to an Aurora Serverless DB. /rds/aurora/faqs/;Can I explicitly set the capacity of an Aurora Serverless cluster?;While Aurora Serverless automatically scales based on the active database workload, in some cases, capacity might not scale fast enough to meet a sudden workload change, such as a large number of new transactions. In these cases, you can set the capacity explicitly to a specific value with the AWS Management Console, the AWS CLI, or the Amazon RDS API. /rds/aurora/faqs/;Why isn't my Aurora Serverless DB Cluster automatically scaling?;Once a scaling operation is initiated, Aurora Serverless attempts to find a scaling point, which is a point in time at which the database can safely complete scaling. Aurora Serverless might not be able to find a scaling point if you have long-running queries or transactions in progress, or temporary tables or table locks in use. /rds/aurora/faqs/;How am I billed for Aurora Serverless?;In Aurora Serverless, database capacity is measured in Aurora Capacity Units (ACUs). You pay a flat rate per second of ACU usage. Storage and I/O prices are the same for provisioned and serverless configurations. Visit Aurora pricing page for up-to-date information about pricing and AWS Region availability. /rds/aurora/faqs/;What is Amazon Aurora Parallel Query?;"Amazon Aurora Parallel Query refers to the ability to push down and distribute the computational load of a single query across thousands of CPUs in Aurora’s storage layer. Without Parallel Query, a query issued against an Amazon Aurora database would be executed wholly within one instance of the database cluster; this would be similar to how most databases operate." /rds/aurora/faqs/;What's the target use case?;Parallel Query is a good fit for analytical workloads requiring fresh data and good query performance, even on large tables. Workloads of this type are often operational in nature. /rds/aurora/faqs/;What benefits does Parallel Query provide?;Parallel Query results in faster performance, speeding up analytical queries by up to two orders of magnitude. It also delivers operational simplicity and data freshness as you can issue a query directly over the current transactional data in your Aurora cluster. 
And, Parallel Query enables transactional and analytical workloads on the same database by allowing Aurora to maintain high transaction throughput alongside concurrent analytical queries. /rds/aurora/faqs/;What specific queries improve under Parallel Query?;Most queries over large data sets that are not already in the buffer pool can expect to benefit. The initial version of Parallel Query can push down and scale out of the processing of more than 200 SQL functions, equijoins, and projections. /rds/aurora/faqs/;What performance improvement can I expect?;The improvement to a specific query’s performance depends on how much of the query plan can be pushed down to the Aurora storage layer. Customers have reported more than an order of magnitude improvement to query latency. /rds/aurora/faqs/;Is there any chance that performance will be slower?;Yes, but we expect such cases to be rare. /rds/aurora/faqs/;What changes do I need to make to my query to take advantage of Parallel Query?;Changes in query syntax are not required. The query optimizer will automatically decide whether to use Parallel Query for your specific query. To check if a query is using Parallel Query, you can view the query execution plan by running the EXPLAIN command. If you wish to bypass the heuristics and force Parallel Query for test purposes, use the aurora_pq_force session variable. /rds/aurora/faqs/;How do I turn Parallel Query feature on or off?;Parallel Query can be enabled and disabled dynamically at both the global and session level using the aurora_pq parameter. /rds/aurora/faqs/;Are there any additional charges associated with using Parallel Query?;No. You aren’t charged for anything other than what you already pay for instances, I/O, and storage. /rds/aurora/faqs/;Since Parallel Query reduces I/O, will turning it on reduce my Aurora IO charges?;No, Parallel Query I/O costs for your query are metered at the storage layer, and will be the same or larger with Parallel Query turned on. Your benefit is the improvement in query performance. There are two reasons for potentially higher I/O costs with Parallel Query. First, even if some of the data in a table is in the buffer pool, Parallel Query requires all data to be scanned at the storage layer, incurring I/O. Second, a side effect of avoiding contention in the buffer pool is that running a Parallel Query does not warm up the buffer pool. As a result, consecutive runs of the same Parallel Query query will incur the full I/O cost. Learn more about Parallel Query in the Documentation. /rds/aurora/faqs/;Is Parallel Query available with all instance types?;No. At this time, you can use Parallel Query with instances in the R* instance family. /rds/aurora/faqs/;Is Parallel Query compatible with all other Aurora features?;Not initially. At this time, you can only turn it on for database clusters that aren't running the Serverless or Backtrack features. Further, it doesn’t support functionality specific to Aurora with MySQL 5.7 compatibility. /rds/aurora/faqs/;If Parallel Query speeds up queries with only rare performance losses, should I simply turn it on all the time?;No. While we expect Parallel Query to improve query latency in most cases, you may incur higher I/O costs. We recommend that you thoroughly test your workload with the feature enabled and disabled. Once you're convinced that Parallel Query is the right choice, you can rely on the query optimizer to automatically decide which queries will use Parallel Query. 
In the rare case when the optimizer doesn’t make the optimal decision, you can override the setting. /rds/aurora/faqs/;Can Aurora Parallel Query replace my data warehouse?;Aurora Parallel Query is not a data warehouse and doesn’t provide the functionality typically found in such products. It’s designed to speed up query performance on your relational database and is suitable for use cases such as operational analytics, when you need to perform fast analytical queries on fresh data in your database. For an exabyte scale cloud data warehouse, please consider Amazon Redshift. /rds/aurora/faqs/;What is Amazon DevOps Guru for RDS?;Amazon DevOps Guru for RDS is a new ML-powered capability for Amazon RDS (which includes Amazon Aurora) that is designed to automatically detect and diagnose database performance and operational issues, enabling you to resolve issues in minutes rather than days. Amazon DevOps Guru for RDS is a feature of Amazon DevOps Guru, which is designed to detect operational and performance issues for all Amazon RDS engines and dozens of other resource types. DevOps Guru for RDS expands the capabilities of DevOps Guru to detect, diagnose, and remediate a wide variety of database-related issues in Amazon RDS (e.g. resource over-utilization, and misbehavior of certain SQL queries). When an issue occurs, Amazon DevOps Guru for RDS is designed to immediately notify developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve database-related performance bottlenecks and operational issues. /rds/aurora/faqs/;Why should I use DevOps Guru for RDS?;Amazon DevOps Guru for RDS is designed to remove manual effort and shorten time (from hours and days to minutes) to detect and resolve hard to find performance bottlenecks in your relational database workload. You can enable DevOps Guru for RDS for every Amazon Aurora database, and it will automatically detect performance issues for your workloads, send alerts to you on each issue, explain findings, and recommend actions to resolve. DevOps Guru for RDS helps make database administration more accessible to non-experts and assists database experts so that they can manage even more databases. /rds/aurora/faqs/;How does Amazon DevOps Guru for RDS work?;Amazon DevOps Guru for RDS uses ML to analyze telemetry data collected by Amazon RDS Performance Insights (PI). DevOps Guru for RDS does not use any of your data stored in the database in its analysis. PI measures database load, a metric that characterizes how an application spends time in the database and selected metrics generated by the database, such as server status variables in MySQL and pg_stat tables in PostgreSQL. /rds/aurora/faqs/;How can I get started with Amazon DevOps Guru for RDS?;To get started with DevOps Guru for RDS, ensure Performance Insights is enabled through the RDS console, and then simply enable DevOps Guru for your Amazon Aurora databases. With DevOps Guru, you can choose your analysis coverage boundary to be your entire AWS account, prescribe the specific AWS CloudFormation stacks that you want DevOps Guru to analyze, or use AWS tags to create the resource grouping you want DevOps Guru to analyze. 
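As a hedged sketch of the DevOps Guru for RDS setup steps just described, the boto3 calls below enable Performance Insights on a database instance and then register a resource grouping with DevOps Guru; the instance identifier and CloudFormation stack name are placeholders, and your coverage choice (whole account, stacks, or tags) may differ.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    guru = boto3.client("devops-guru", region_name="us-east-1")

    # Step 1: make sure Performance Insights is enabled on the database instance.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-aurora-instance",     # placeholder
        EnablePerformanceInsights=True,
        ApplyImmediately=True,
    )

    # Step 2: tell DevOps Guru which resources to analyze, here by CloudFormation stack.
    guru.update_resource_collection(
        Action="ADD",
        ResourceCollection={"CloudFormation": {"StackNames": ["my-database-stack"]}},  # placeholder
    )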
/rds/aurora/faqs/;What types of issues can Amazon DevOps Guru for RDS detect?;Amazon DevOps Guru for RDS helps identify a wide range of performance issues that may affect application service quality, such as lock pile-ups, connection storms, SQL regressions, CPU and I/O contention, and memory issues. /rds/aurora/faqs/;How is DevOps Guru for RDS different from Amazon RDS Performance insights?;Amazon RDS Performance Insights is a database performance tuning and monitoring feature that collects and visualizes Amazon RDS database performance metrics, helping you quickly assess the load on your database, and determine when and where to take action. Amazon DevOps Guru for RDS is designed to monitor those metrics, detect when your database is experiencing performance issues, analyze the metrics, and then tell you what’s wrong and what you can do about it. /rds/aurora/faqs/;What is the cost of using Amazon RDS Blue/Green Deployments?;You will incur the same price for running your workloads on green instances as you do for blue instances. The cost of running on blue and green instances include our current standard pricing for db.instances, cost of storage, cost of read/write I/Os, and any enabled features, such as cost of backups and Amazon RDS Performance Insights. Effectively, you are paying approximately 2x the cost of running workloads on db.instance for the lifespan of the blue-green-deployment. For example: You have Aurora MySQL-Compatible Edition 5.7 cluster running on two r5.2xlarge db.instances, a primary writer instance and a reader instance, in us-east-1 AWS region. Each of the r5.2xlarge db.instances are configured for 40 GiB Storage and have 25 Million I/Os per month. You create a clone of the blue instance topology using Amazon RDS Blue/Green Deployments, run it for 15 days (360 hours) and each green instance has 3 million I/O reads during that time. You then delete the blue instances after a successful switchover. The blue instances (writer and reader) cost $849.2 for 15 days at an on-demand rate of $1.179/hr (Instance + Storage+ I/O). The green instances (writer and reader) cost $840.40 for 15 days at an on-demand rate of $1.167/hr (Instance +Storage+ I/O). The total cost to you for using Blue/Green Deployments for those 15 days is $1689.60, which is approximately 2x the cost of running blue instances for that time period. /rds/aurora/faqs/;How do switchovers work with Amazon RDS Blue/Green Deployments?;When Amazon RDS Blue/Green Deployments initiate a switchover, they block writes to both the blue and green environments, until switchover is complete. During switchover, the staging environment, or green environment, catches up with the production system, ensuring data is consistent between the staging and production environment. Once the production and staging environment are in complete sync, Blue/Green Deployments promote the staging environment as the new production environment by redirecting traffic to the newly promoted production environment. Amazon RDS Blue/Green Deployments are designed to enable writes on the green environment after switchover is complete, ensuring zero data loss during the switchover process. /rds/aurora/faqs/;After Amazon RDS Blue/Green Deployments switches over, what happens to my old production environment?;Amazon RDS Blue/Green Deployments do not delete your old production environment. If needed, you can access it for additional validations and performance/regression testing. If you no longer need the old production environment, you can delete it. 
Standard billing charges apply on old production instances until you delete them. /rds/aurora/faqs/;What do Amazon RDS Blue/Green Deployments switchover guardrails check for?;Amazon RDS Blue/Green Deployments switchover guardrails block writes on your blue and green environments until your green environment catches up before switching over. Blue/Green Deployments also perform health checks of your primary and replicas in your blue and green environments. They also perform replication health checks, for example, to see if replication has stopped or if there are errors. They detect long-running transactions between your blue and green environments. You can specify your maximum tolerable downtime, as low as 30 seconds, and if you have an ongoing transaction that exceeds this, your switchover will time out. /rds/aurora/faqs/;Do Amazon RDS Blue/Green Deployments support Amazon Aurora Global Databases?;No, Amazon RDS Blue/Green Deployments do not support Amazon Aurora Global Databases. /rds/aurora/faqs/;Can I use Amazon RDS Blue/Green Deployments to roll back changes?;No, at this time you cannot use Amazon RDS Blue/Green Deployments to roll back changes. /rds/aurora/faqs/;Why should I use Trusted Language Extensions for PostgreSQL?;Trusted Language Extensions (TLE) for PostgreSQL enables developers to build high-performance PostgreSQL extensions and run them safely on Amazon Aurora. In doing so, TLE improves your time to market and removes the burden placed on database administrators to certify custom and third-party code for use in production database workloads. You can move forward as soon as you decide an extension meets your needs. With TLE, independent software vendors (ISVs) can provide new PostgreSQL extensions to customers running on Aurora. /rds/aurora/faqs/;What are traditional risks of running extensions in PostgreSQL and how does TLE for PostgreSQL mitigate those risks?;PostgreSQL extensions are executed in the same process space for high performance. However, extensions might have software defects that can crash the database. TLE for PostgreSQL offers multiple layers of protection to mitigate this risk. TLE is designed to limit access to system resources. The rds_superuser role can determine who is permitted to install specific extensions. However, these changes can only be made through the TLE API. TLE is designed to limit the impact of an extension defect to a single database connection. In addition to these safeguards, TLE is designed to provide DBAs in the rds_superuser role fine-grained, online control over who can install extensions, and they can create a permissions model for running them. Only users with sufficient privileges will be able to create and run a TLE extension using the “CREATE EXTENSION” command. DBAs can also allow-list “PostgreSQL hooks” required for more sophisticated extensions that modify the database’s internal behavior and typically require elevated privilege. /rds/aurora/faqs/;How does TLE for PostgreSQL relate to/work with other AWS services?;TLE for PostgreSQL is available for Amazon Aurora PostgreSQL-Compatible Edition on versions 14.5 and higher. TLE is implemented as a PostgreSQL extension itself and you can activate it from the rds_superuser role similar to other extensions supported on Aurora. /rds/aurora/faqs/;How is TLE for PostgreSQL different from extensions available on Amazon Aurora and Amazon RDS today?;Aurora and Amazon RDS support a curated set of over 85 PostgreSQL extensions.
AWS manages the security risks for each of these extensions under the AWS shared responsibility model. The extension that implements TLE for PostgreSQL is included in this set. Extensions that you write or that you obtain from third-party sources and install in TLE are considered part of your application code. You are responsible for the security of your applications that use TLE extensions. /rds/aurora/faqs/;What are some examples of extensions I could run with TLE for PostgreSQL?;You can build developer functions, such as bitmap compression and differential privacy (such as publicly accessible statistical queries that protect privacy of individuals). /rds/aurora/faqs/;What programming languages can I use to develop TLE for PostgreSQL?;TLE for PostgreSQL currently supports JavaScript, PL/pgSQL, Perl, and SQL. /rds/aurora/faqs/;How do I deploy a TLE for PostgreSQL extension?;Once the rds_superuser role activates TLE for PostgreSQL, you can deploy TLE extensions using the SQL CREATE EXTENSION command from any PostgreSQL client, such as psql. This is similar to how you would create a user-defined function written in a procedural language, such as PL/pgSQL or PL/Perl. You can control which users have permission to deploy TLE extensions and use specific extensions. /rds/aurora/faqs/;How do TLE for PostgreSQL extensions communicate with the PostgreSQL database?;TLE for PostgreSQL access your PostgreSQL database exclusively through the TLE API. The TLE supported trusted languages include all functions of the PostgreSQL server programming interface (SPI) and support for PostgreSQL hooks, including the check password hook. /rds/aurora/faqs/;Where can I learn more about the TLE for PostgreSQL open-source project?;You can learn more about the TLE for PostgreSQL project on the official TLE GitHub page. /rds/faqs/;What is Amazon RDS?;Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity, while managing time-consuming database administration tasks, freeing you to focus on your applications and business. Amazon RDS gives you access to the capabilities of a familiar MySQL, MariaDB, Oracle, SQL Server, or PostgreSQL database. This means that the code, applications, and tools you already use today with your existing databases should work seamlessly with Amazon RDS. Amazon RDS can automatically back up your database and keep your database software up to date with the latest version. You benefit from the flexibility of being able to easily scale the compute resources or storage capacity associated with your relational database instance. In addition, Amazon RDS makes it easy to use replication to enhance database availability, improve data durability, or scale beyond the capacity constraints of a single database instance for read-heavy database workloads. As with all Amazon Web Services, there are no up-front investments required, and you pay only for the resources you use. /rds/faqs/;When would I use Amazon RDS vs. Amazon EC2 Relational Database AMIs?;Amazon Web Services provides a number of database alternatives for developers. Amazon RDS enables you to run a fully managed and fully featured relational database while offloading database administration. Using one of our many relational database AMIs on Amazon EC2 allows you to manage your own relational database in the cloud. 
There are important differences between these alternatives that may make one more appropriate for your use case. See Cloud Databases with AWS for guidance on which solution is best for you. /rds/faqs/;Are there hybrid or on-premises deployment options for Amazon RDS?;Yes, you can run Amazon RDS on premises using Amazon RDS on Outposts. Please see the Amazon RDS on Outposts FAQs for additional information. /rds/faqs/;Can I get help to learn more about and onboard to Amazon RDS?;Yes, Amazon RDS specialists are available to answer questions and provide support. Contact Us and you’ll hear back from us in one business day to discuss how AWS can help your organization. /rds/faqs/;How do I set up a connection between an application or a SQL based client running on an Amazon EC2 compute instance and my Amazon RDS database instance/cluster?;"You can set up a connection between an EC2 compute instance and a new Amazon RDS database using the Amazon RDS console. On the “Create database” page, select “Connect to an EC2 compute resource” option in the Connectivity Section. When you select this option, Amazon RDS automates the manual networking set up tasks such as creating a VPC, security groups, subnets, and ingress/egress rules to establish a connection between your application and database. Additionally, you can set up a connection between an existing Amazon RDS database and an EC2 compute instance. To do so, open the RDS console, select an RDS database from the database list page, and choose “Set up EC2 connection” from the ""Action"" menu dropdown list. Amazon RDS automatically sets up your related network settings to enable a secure connection between the selected EC2 instance and the RDS database. This connectivity automation improves productivity for new users and application developers. Users can now quickly and seamlessly connect an application or a client using SQL on an EC2 compute instance to an RDS database within minutes." /rds/faqs/;What is a database instance (DB instance)?;You can think of a DB instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB instances, define/refine infrastructure attributes of your DB instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and AWS Command Line Interface. You can run one or more DB instances and each DB instance can support one or more databases or database schemas, depending on engine type. /rds/faqs/;How do I create a DB instance?;"DB instances are simple to create using either the AWS Management Console, Amazon RDS APIs, or AWS Command Line Interface. To launch a DB instance using the AWS Management Console, click ""RDS"" and then the “Launch DB Instance” button on the Instances tab. From there, you can specify the parameters for your DB instance, including DB engine and version, license model, instance type, storage type and amount, and primary user credentials. You also have the ability to change your DB instance’s backup retention policy, preferred backup window, and scheduled maintenance window. Alternatively, you can create your DB instance using the CreateDBInstance API or create-db-instance command." /rds/faqs/;How do I access my running DB instance?;Once your DB instance is available, you can retrieve its endpoint via the DB instance description in the AWS Management Console, DescribeDBInstances API or describe-db-instances command. 
Using this endpoint, you can construct the connection string required to connect directly with your DB instance using your favorite database tool or programming language. In order to allow network requests to your running DB instance, you will need to authorize access. For a detailed explanation of how to construct your connection string and get started, please refer to our Getting Started Guide. /rds/faqs/;How many DB instances can I run with Amazon RDS?;"By default, customers are allowed to have up to a total of 40 Amazon RDS DB instances. Of those 40, up to 10 can be Oracle or SQL Server DB instances under the ""License Included"" model. All 40 can be used for Amazon Aurora, MySQL, MariaDB, PostgreSQL, and Oracle under the ""BYOL"" model. Note that RDS for SQL Server has a limit of up to 100 databases on a single DB instance. To learn more, see the Amazon RDS for SQL Server User Guide." /rds/faqs/;How many databases or schemas can I run within a DB instance?;"RDS for Amazon Aurora: No limit imposed by software. RDS for MySQL: No limit imposed by software. RDS for MariaDB: No limit imposed by software. RDS for Oracle: 1 database per instance; no limit on the number of schemas per database imposed by software. RDS for SQL Server: Up to 100 databases per instance. RDS for PostgreSQL: No limit imposed by software." /rds/faqs/;How do I import data into an Amazon RDS DB instance?;"There are a number of simple ways to import data into Amazon RDS, such as with the mysqldump or mysqlimport utilities for MySQL; Data Pump, import/export, or SQL Loader for Oracle; Import/Export wizard, full backup files (.bak files), or Bulk Copy Program (BCP) for SQL Server; or pg_dump for PostgreSQL. For more information on data import and export, please refer to the Data Import Guide for MySQL, the Data Import Guide for Oracle, the Data Import Guide for SQL Server, or the Data Import Guide for PostgreSQL. In addition, AWS Database Migration Service can help you migrate databases to AWS easily and securely." /rds/faqs/;What is a maintenance window? Will my DB instance be available during maintenance events?;The Amazon RDS maintenance window is your opportunity to control when DB instance modifications, database engine version upgrades, and software patching occur, in the event they are requested or required. If a maintenance event is scheduled for a given week, it will be initiated during the maintenance window you identify. Maintenance events that require Amazon RDS to take your DB instance offline are scale compute operations (which generally take only a few minutes from start-to-finish), database engine version upgrades, and required software patching. Required software patching is automatically scheduled only for patches that are security and durability related. Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window. If you do not specify a preferred weekly maintenance window when creating your DB instance, a 30-minute default value is assigned. If you wish to modify when maintenance is performed on your behalf, you can do so by modifying your DB instance in the AWS Management Console, the ModifyDBInstance API, or the modify-db-instance command. Each of your DB instances can have different preferred maintenance windows, if you so choose. Running your DB instance as a Multi-AZ deployment can further reduce the impact of a maintenance event. Please refer to the Amazon RDS User Guide for more information on maintenance operations.
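To make the maintenance-window guidance above concrete, here is a minimal boto3 sketch corresponding to the ModifyDBInstance API mentioned; the instance name and window are placeholders.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Move the weekly 30-minute maintenance window to early Sunday morning (UTC).
    rds.modify_db_instance(
        DBInstanceIdentifier="my-db-instance",             # placeholder
        PreferredMaintenanceWindow="sun:06:00-sun:06:30",  # ddd:hh24:mi-ddd:hh24:mi, UTC
    )

    # The current window can be read back from the instance description.
    desc = rds.describe_db_instances(DBInstanceIdentifier="my-db-instance")
    print(desc["DBInstances"][0]["PreferredMaintenanceWindow"])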
/rds/faqs/;What should I do if my queries seem to be running slowly?;"For production databases, we encourage you to enable Enhanced Monitoring , which provides access to over 50 CPU, memory, file system, and disk I/O metrics. You can enable these features on a per-instance basis and you can choose the granularity (all the way down to 1 second). High levels of CPU utilization can reduce query performance and in this case, you may want to consider scaling your DB instance class. For more information on monitoring your DB instance, refer to the Amazon RDS User Guide . If you are using RDS for MySQL or MariaDB, you can access the slow query logs for your database to determine if there are slow-running SQL queries and, if so, the performance characteristics of each. You could set the ""slow_query_log"" DB Parameter and query the mysql.slow_log table to review the slow-running SQL queries. Please refer to the Amazon RDS User Guide to learn more. If you are using RDS for Oracle, you can use the Oracle trace file data to identify slow queries. For more information on accessing trace file data, please refer to Amazon RDS User Guide . If you are using RDS for SQL Server, you can use the client side SQL Server traces to identify slow queries. For information on accessing server side trace file data, please refer to Amazon RDS User Guide ." /rds/faqs/;Which relational database engine versions does Amazon RDS support?;For the list of supported database engine versions, please refer to the documentation for each engine: Amazon RDS for MySQL Amazon RDS for MariaDB Amazon RDS for PostgreSQL Amazon RDS for Oracle Amazon RDS for SQL Server Amazon Aurora /rds/faqs/;How does Amazon RDS distinguish between “major” and “minor” DB engine versions?;Refer to the FAQs page for each Amazon RDS database engine for specifics on version numbering: Amazon RDS for MySQL Amazon RDS for MariaDB Amazon RDS for PostgreSQL Amazon RDS for Oracle Amazon RDS for SQL Server Amazon Aurora /rds/faqs/;Does Amazon RDS provide guidelines for support of new DB engine versions?;Over time, Amazon RDS adds support for new major and minor database engine versions. The number of new versions supported will vary based on the frequency and content of releases and patches from the engine’s vendor or development organization and the outcome of a thorough vetting of these releases and patches by our database engineering team. However, as a general guidance, we aim to support new engine versions within 5 months of their general availability. /rds/faqs/;How do I specify which supported DB engine version I would like my DB instance to run?;You can specify any currently supported version (major and minor) when creating a new DB instance via the Launch DB Instance operation in the AWS Management Console or the CreateDBInstance API. Please note that not every database engine version is available in every AWS region. /rds/faqs/;How do I control if and when the engine version of my DB instance is upgraded to new supported versions?;Amazon RDS strives to keep your database instance up to date by providing you with newer versions of the supported database engines. After a new version of a database engine is released by the vendor or development organization, it is thoroughly tested by our database engineering team before it is made available in Amazon RDS. We recommend that you keep your database instance upgraded to the most current minor version as it will contain the latest security and functionality fixes. 
Unlike major version upgrades, minor version upgrades only include database changes that are backward-compatible with previous minor versions (of the same major version) of the database engine. If a new minor version does not contain fixes that would benefit Amazon RDS customers, we may choose not to make it available in Amazon RDS. Soon after a new minor version is available in Amazon RDS, we will set it to be the preferred minor version for new DB instances. To manually upgrade a database instance to a supported engine version, use the Modify DB Instance command on the AWS Management Console or the ModifyDBInstance API and set the DB Engine Version parameter to the desired version. By default, the upgrade will be applied during your next maintenance window. You can also choose to upgrade immediately by selecting the Apply Immediately option in the console API. If we determine that a new engine minor version contains significant bug fixes compared to a previously released minor version, we will schedule automatic upgrades for DB instances that have the Auto Minor Version Upgrade setting to “Yes”. These upgrades will be scheduled to occur during customer-specified maintenance windows. We schedule them so you can plan around them because downtime is required to upgrade a DB engine version, even for Multi-AZ instances. If you wish to turn off automatic minor version upgrades you can do so by setting the Auto Minor Version Upgrade setting to “No”. In the case of RDS for Oracle and RDS for SQL Server, if the upgrade to the next minor version requires a change to a different edition, then we may not schedule automatic upgrades even if you have enabled the Auto Minor Version Upgrade setting. The determination on whether to schedule automatic upgrades in such situations will be made on a case-by-case basis. Since major version upgrades involve some compatibility risk, they will not occur automatically and must be initiated by you (except in the case of major version deprecation, see below). For more information about upgrading a DB instance to a new DB engine version, refer to the Amazon RDS User Guide. /rds/faqs/;Can I test my DB instance with a new version before upgrading?;Yes. You can do so by creating a DB snapshot of your existing DB instance, restoring from the DB snapshot to create a new DB instance, and then initiating a version upgrade for the new DB instance. You can then experiment safely on the upgraded copy of your DB instance before deciding whether or not to upgrade your original DB instance. For more information about restoring a DB snapshot, refer to the Amazon RDS User Guide. /rds/faqs/;Does Amazon RDS provide guidelines for deprecating database engine versions that are currently supported?;We intend to support major version releases (e.g., MySQL 5.6, PostgreSQL 9.6) for at least 3 years after they are initially supported by Amazon RDS. We intend to support minor versions (e.g., MySQL 5.6.37, PostgreSQL 9.6.1) for at least 1 year after they are initially supported by Amazon RDS. Periodically, we will deprecate major or minor engine versions. Major versions are made available at least until the community end of life for the corresponding community version or the version is no longer receiving software fixes or security updates. For minor versions, this is when a minor version has significant bugs or security issues that have been resolved in a later minor version. 
While we strive to meet these guidelines, in some cases we may deprecate specific major or minor versions sooner, such as when there are security issues. In the unlikely event that such cases occur, Amazon RDS will automatically upgrade your database engine to address the issue. Specific circumstances may dictate different timelines depending on the issue being addressed. /rds/faqs/;What happens when an Amazon RDS DB engine version is deprecated?;When a minor version of a database engine is deprecated in Amazon RDS, we will provide a three (3) month period after the announcement before beginning automatic upgrades. At the end of this period, all instances still running the deprecated minor version will be scheduled for automatic upgrade to the latest supported minor version during their scheduled maintenance windows. When a major version of the database engine is deprecated in Amazon RDS, we will provide a minimum six (6) month period after the announcement of a deprecation for you to initiate an upgrade to a supported major version. At the end of this period, an automatic upgrade to the next major version will be applied to any instances still running the deprecated version during their scheduled maintenance windows. Once a major or minor database engine version is deprecated in Amazon RDS, any DB instance restored from a DB snapshot created with the unsupported version will automatically and immediately be upgraded to a currently supported version. /rds/faqs/;Why can I not create a particular version?;In some cases, we may deprecate specific major or minor versions without prior notice, such as when we discover a version does not meet our high quality, performance, or security bar. In the unlikely event that such cases occur, Amazon RDS will discontinue the creation of new database instances and clusters with these versions. Existing customers may continue to be able to run their databases. Specific circumstances may dictate different timelines depending on the issue being addressed. /rds/faqs/;How will I be charged and billed for my use of Amazon RDS?;You pay only for what you use and there are no minimum or setup fees. You are billed based on: DB instance hours – Based on the class (e.g. db.t2.micro, db.m4.large) of the DB instance consumed. Partial DB instance hours consumed are billed in one-second increments with a 10 minute minimum charge following a billable status change, such as creating, starting, or modifying the DB instance class. For additional details, read our what's new announcement. Storage (per GB per month) – Storage capacity you have provisioned to your DB instance. If you scale your provisioned storage capacity within the month, your bill will be pro-rated. I/O requests per month – Total number of storage I/O requests you have (for Amazon RDS Magnetic Storage and Amazon Aurora only) Provisioned IOPS per month – Provisioned IOPS rate, regardless of IOPS consumed (for Amazon RDS Provisioned IOPS (SSD) Storage only) Backup Storage – Backup storage is the storage associated with your automated database backups and any customer-initiated database snapshots. Increasing your backup retention period or taking additional database snapshots increases the backup storage consumed by your database. Data transfer – Internet data transfer in and out of your DB instance. For Amazon RDS pricing information, please visit the pricing section on the Amazon RDS product page. 
/rds/faqs/;When does billing of my Amazon RDS DB instances begin and end?;Billing commences for a DB instance as soon as the DB instance is available. Billing continues until the DB instance terminates, which would occur upon deletion or in the event of an instance failure. /rds/faqs/;What defines billable Amazon RDS instance hours?;DB instance hours are billed for each hour your DB instance is running in an available state. If you no longer wish to be charged for your DB instance, you must stop or delete it to avoid being billed for additional instance hours. Partial DB instance hours consumed are billed in one-second increments with a 10 minute minimum charge following a billable status change, such as creating, starting, or modifying the DB instance class. /rds/faqs/;How will I be billed for a stopped DB instance?;While your database instance is stopped, you are charged for provisioned storage (including Provisioned IOPS) and backup storage (including manual snapshots and automated backups within your specified retention window), but not for DB instance hours. /rds/faqs/;How will I be billed for backups storage?;Free backup storage is provided up to your account's total provisioned database storage across the entire region. For example, if you have a MySQL DB instance with 100 GB of provisioned storage over the month, and a PostgreSQL DB instance with 150 GB of provisioned storage over the month, both in the same region and same account, we will provide 250 GB of backup storage in this account and region at no additional charge. You will only be charged for backup storage that exceeds this amount. Each day, your account's total provisioned database storage in the region is compared against your total backup storage in the region, and only the excess backup storage is charged. For example, if you have exactly 10 GB of excess backup storage each day, you will be charged for 10 GB-month of backup storage for the month. Alternatively, if you have 300 GB of provisioned storage each day, and 500 GB of backup storage each day, but only for half the month, then you will only be charged for 100 GB-month of backup storage (not 200 GB-month), since the charge is calculated daily (prorated), and the backups did not exist for the entire month. Please note that the free backup storage is account-specific and region-specific. The size of your backups are directly proportional to the amount of data on your instance. For example, if you have a DB instance with 100 GB of provisioned storage, but only store 5 GB of data on it, your first backup will only be approximately 5 GB (not 100 GB). Subsequent backups are incremental, and will only store the changed data on your DB instance. Please note that the backup storage size is not displayed in RDS Console nor in the DescribeDBSnapshots API response. /rds/faqs/;Why does my additional backup storage cost more than the allocated DB instance storage?;The storage provisioned to your DB instance for your primary data is located within a single Availability Zone. When your database is backed up, the backup data (including transactions logs) is geo-redundantly replicated across multiple Availability Zones to provide even greater levels of data durability. The price for backup storage beyond your free allocation reflects this extra replication that occurs to maximize the durability of your critical backups. 
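As a small, hedged illustration of the instance-hour billing rules above: stopping a DB instance pauses instance-hour charges while provisioned storage and backup storage continue to be billed, as noted. The boto3 sketch below uses a placeholder identifier; per the Amazon RDS documentation, a stopped instance is started again automatically after seven days.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Stop the instance to pause DB instance-hour charges
    # (provisioned storage and backup storage are still billed).
    rds.stop_db_instance(DBInstanceIdentifier="my-db-instance")   # placeholder

    # Start it again when needed.
    rds.start_db_instance(DBInstanceIdentifier="my-db-instance")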
/rds/faqs/;How will I be billed for Multi-AZ DB instance deployments?;If you specify that your DB instance should be a Multi-AZ deployment, you will be billed according to the Multi-AZ pricing posted on the Amazon RDS pricing page. Multi-AZ billing is based on: Multi-AZ DB instance hours – Based on the class (e.g. db.t2.micro, db.m4.large) of the DB instance consumed. As with standard deployments in a single Availability Zone, partial DB instance hours consumed are billed in one-second increments with a 10 minute minimum charge following a billable status change, such as creating, starting, or modifying the DB instance class. If you convert your DB instance deployment between standard and Multi-AZ within a given hour, you will be charged both applicable rates for that hour. Provisioned storage (for Multi-AZ DB instance) – If you convert your deployment between standard and Multi-AZ within a given hour, you will be charged the higher of the applicable storage rates for that hour. I/O requests per month – Total number of storage I/O requests you have. Multi-AZ deployments consume a larger volume of I/O requests than standard DB instance deployments, depending on your database write/read ratio. Write I/O usage associated with database updates will double as Amazon RDS synchronously replicates your data to the standby DB instance. Read I/O usage will remain the same. Backup Storage – Your backup storage usage will not change whether your DB instance is a standard or Multi-AZ deployment. Backups will simply be taken from your standby to avoid I/O suspension on the DB instance primary. Data transfer – You are not charged for the data transfer incurred in replicating data between your primary and standby. Internet data transfer in and out of your DB instance is charged the same as with a standard deployment. /rds/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, the use of AWS services is subject to Japanese Consumption Tax. Learn more. /rds/faqs/;What does the AWS Free Tier for Amazon RDS offer?;"The AWS Free Tier for Amazon RDS offer provides free use of Single-AZ Micro DB instances running MySQL, MariaDB, PostgreSQL, Oracle (""Bring-Your-Own-License (BYOL)"" licensing model), and SQL Server Express Edition. The free usage tier is capped at 750 instance hours per month. Customers also receive 20 GB of General Purpose (SSD) database storage and 20 GB of backup storage for free per month." /rds/faqs/;For what time period will the AWS Free Tier for Amazon RDS be available to me?;New AWS accounts receive 12 months of AWS Free Tier access. Please see the AWS Free Tier FAQs for more information. /rds/faqs/;Can I run more than one DB instance under the AWS Free Usage Tier for Amazon RDS?;Yes. You can run more than one Single-AZ Micro DB instance simultaneously and be eligible for usage counted under the AWS Free Tier for Amazon RDS. However, any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances and across all eligible database engines and regions, will be billed at standard Amazon RDS prices. For example, if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price. 
/rds/faqs/;Do I have access to 750 instance hours each of the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server Micro DB instances under the AWS Free Tier?;No. A customer with access to the AWS Free Tier can use up to 750 instance hours of Micro instances running either MySQL, PostgreSQL, Oracle, or SQL Server Express Edition. Any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances and across all eligible database engines and regions, will be billed at standard Amazon RDS prices. /rds/faqs/;How am I billed when my instance-hour usage exceeds the Free Tier benefit?;You are billed at standard Amazon RDS prices for instance hours beyond what the Free Tier provides. See the Amazon RDS pricing page for details. /rds/faqs/;What is a reserved instance (RI)?;Amazon RDS reserved instances give you the option to reserve a DB instance for a one or three year term and in turn receive a significant discount compared to the on-demand instance pricing for the DB instance. There are three RI payment options -- No Upfront, Partial Upfront, All Upfront -- which enable you to balance the amount you pay upfront with your effective hourly price. /rds/faqs/;How are reserved instances different from on-demand DB instances?;Functionally, reserved instances and on-demand DB instances are exactly the same. The only difference is how your DB instance(s) are billed. With Reserved Instances, you purchase a one- or three-year reservation and in return receive a lower effective hourly usage rate (compared with on-demand DB instances) for the duration of the term. Unless you purchase reserved instances in a Region, all DB instances will be billed at on-demand hourly rates. /rds/faqs/;How do I purchase and create reserved instances?;"You can purchase a reserved instance in the ""Reserved Instance"" section of the AWS Management Console for Amazon RDS. Alternatively, you can use the Amazon RDS API or AWS Command Line Interface to list the reservations available for purchase and then purchase a DB instance reservation. Once you have made a reserved purchase, using a reserved DB instance is no different than using an On-Demand DB instance. Launch a DB instance using the same instance class, engine, and region for which you made the reservation. As long as your reservation purchase is active, Amazon RDS will apply the reduced hourly rate for which you are eligible to the new DB instance." /rds/faqs/;Do reserved instances include a capacity reservation?;Amazon RDS reserved instances are purchased for a Region rather than for a specific Availability Zone. As RIs are not specific to an Availability Zone, they are not capacity reservations. This means that even if capacity is limited in one Availability Zone, reservations can still be purchased in the Region and the discount will apply to matching usage in any Availability Zone within that Region. /rds/faqs/;How many reserved instances can I purchase?;You can purchase up to 40 reserved DB instances. If you wish to run more than 40 DB instances, please complete the Amazon RDS DB Instance request form. /rds/faqs/;What if I have an existing DB instance that I’d like to cover with a reserved instance?;Simply purchase a DB instance reservation with the same DB instance class, DB engine, Multi-AZ option, and License Model within the same Region as the DB instance you are currently running and would like to reserve. If the reservation purchase is successful, Amazon RDS will automatically apply your new hourly usage charge to your existing DB instance. 
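As a rough sketch of the API route for purchasing a reservation, the boto3 calls below list matching reservation offerings and purchase one; the instance class, engine, term, and offering type shown are hypothetical example values, not recommendations.

```python
import boto3

rds = boto3.client("rds")  # assumes credentials and a default region are configured

# Find offerings matching the instance class, engine, and term you want to reserve.
offerings = rds.describe_reserved_db_instances_offerings(
    DBInstanceClass="db.m4.large",   # hypothetical example values throughout
    ProductDescription="mysql",
    Duration="31536000",             # one-year term, expressed in seconds
    MultiAZ=False,
    OfferingType="No Upfront",
)["ReservedDBInstancesOfferings"]

# Purchase the first matching offering; the reservation then applies automatically
# to eligible running DB instances when your bill is computed.
if offerings:
    rds.purchase_reserved_db_instances_offering(
        ReservedDBInstancesOfferingId=offerings[0]["ReservedDBInstancesOfferingId"],
        DBInstanceCount=1,
    )
```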
/rds/faqs/;If I sign up for a reserved instance, when does the term begin? What happens to my DB instance when the term ends?;Pricing changes associated with a reserved instance are activated once your request is received while the payment authorization is processed. You can follow the status of your reservation on the AWS Account Activity page or by using the DescribeReservedDBInstances API or describe-reserved-db-instances command. If the one-time payment cannot be successfully authorized by the next billing period, the discounted price will not take effect. When your reservation term expires, your reserved instance will revert to the appropriate On-Demand hourly usage rate for your DB instance class and Region. /rds/faqs/;How do I control which DB instances are billed at the reserved instance rate?;The Amazon RDS operations for creating, modifying, and deleting DB instances do not distinguish between on-demand and reserved instances. When computing your bill, our system will automatically apply your Reservation(s) such that all eligible DB instances are charged at the lower hourly reserved DB instance rate. /rds/faqs/;If I scale my DB instance class up or down, what happens to my reservation?;"Each reservation is associated with the following set of attributes: DB engine, DB instance class, Multi-AZ deployment option, license model, and Region. A reservation for a DB engine and license model that is eligible for size-flexibility (MySQL, MariaDB, PostgreSQL, Amazon Aurora, or Oracle ""Bring Your Own License"") will automatically apply to a running DB instance of any size within the same instance family (e.g. M4, T2, or R3) for the same database engine and Region. In addition, the reservation will also apply to DB instances running in either Single-AZ or Multi-AZ deployment options. For example, let’s say you purchased a db.m4.2xlarge MySQL reservation. If you decide to scale up the running DB instance to a db.m4.4xlarge, the discounted rate of this RI will cover 1/2 of the usage of the larger DB instance. If you are running a DB engine or license model that is not eligible for size-flexibility (Microsoft SQL Server or Oracle ""License Included""), each reservation can only be applied to a DB instance with the same attributes for the duration of the term. If you decide to modify any of these attributes of your running DB instance before the end of the reservation term, your hourly usage rates for that DB instance will revert to on-demand hourly rates. For more details about size flexibility, see the Amazon RDS User Guide." /rds/faqs/;Can I move a reserved instance from one Region or Availability Zone to another?;Each reserved instance is associated with a specific Region, which is fixed for the lifetime of the reservation and cannot be changed. Each reservation can, however, be used in any of the available AZs within the associated Region. /rds/faqs/;Are reserved instances available for Multi-AZ deployments?;Yes. When you purchase a reserved instance, you can select the Multi-AZ option in the DB instance configuration available for purchase. In addition, if you are using a DB engine and license model that supports reserved instance size-flexibility, a Multi-AZ reserved instance will cover usage for two Single-AZ DB instances. /rds/faqs/;Are reserved instances available for read replicas?;A DB instance reservation can be applied to a read replica, provided the DB instance class and Region are the same. 
When computing your bill, our system will automatically apply your Reservation(s), such that all eligible DB instances are charged at the lower hourly reserved instance rate. /rds/faqs/;Can I cancel a reservation?;No, you cannot cancel your reserved DB instance and the one-time payment (if applicable) is not refundable. You will continue to pay for every hour during your Reserved DB instance term regardless of your usage. /rds/faqs/;How do the payment options impact my bill?;When you purchase an RI under the All Upfront payment option, you pay for the entire term of the RI in one upfront payment. You can choose to pay nothing upfront by choosing the No Upfront option. The entire value of the No Upfront RI is spread across every hour in the term and you will be billed for every hour in the term, regardless of usage. The Partial Upfront payment option is a hybrid of the All Upfront and No Upfront options. You make a small upfront payment, and you are billed a low hourly rate for every hour in the term regardless of usage. /rds/faqs/;How do I determine which initial DB instance class and storage capacity are appropriate for my needs?;In order to select your initial DB instance class and storage capacity, you will want to assess your application’s compute, memory, and storage needs. For information about the DB instance classes available, please refer to the Amazon RDS User Guide. /rds/faqs/;How do I scale the compute resources and/or storage capacity associated with my Amazon RDS Database Instance?;You can scale the compute resources and storage capacity allocated to your DB instance with the AWS Management Console (selecting the desired DB instance and clicking the Modify button), the Amazon RDS API, or the AWS Command Line Interface. Memory and CPU resources are modified by changing your DB Instance class, and storage available is changed when you modify your storage allocation. Please note that when you modify your DB Instance class or allocated storage, your requested changes will be applied during your specified maintenance window. Alternatively, you can use the “apply-immediately” flag to apply your scaling requests immediately. Bear in mind that any other pending system changes will be applied as well. Some older RDS for SQL Server instances may not be eligible for scaled storage. See the RDS for SQL Server FAQ for more information. /rds/faqs/;What is the hardware configuration for Amazon RDS storage?;Amazon RDS uses EBS volumes for database and log storage. Depending on the size of storage requested, Amazon RDS automatically stripes across multiple EBS volumes to enhance IOPS performance. For MySQL and Oracle, for an existing DB instance, you may observe some I/O capacity improvement if you scale up your storage. You can scale the storage capacity allocated to your DB Instance using the AWS Management Console, the ModifyDBInstance API, or the modify-db-instance command. For more information, see Storage for Amazon RDS. /rds/faqs/;Will my DB instance remain available during scaling?;The storage capacity allocated to your DB Instance can be increased while maintaining DB Instance availability. However, when you decide to scale the compute resources available to your DB instance up or down, your database will be temporarily unavailable while the DB instance class is modified. This period of unavailability typically lasts only a few minutes, and will occur during the maintenance window for your DB Instance, unless you specify that the modification should be applied immediately. 
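A minimal boto3 sketch of such a scaling request, assuming a hypothetical instance identifier and target values, might look like this:

```python
import boto3

rds = boto3.client("rds")

# Scale an existing instance's class and storage. Without ApplyImmediately the change waits
# for the next maintenance window; with it, the change (and any other pending modifications)
# is applied right away, which may involve a short outage for a class change.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",   # hypothetical identifier
    DBInstanceClass="db.m4.large",
    AllocatedStorage=200,                  # in GB; storage can generally grow without downtime
    ApplyImmediately=True,
)
```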
/rds/faqs/;How can I scale my DB instance beyond the largest DB instance class and maximum storage capacity?;Amazon RDS supports a variety of DB instance classes and storage allocations to meet different application needs. If your application requires more compute resources than the largest DB instance class or more storage than the maximum allocation, you can implement partitioning, thereby spreading your data across multiple DB instances. /rds/faqs/;What is Amazon RDS General Purpose (SSD) storage?;Amazon RDS General Purpose (SSD) Storage is suitable for a broad range of database workloads that have moderate I/O requirements. With a baseline of 3 IOPS/GB and the ability to burst up to 3,000 IOPS, this storage option provides predictable performance to meet the needs of most applications. /rds/faqs/;What is Amazon RDS Provisioned IOPS (SSD) storage?;Amazon RDS Provisioned IOPS (SSD) Storage is an SSD-backed storage option designed to deliver fast, predictable, and consistent I/O performance. With Amazon RDS Provisioned IOPS (SSD) Storage, you specify an IOPS rate when creating a DB instance, and Amazon RDS provisions that IOPS rate for the lifetime of the DB instance. Amazon RDS Provisioned IOPS (SSD) Storage is optimized for I/O-intensive, transactional (OLTP) database workloads. For more details, please see the Amazon RDS User Guide. /rds/faqs/;What is Amazon RDS magnetic storage?;Amazon RDS magnetic storage is useful for small database workloads where data is accessed less frequently. Magnetic storage is not recommended for production database instances. /rds/faqs/;How do I choose among the Amazon RDS storage types?;Choose the storage type most suited for your workload. High-performance OLTP workloads: Amazon RDS Provisioned IOPS (SSD) Storage. Database workloads with moderate I/O requirements: Amazon RDS General Purpose (SSD) Storage /rds/faqs/;What are the minimum and maximum IOPS supported by Amazon RDS?;The IOPS supported by Amazon RDS varies by database engine. For more details, please see the Amazon RDS User Guide. /rds/faqs/;What is the difference between automated backups and DB Snapshots?;"Amazon RDS provides two different methods for backing up and restoring your DB instance(s): automated backups and database snapshots (DB Snapshots). The automated backup feature of Amazon RDS enables point-in-time recovery of your DB instance. When automated backups are turned on for your DB Instance, Amazon RDS automatically performs a full daily snapshot of your data (during your preferred backup window) and captures transaction logs (as updates to your DB Instance are made). When you initiate a point-in-time recovery, transaction logs are applied to the most appropriate daily backup in order to restore your DB instance to the specific time you requested. Amazon RDS retains backups of a DB Instance for a limited, user-specified period of time called the retention period, which by default is 7 days but can be set to up to 35 days. You can initiate a point-in-time restore and specify any second during your retention period, up to the Latest Restorable Time. You can use the DescribeDBInstances API to return the latest restorable time for your DB instance, which is typically within the last five minutes. Alternatively, you can find the Latest Restorable Time for a DB instance by selecting it in the AWS Management Console and looking in the “Description” tab in the lower panel of the Console. 
DB Snapshots are user-initiated and enable you to back up your DB instance in a known state as frequently as you wish, and then restore to that specific state at any time. DB Snapshots can be created with the AWS Management Console, CreateDBSnapshot API, or create-db-snapshot command and are kept until you explicitly delete them. The snapshots which Amazon RDS performs for enabling automated backups are available to you for copying (using the AWS console or the copy-db-snapshot command) or for the snapshot restore functionality. You can identify them using the ""automated"" Snapshot Type. In addition, you can identify the time at which the snapshot has been taken by viewing the ""Snapshot Created Time"" field. Alternatively, the identifier of the ""automated"" snapshots also contains the time (in UTC) at which the snapshot has been taken. Please note: When you perform a restore operation to a point in time or from a DB Snapshot, a new DB Instance is created with a new endpoint (the old DB Instance can be deleted if so desired). This is done to enable you to create multiple DB Instances from a specific DB Snapshot or point in time." /rds/faqs/;Do I need to enable backups for my DB Instance or is it done automatically?;By default, Amazon RDS enables automated backups of your DB instance with a 7-day retention period. If you would like to modify your backup retention period, you can do so using the RDS Console, the CreateDBInstance API (when creating a new DB Instance), or the ModifyDBInstance API (for existing instances). You can use these methods to change the RetentionPeriod parameter to any number from 0 (which will disable automated backups) to the desired number of days, up to 35. The value cannot be set to 0 if the DB instance is a source to Read Replicas. For more information on automated backups, please refer to the Amazon RDS User Guide. /rds/faqs/;What is a backup window and why do I need it? Is my database available during the backup window?;The preferred backup window is the user-defined period of time during which your DB Instance is backed up. Amazon RDS uses these periodic data backups in conjunction with your transaction logs to enable you to restore your DB Instance to any second during your retention period, up to the LatestRestorableTime (typically up to the last few minutes). During the backup window, storage I/O may be briefly suspended while the backup process initializes (typically under a few seconds) and you may experience a brief period of elevated latency. There is no I/O suspension for Multi-AZ DB deployments, since the backup is taken from the standby. /rds/faqs/;Where are my automated backups and DB snapshots stored and how do I manage their retention?;"Amazon RDS DB snapshots and automated backups are stored in S3. You can use the AWS Management Console, the ModifyDBInstance API, or the modify-db-instance command to manage the period of time your automated backups are retained by modifying the RetentionPeriod parameter. If you desire to turn off automated backups altogether, you can do so by setting the retention period to 0 (not recommended). You can manage your user-created DB Snapshots via the ""Snapshots"" section of the Amazon RDS Console. Alternatively, you can see a list of the user-created DB Snapshots for a given DB Instance using the DescribeDBSnapshots API or describe-db-snapshots command, and delete snapshots with the DeleteDBSnapshot API or delete-db-snapshot command." 
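As an illustrative boto3 sketch of the backup and restore operations mentioned above (instance and snapshot identifiers are hypothetical), the calls below read the latest restorable time, restore to a point in time, take a manual snapshot, and list the system-created snapshots:

```python
import boto3

rds = boto3.client("rds")
SOURCE = "mydbinstance"  # hypothetical identifier

# Latest restorable time for point-in-time recovery (typically within the last five minutes).
latest = rds.describe_db_instances(DBInstanceIdentifier=SOURCE)["DBInstances"][0]["LatestRestorableTime"]
print("Latest restorable time:", latest)

# Restoring creates a new DB instance with a new endpoint rather than overwriting the source.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier=SOURCE,
    TargetDBInstanceIdentifier="mydbinstance-restored",
    UseLatestRestorableTime=True,
)

# Take a user-initiated DB snapshot, and list the system-created ("automated") ones.
rds.create_db_snapshot(DBInstanceIdentifier=SOURCE, DBSnapshotIdentifier="mydbinstance-manual-snapshot")
automated = rds.describe_db_snapshots(DBInstanceIdentifier=SOURCE, SnapshotType="automated")["DBSnapshots"]
print([snap["DBSnapshotIdentifier"] for snap in automated])
```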
/rds/faqs/;Why do I have more automated DB snapshots than the number of days in the retention period for my DB instance?;It is normal to have 1 or 2 more automated DB snapshots than the number of days in your retention period. One extra automated snapshot is retained to ensure the ability to perform a point in time restore to any time during the retention period. For example, if your backup retention period is set to 1 day, you will require 2 automated snapshots to support restores to any point within the previous 24 hours. You may also see an additional automated snapshot as a new automated snapshot is always created before the oldest automated snapshot is deleted. /rds/faqs/;What happens to my backups and DB snapshots if I delete my DB instance?;"When you delete a DB instance, you can create a final DB snapshot upon deletion; if you do, you can use this DB snapshot to restore the deleted DB instance at a later date. Amazon RDS retains this final user-created DB snapshot along with all other manually created DB snapshots after the DB instance is deleted. Refer to the pricing page for details of backup storage costs. Automated backups are deleted when the DB instance is deleted. Only manually created DB Snapshots are retained after the DB Instance is deleted." /rds/faqs/;What is Amazon Virtual Private Cloud (VPC) and how does it work with Amazon RDS?;Amazon VPC lets you create a virtual networking environment in a private, isolated section of the AWS cloud where you can exercise complete control over aspects such as private IP address ranges, subnets, routing tables, and network gateways. With Amazon VPC, you can define a virtual network topology and customize the network configuration to closely resemble a traditional IP network that you might operate in your own data center. One way that you can take advantage of VPC is when you want to run a public-facing web application while still maintaining non-publicly accessible backend servers in a private subnet. You can create a public-facing subnet for your webservers that has access to the Internet, and place your backend Amazon RDS DB Instances in a private-facing subnet with no Internet access. For more information about Amazon VPC, refer to the Amazon Virtual Private Cloud User Guide. /rds/faqs/;How is using Amazon RDS inside a VPC different from using it on the EC2-Classic platform (non-VPC)?;If your AWS account was created before 2013-12-04, you may be able to run Amazon RDS in an Amazon Elastic Compute Cloud (EC2)-Classic environment. The basic functionality of Amazon RDS is the same regardless of whether EC2-Classic or EC2-VPC is used. Amazon RDS manages backups, software patching, automatic failure detection, read replicas, and recovery whether your DB Instances are deployed inside or outside a VPC. For more information about the differences between EC2-Classic and EC2-VPC, see the EC2 documentation. /rds/faqs/;What is a DB Subnet Group and why do I need one?;A DB Subnet Group is a collection of subnets that you may want to designate for your Amazon RDS DB Instances in a VPC. Each DB Subnet Group should have at least one subnet for every Availability Zone in a given Region. When creating a DB Instance in VPC, you will need to select a DB Subnet Group. Amazon RDS then uses that DB Subnet Group and your preferred Availability Zone to select a subnet and an IP address within that subnet. Amazon RDS creates and associates an Elastic Network Interface to your DB Instance with that IP address. 
Please note that we strongly recommend you use the DNS name to connect to your DB Instance as the underlying IP address can change (e.g., during failover). For Multi-AZ deployments, defining a subnet for all Availability Zones in a Region will allow Amazon RDS to create a new standby in another Availability Zone should the need arise. You should do this even for Single-AZ deployments, in case you want to convert them to Multi-AZ deployments at some point. /rds/faqs/;How do I create an Amazon RDS DB Instance in VPC?;For a procedure that walks you through this process, refer to Creating a DB Instance in a VPC in the Amazon RDS User Guide. /rds/faqs/;How do I control network access to my DB Instance(s)?;Visit the Security Groups section of the Amazon RDS User Guide to learn about the different ways to control access to your DB Instances. /rds/faqs/;How do I connect to an Amazon RDS DB Instance in VPC?;DB Instances deployed within a VPC can be accessed by EC2 Instances deployed in the same VPC. If these EC2 Instances are deployed in a public subnet with associated Elastic IPs, you can access the EC2 Instances via the internet. DB Instances deployed within a VPC can be accessed from the Internet or from EC2 Instances outside the VPC via VPN or bastion hosts that you can launch in your public subnet or using Amazon RDS's Publicly Accessible option: To use a bastion host, you will need to set up a public subnet with an EC2 instance that acts as an SSH bastion. This public subnet must have an internet gateway and routing rules that allow traffic to be directed via the SSH host, which must then forward requests to the private IP address of your Amazon RDS DB instance. To use public connectivity, simply create your DB Instances with the Publicly Accessible option set to yes. With Publicly Accessible active, your DB Instances within a VPC will be fully accessible outside your VPC by default. This means you do not need to configure a VPN or bastion host to allow access to your instances. You can also set up a VPN Gateway that extends your corporate network into your VPC and allows access to the Amazon RDS DB instance in that VPC. Refer to the Amazon VPC User Guide for more details. We strongly recommend you use the DNS name to connect to your DB Instance as the underlying IP address can change (e.g., during failover). /rds/faqs/;Can I move my existing DB instances outside VPC into my VPC?;If your DB instance is not in a VPC, you can use the AWS Management Console to easily move your DB instance into a VPC. See the Amazon RDS User Guide for more details. You can also take a snapshot of your DB Instance outside VPC and restore it to VPC by specifying the DB Subnet Group you want to use. Alternatively, you can perform a “Restore to Point in Time” operation. /rds/faqs/;Can I move my existing DB instances from inside VPC to outside VPC?;Migration of DB Instances from inside to outside VPC is not supported. For security reasons, a DB Snapshot of a DB Instance inside VPC cannot be restored to outside VPC. The same is true with “Restore to Point in Time” functionality. /rds/faqs/;What precautions should I take to ensure that my DB Instances in VPC are accessible by my application?;You are responsible for modifying routing tables and networking ACLs in your VPC to ensure that your DB instance is reachable from your client instances in the VPC. For Multi-AZ deployments, after failover, your client EC2 instance and Amazon RDS DB Instance may be in different Availability Zones. 
You should configure your networking ACLs to ensure that cross-AZ communication is possible. /rds/faqs/;Can I change the DB Subnet Group of my DB Instance?;An existing DB Subnet Group can be updated to add more subnets, either for existing Availability Zones or for new Availability Zones added since the creation of the DB Instance. Removing subnets from an existing DB Subnet Group can cause unavailability for instances if they are running in a particular AZ that gets removed from the subnet group. View the Amazon RDS User Guide for more information. /rds/faqs/;What is an Amazon RDS primary user account and how is it different from an AWS account?;To begin using Amazon RDS you will need an AWS developer account. If you do not have one prior to signing up for Amazon RDS, you will be prompted to create one when you begin the sign-up process. A primary user account is different from an AWS developer account and used only within the context of Amazon RDS to control access to your DB Instance(s). The primary user account is a native database user account that you can use to connect to your DB Instance. You can specify the primary user name and password you want associated with each DB Instance when you create the DB Instance. Once you have created your DB Instance, you can connect to the database using the primary user credentials. Subsequently, you may also want to create additional user accounts so that you can restrict who can access your DB Instance. /rds/faqs/;What privileges are granted to the primary user for my DB Instance?;"For MySQL, the default privileges for the primary user include: create, drop, references, event, alter, delete, index, insert, select, update, create temporary tables, lock tables, trigger, create view, show view, alter routine, create routine, execute, trigger, create user, process, show databases, grant option. For Oracle, the primary user is granted the ""dba"" role. The primary user inherits most of the privileges associated with the role. Please refer to the Amazon RDS User Guide for the list of restricted privileges and the corresponding alternatives to perform administrative tasks that may require these privileges. For SQL Server, a user that creates a database is granted the ""db_owner"" role. Please refer to the Amazon RDS User Guide for the list of restricted privileges and the corresponding alternatives to perform administrative tasks that may require these privileges." /rds/faqs/;Is there anything different about user management with Amazon RDS?;No, everything works the way you are familiar with when using a relational database you manage yourself. /rds/faqs/;Can programs running on servers in my own data center access Amazon RDS databases?;Yes. You have to intentionally turn on the ability to access your database over the internet by configuring Security Groups. You can authorize access for only the specific IPs, IP ranges, or subnets corresponding to servers in your own data center. /rds/faqs/;Can I encrypt connections between my application and my DB Instance using SSL/TLS?;"Yes, this option is supported for all Amazon RDS engines. Amazon RDS generates an SSL/TLS certificate for each DB Instance . Once an encrypted connection is established, data transferred between the DB Instance and your application will be encrypted during transfer. While SSL offers security benefits, be aware that SSL/TLS encryption is a compute-intensive operation and will increase the latency of your database connection. 
SSL/TLS support within Amazon RDS is for encrypting the connection between your application and your DB Instance; it should not be relied on for authenticating the DB Instance itself. For details on establishing an encrypted connection with Amazon RDS, please visit Amazon RDS's MySQL User Guide, MariaDB User Guide, PostgreSQL User Guide, or Oracle User Guide. To learn more about how SSL/TLS works with these engines, you can refer directly to the MySQL documentation, the MariaDB documentation, the MSDN SQL Server documentation, the PostgreSQL documentation, or the Oracle Documentation." /rds/faqs/;Can I encrypt data at rest on my Amazon RDS databases?;Amazon RDS supports encryption at rest for all database engines, using keys you manage using AWS Key Management Service (KMS). On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots. Encryption and decryption are handled transparently. For more information about the use of KMS with Amazon RDS, see the Amazon RDS User's Guide. You can also add encryption to a previously unencrypted DB instance or DB cluster by creating a DB snapshot and then creating a copy of that snapshot and specifying a KMS encryption key. You can then restore an encrypted DB instance or DB cluster from the encrypted snapshot. Amazon RDS for Oracle and SQL Server support those engines' Transparent Data Encryption (TDE) technologies. For more information, see the Amazon RDS User's Guide for Oracle and SQL Server . /rds/faqs/;How do I control the actions that my systems and users can take on specific Amazon RDS resources?;"You can control the actions that your AWS IAM users and groups can take on Amazon RDS resources. You do this by referencing the Amazon RDS resources in the AWS IAM policies that you apply to your users and groups. Amazon RDS resources that can be referenced in an AWS IAM policy include DB instances, DB snapshots, read replicas, DB security groups, DB option groups, DB parameter groups, event subscriptions, and DB subnet groups. In addition, you can tag these resources to add additional metadata to your resources. By using tagging, you can categorize your resources (e.g. ""Development"" DB instances, ""Production"" DB instances, and ""Test"" DB instances), and write AWS IAM policies that list the permissions (i.e. actions) that can be taken on resources with the same tags. For more information, refer to Tagging Amazon RDS Resources." /rds/faqs/;I wish to perform security analysis or operational troubleshooting on my Amazon RDS deployment. Can I get a history of all Amazon RDS API calls made on my account?;Yes. AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. /rds/faqs/;Can I use Amazon RDS with applications that require HIPAA compliance?;Yes, all Amazon RDS database engines are HIPAA-eligible, so you can use them to build HIPAA-compliant applications and store healthcare-related information, including protected health information (PHI) under an executed Business Associate Agreement (BAA) with AWS. If you already have an executed BAA, no action is necessary to begin using these services in the account(s) covered by your BAA. 
If you do not have an executed BAA with AWS, or have any other questions about HIPAA-compliant applications on AWS, please contact your account manager. /rds/faqs/;How do I choose the right configuration parameters for my DB Instance(s)?;By default, Amazon RDS chooses the optimal configuration parameters for your DB Instance taking into account the instance class and storage capacity. However, if you want to change them, you can do so using the AWS Management Console, the Amazon RDS APIs, or the AWS Command Line Interface. Please note that changing configuration parameters from recommended values can have unintended effects, ranging from degraded performance to system crashes, and should only be attempted by advanced users who wish to assume these risks. /rds/faqs/;What are DB Parameter groups? How are they helpful?;A database parameter group (DB Parameter Group) acts as a “container” for engine configuration values that can be applied to one or more DB Instances. If you create a DB Instance without specifying a DB Parameter Group, a default DB Parameter Group is used. This default group contains engine defaults and Amazon RDS system defaults optimized for the DB Instance you are running. However, if you want your DB Instance to run with your custom-specified engine configuration values, you can simply create a new DB Parameter Group, modify the desired parameters, and modify the DB Instance to use the new DB Parameter Group. Once associated, all DB Instances that use a particular DB Parameter Group get all the parameter updates to that DB Parameter Group. For more information on configuring DB Parameter Groups, please read the Amazon RDS User Guide. /rds/faqs/;How can I monitor the configuration of my Amazon RDS resources?;You can use AWS Config to continuously record configuration changes to Amazon RDS DB Instances, DB Subnet Groups, DB Snapshots, DB Security Groups, and Event Subscriptions and receive notification of changes through Amazon Simple Notification Service (SNS). You can also create AWS Config Rules to evaluate whether these Amazon RDS resources have the desired configurations. /rds/faqs/;What does it mean to run a DB instance as a Multi-AZ deployment?;When you create or modify your DB instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous “standby” replica in a different Availability Zone. Updates to your DB Instance are synchronously replicated across Availability Zones to the standby in order to keep both in sync and protect your latest database updates against DB instance failure. During certain types of planned maintenance, or in the unlikely event of DB instance failure or Availability Zone failure, Amazon RDS will automatically failover to the standby so that you can resume database writes and reads as soon as the standby is promoted. Since the name record for your DB instance remains the same, your application can resume database operation without the need for manual administrative intervention. With Multi-AZ deployments, replication is transparent. You do not interact directly with the standby, and it cannot be used to serve read traffic. More information about Multi-AZ deployments is in the Amazon RDS User Guide. /rds/faqs/;What is an Availability Zone?;Availability Zones are distinct locations within a Region that are engineered to be isolated from failures in other Availability Zones. Each Availability Zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. 
Common points of failure, such as generators and cooling equipment, are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornadoes, or flooding would only affect a single Availability Zone. Availability Zones within the same Region benefit from low-latency network connectivity. /rds/faqs/;What do “primary” and “standby” mean in the context of a Multi-AZ deployment?;When you run a DB instance as a Multi-AZ deployment, the “primary” serves database writes and reads. In addition, Amazon RDS provisions and maintains a “standby” behind the scenes, which is an up-to-date replica of the primary. The standby is “promoted” in failover scenarios. After failover, the standby becomes the primary and accepts your database operations. You do not interact directly with the standby (e.g. for read operations) at any point prior to promotion. If you are interested in scaling read traffic beyond the capacity constraints of a single DB instance, please see the FAQs on Read Replicas. /rds/faqs/;What are the benefits of a Multi-AZ deployment?;The chief benefits of running your DB instance as a Multi-AZ deployment are enhanced database durability and availability. The increased availability and fault tolerance offered by Multi-AZ deployments make them a natural fit for production environments. Running your DB instance as a Multi-AZ deployment safeguards your data in the unlikely event of a DB instance component failure or loss of availability in one Availability Zone. For example, if a storage volume on your primary fails, Amazon RDS automatically initiates a failover to the standby, where all of your database updates are intact. This provides additional data durability relative to standard deployments in a single AZ, where a user-initiated restore operation would be required and updates that occurred after the latest restorable time (typically within the last five minutes) would not be available. You also benefit from enhanced database availability when running your DB instance as a Multi-AZ deployment. If an Availability Zone failure or DB instance failure occurs, your availability impact is limited to the time automatic failover takes to complete. The availability benefits of Multi-AZ also extend to planned maintenance. For example, with automated backups, I/O activity is no longer suspended on your primary during your preferred backup window, since backups are taken from the standby. In the case of patching or DB instance class scaling, these operations occur first on the standby, prior to automatic failover. As a result, your availability impact is limited to the time required for automatic failover to complete. Another implied benefit of running your DB instance as a Multi-AZ deployment is that DB instance failover is automatic and requires no administration. In an Amazon RDS context, this means you are not required to monitor DB instance events and initiate manual DB instance recovery (via the RestoreDBInstanceToPointInTime or RestoreDBInstanceFromSnapshot APIs) in the event of an Availability Zone failure or DB instance failure. /rds/faqs/;Are there any performance implications of running my DB instance as a Multi-AZ deployment?;You may observe elevated latencies relative to a standard DB instance deployment in a single Availability Zone as a result of the synchronous data replication performed on your behalf. 
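A minimal boto3 sketch of enabling Multi-AZ, either when creating a DB instance or by modifying an existing one, might look like the following; all identifiers, credentials, and class/size values are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Create a new DB instance as a Multi-AZ deployment from the start.
rds.create_db_instance(
    DBInstanceIdentifier="mydbinstance",            # hypothetical values throughout
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
    MultiAZ=True,
)

# Or convert an existing Single-AZ instance. The standby is built from a snapshot, so no
# downtime is expected, though you may see elevated latency while the standby catches up.
rds.modify_db_instance(
    DBInstanceIdentifier="my-existing-instance",
    MultiAZ=True,
    ApplyImmediately=True,
)
```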
/rds/faqs/;When running my DB instance as a Multi-AZ deployment, can I use the standby for read or write operations?;No, a Multi-AZ standby cannot serve read requests. Multi-AZ deployments are designed to provide enhanced database availability and durability, rather than read scaling benefits. As such, the feature uses synchronous replication between primary and standby. Our implementation makes sure the primary and the standby are constantly in sync, but precludes using the standby for read or write operations. If you are interested in a read scaling solution, please see the FAQs on Read Replicas. /rds/faqs/;How do I set up a Multi-AZ DB instance deployment?;In order to create a Multi-AZ DB instance deployment, simply click the “Yes” option for “Multi-AZ Deployment” when launching a DB Instance with the AWS Management Console. Alternatively, if you are using the Amazon RDS APIs, you would call the CreateDBInstance API and set the “Multi-AZ” parameter to the value “true.” To convert an existing standard (single-AZ) DB instance to Multi-AZ, modify the DB instance in the AWS Management Console or use the ModifyDBInstance API and set the Multi-AZ parameter to true. /rds/faqs/;What happens when I convert my Amazon RDS instance from Single-AZ to Multi-AZ?;For the RDS for MySQL, MariaDB, PostgreSQL, and Oracle database engines, when you elect to convert your Amazon RDS instance from Single-AZ to Multi-AZ, the following happens: A snapshot of your primary instance is taken. A new standby instance is created in a different Availability Zone, from the snapshot. Synchronous replication is configured between primary and standby instances. As such, there should be no downtime incurred when an instance is converted from Single-AZ to Multi-AZ. However, you may see increased latency while the data on the standby is caught up to match to the primary. /rds/faqs/;What events would cause Amazon RDS to initiate a failover to the standby replica?;Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following: Loss of availability in primary Availability Zone Loss of network connectivity to primary Compute unit failure on primary Storage failure on primary Note: When operations such as DB instance scaling or system upgrades, like OS patching, are initiated for Multi-AZ deployments, for enhanced availability they are applied first on the standby prior to automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not failover automatically in response to database operations, such as long running queries, deadlocks, or database corruption errors. /rds/faqs/;Will I be alerted when automatic failover occurs?;Yes, Amazon RDS will emit a DB instance event to inform you that automatic failover occurred. You can click the “Events” section of the Amazon RDS Console or use the DescribeEvents API to return information about events related to your DB instance. You can also use Amazon RDS Event Notifications to be notified when specific DB events occur. /rds/faqs/;What happens during Multi-AZ failover and how long does it take?;"Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention. 
When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement database connection retry at the application layer. Failovers, as defined by the interval between the detection of the failure on the primary and the resumption of transactions on the standby, typically complete within one to two minutes. Failover time can also be affected by whether large uncommitted transactions must be recovered; the use of adequately large instance types is recommended with Multi-AZ for best results. AWS also recommends the use of Provisioned IOPS with Multi-AZ instances, for fast, predictable, and consistent throughput performance." /rds/faqs/;Can I initiate a “forced failover” for my Multi-AZ DB instance deployment?;Amazon RDS will automatically failover without user intervention under a variety of failure conditions. In addition, Amazon RDS provides an option to initiate a failover when rebooting your instance. You can access this feature via the AWS Management Console or when using the RebootDBInstance API call. /rds/faqs/;How do I control/configure Multi-AZ synchronous replication?;With Multi-AZ deployments, you simply set the “Multi-AZ” parameter to true. The creation of the standby, synchronous replication, and failover are all handled automatically. This means you cannot select the Availability Zone your standby is deployed in or alter the number of standbys available (Amazon RDS provisions one dedicated standby per DB instance primary). The standby also cannot be configured to accept database read activity. Learn more about Multi-AZ configurations. /rds/faqs/;Will my standby be in the same Region as my primary?;Yes. Your standby is automatically provisioned in a different Availability Zone of the same Region as your DB instance primary. /rds/faqs/;Can I see which Availability Zone my primary is currently located in?;Yes, you can gain visibility into the location of the current primary by using the AWS Management Console or DescribeDBInstances API. /rds/faqs/;After failover, my primary is now located in a different Availability Zone than my other AWS resources (e.g. EC2 instances). Should I be concerned about latency?;Availability Zones are engineered to provide low latency network connectivity to other Availability Zones in the same Region. In addition, you may want to consider architecting your application and other AWS resources with redundancy across multiple Availability Zones so your application will be resilient in the event of an Availability Zone failure. Multi-AZ deployments address this need for the database tier without administration on your part. /rds/faqs/;How do DB Snapshots and automated backups work with my Multi-AZ deployment?;You interact with automated backup and DB Snapshot functionality in the same way whether you are running a standard deployment in a Single-AZ or Multi-AZ deployment. If you are running a Multi-AZ deployment, automated backups and DB Snapshots are simply taken from the standby to avoid I/O suspension on the primary. Please note that you may experience increased I/O latency (typically lasting a few minutes) during backups for both Single-AZ and Multi-AZ deployments. Initiating a restore operation (point-in-time restore or restore from DB Snapshot) also works the same with Multi-AZ deployments as standard, Single-AZ deployments. 
New DB instance deployments can be created with either the RestoreDBInstanceFromSnapshot or RestoreDBInstanceToPointInTime APIs. These new DB instance deployments can be either standard or Multi-AZ, regardless of whether the source backup was initiated on a standard or Multi-AZ deployment. /rds/faqs/;What does it mean to run a DB Instance as a read replica?;Read replicas make it easier to take advantage of supported engines' built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create a read replica with a few clicks in the AWS Management Console or using the CreateDBInstanceReadReplica API. Once the read replica is created, database updates on the source DB instance will be replicated using a supported engine's native, asynchronous replication. You can create multiple read replicas for a given source DB Instance and distribute your application’s read traffic amongst them. Since read replicas use supported engines' built-in replication, they are subject to its strengths and limitations. In particular, updates are applied to your read replica(s) after they occur on the source DB instance, and replication lag can vary significantly. Read replicas can be associated with Multi-AZ deployments to gain read scaling benefits in addition to the enhanced database write availability and data durability provided by Multi-AZ deployments. /rds/faqs/;When would I want to consider using an Amazon RDS read replica?;There are a variety of scenarios where deploying one or more read replicas for a given source DB instance may make sense. Common reasons for deploying a read replica include: Scaling beyond the compute or I/O capacity of a single DB instance for read-heavy database workloads. This excess read traffic can be directed to one or more read replicas. Serving read traffic while the source DB instance is unavailable. If your source DB Instance cannot take I/O requests (e.g. due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your read replica(s). For this use case, keep in mind that the data on the read replica may be “stale” since the source DB Instance is unavailable. Business reporting or data warehousing scenarios. You may want business reporting queries to run against a read replica rather than your primary, production DB Instance. You may use a read replica for disaster recovery of the source DB instance either in the same AWS Region or in another Region. /rds/faqs/;Do I need to enable automatic backups on my DB instance before I can create read replicas?;Yes. Enable automatic backups on your source DB Instance before adding read replicas by setting the backup retention period to a value other than 0. Backups must remain enabled for read replicas to work. /rds/faqs/;Which versions of database engines support Amazon RDS read replicas?;Amazon Aurora: All DB clusters. Amazon RDS for MySQL: All DB instances support creation of read replicas. Automatic backups must be and remain enabled on the source DB instance for read replica operations. Automatic backups on the replica are supported only for Amazon RDS read replicas running MySQL 5.6 and later, not 5.5. Amazon RDS for PostgreSQL: DB instances with PostgreSQL version 9.3.5 or newer support creation of read replicas. Existing PostgreSQL instances prior to version 9.3.5 need to be upgraded to PostgreSQL version 9.3.5 to take advantage of Amazon RDS read replicas. 
Amazon RDS for MariaDB: All DB instances support creation of read replicas. Automatic backups must be and remain enabled on the source DB Instance for read replica operations. Amazon RDS for Oracle: Supported for Oracle version 12.1.0.2.v12 and higher and for all 12.2 versions using the Bring Your Own License model with Oracle Database Enterprise Edition and licensed for the Active Data Guard Option. Amazon RDS for SQL Server: Read replicas are supported on Enterprise Edition in the Multi-AZ configuration when the underlying replication technology is using Always On availability groups for SQL Server versions 2016 and 2017. /rds/faqs/;How do I deploy a read replica for a given DB instance?;You can create a read replica in minutes using the standard CreateDBInstanceReadReplica API or a few steps on the AWS Management Console. When creating a read replica, you can identify it as a read replica by specifying a SourceDBInstanceIdentifier. The SourceDBInstanceIdentifier is the DB Instance Identifier of the “source” DB Instance from which you wish to replicate. As with a standard DB Instance, you can also specify the Availability Zone, DB instance class, and preferred maintenance window. The engine version (e.g., PostgreSQL 9.3.5) and storage allocation of a read replica is inherited from the source DB instance. When you initiate the creation of a read replica, Amazon RDS takes a snapshot of your source DB instance and begins replication. As a result, you will experience a brief I/O suspension on your source DB instance as the snapshot occurs. The I/O suspension typically lasts on the order of one minute and is avoided if the source DB instance is a Multi-AZ deployment (in the case of Multi-AZ deployments, snapshots are taken from the standby). Amazon RDS is also currently working on an optimization (to be released shortly) such that if you create multiple Read Replicas within a 30 minute window, all of them will use the same source snapshot to minimize I/O impact (“catch-up” replication for each Read Replica will begin after creation). /rds/faqs/;How do I connect to my read replica(s)?;You can connect to a read replica just as you would connect to a standard DB instance, using the DescribeDBInstance API or AWS Management Console to retrieve the endpoint(s) for your read replica(s). If you have multiple read replicas, it is up to your application to determine how read traffic will be distributed amongst them. /rds/faqs/;How many read replicas can I create for a given source DB instance?;Amazon RDS for MySQL, MariaDB, and PostgreSQL allow you to create up to 15 read replicas for a given source DB instance. Amazon RDS for Oracle and SQL Server allow you to create up to 5 read replicas for a given source DB instance. /rds/faqs/;Can I create a read replica in an AWS Region different from that of the source DB instance?;Yes, Amazon RDS (except RDS for SQL Server) supports cross-region read replicas. The amount of time between when data is written to the source DB instance and when it is available in the read replica will depend on the network latency between the two regions. /rds/faqs/;Do Amazon RDS read replicas support synchronous replication?;No. Read replicas in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server are implemented using those engines' native asynchronous replication. Amazon Aurora uses a different, but still asynchronous, replication mechanism. 
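A minimal boto3 sketch of creating a read replica and retrieving its endpoint, assuming hypothetical identifiers and placement values, might look like this; how read traffic is then distributed across replicas is left to the application.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of an existing source instance (automated backups must be enabled
# on the source). The engine version and storage allocation are inherited from the source.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydbinstance-replica-1",   # hypothetical identifiers
    SourceDBInstanceIdentifier="mydbinstance",
    DBInstanceClass="db.m4.large",                   # optional; defaults to the source's class
    AvailabilityZone="us-east-1b",                   # optional placement hint
)

# Wait until the replica is available, then read its endpoint to direct read traffic at it.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydbinstance-replica-1")
replica = rds.describe_db_instances(DBInstanceIdentifier="mydbinstance-replica-1")["DBInstances"][0]
print(replica["Endpoint"]["Address"], replica["Endpoint"]["Port"])
```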
/rds/faqs/;Can I use a read replica to enhance database write availability or protect the data on my source DB instance against failure scenarios?;If you are looking to use replication to increase database write availability and protect recent database updates against various failure conditions, we recommend you run your DB instance as a Multi-AZ deployment. With Amazon RDS Read Replicas, which employ supported engines' native, asynchronous replication, database writes occur on a read replica after they have already occurred on the source DB instance, and this replication “lag” can vary significantly. In contrast, the replication used by Multi-AZ deployments is synchronous, meaning that all database writes are concurrent on the primary and standby. This protects your latest database updates, since they should be available on the standby in the event failover is required. In addition, with Multi-AZ deployments replication is fully managed. Amazon RDS automatically monitors for DB instance failure conditions or Availability Zone failure and initiates automatic failover to the standby (or to a read replica, in the case of Amazon Aurora) if an outage occurs. /rds/faqs/;Can I create a read replica with a Multi-AZ DB instance deployment as its source?;Yes. Since Multi-AZ DB instances address a different need than read replicas, it makes sense to use the two in conjunction for production deployments and to associate a read replica with a Multi-AZ DB Instance deployment. The “source” Multi AZ-DB instance provides you with enhanced write availability and data durability, and the associated read replica would improve read traffic scalability. /rds/faqs/;Can I configure my Amazon RDS read replicas themselves Multi-AZ?;Yes. Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle allow you to enable Multi-AZ configuration on read replicas to support disaster recovery and minimize downtime from engine upgrades. /rds/faqs/;If my read replica(s) use a Multi-AZ DB instance deployment as a source, what happens if Multi-AZ failover occurs?;In the event of Multi-AZ failover, any associated and available read replicas will automatically resume replication once failover has completed (acquiring updates from the newly promoted primary). /rds/faqs/;Can I create a read replica of another read replica?;Amazon Aurora, Amazon RDS for MySQL, and MariaDB: You can create three tiers of read replicas. A second-tier read replica from an existing first-tier read replica and a third tier replica from second tier read replicas. By creating a second-tier and third-tier read replica, you may be able to move some of the replication load from the primary database instance to different tiers of Read Replica based on your application needs. Please note that a second-tier Read Replica may lag further behind the primary because of additional replication latency introduced as transactions are replicated from the primary to the first tier replica and then to the second-tier replica. Similarly, the third tier replica may lag behind the second-tier read replica. Amazon RDS for Oracle and Amazon RDS for SQL Server: Read Replicas of Read Replicas are not currently supported. /rds/faqs/;Can my read replicas only accept database read operations?;Read replicas are designed to serve read traffic. However, there may be use cases where advanced users wish to complete Data Definition Language (DDL) SQL statements against a read replica. 
Examples might include adding a database index to a read replica that is used for business reporting without adding the same index to the corresponding source DB instance. Amazon RDS for MySQL can be configured to permit DDL SQL statements against a read replica. If you wish to enable operations other than reads for a given read replica, modify the active DB parameter group for the read replica, setting the “read_only” parameter to “0”. Amazon RDS for PostgreSQL does not currently support the execution of DDL SQL statements against a read replica. /rds/faqs/;Can I promote my read replica into a “standalone” DB Instance?;Yes. Refer to the Amazon RDS User Guide for more details. /rds/faqs/;Will my read replica be kept up-to-date with its source DB instance?;Updates to a source DB instance will automatically be replicated to any associated read replicas. However, with supported engines' asynchronous replication technology, a read replica can fall behind its source DB instance for a variety of reasons. Typical reasons include: write I/O volume to the source DB instance that exceeds the rate at which changes can be applied to the read replica (this problem is particularly likely to arise if the compute capacity of a read replica is less than that of the source DB Instance); complex or long-running transactions on the source DB Instance that hold up replication to the read replica; and network partitions or latency between the source DB instance and a read replica. Read Replicas are subject to the strengths and weaknesses of supported engines' native replication. If you are using Read Replicas, you should be aware of the potential for lag, or “inconsistency,” between a Read Replica and its source DB Instance. /rds/faqs/;How do I see the status of my active read replica(s)?;"You can use the standard DescribeDBInstances API to return a list of all the DB Instances you have deployed (including Read Replicas) or simply click on the ""Instances"" tab of the Amazon RDS Console. Amazon RDS allows you to gain visibility into how far a read replica has fallen behind its source DB instance. The number of seconds that the read replica is behind the primary is published as an Amazon CloudWatch metric (""Replica Lag"") available via the AWS Management Console or Amazon CloudWatch APIs. For Amazon RDS for MySQL, the source of this information is the same as that displayed by issuing a standard ""Show Replica Status"" MySQL command against the read replica. For Amazon RDS for PostgreSQL, you can use the pg_stat_replication view on the source DB instance to explore replication metrics. Amazon RDS monitors the replication status of your Read Replicas and updates the Replication State field in the AWS Management Console to ""Error"" if replication stops for any reason (e.g. attempting DML queries on your replica that conflict with the updates made on the primary database instance could result in a replication error). You can review the details of the associated error thrown by the MySQL engine by viewing the Replication Error field and take appropriate action to recover from it. You can learn more about troubleshooting replication issues in the Troubleshooting a Read Replica Problem section of the User Guide for Amazon RDS for MySQL or PostgreSQL. If a replication error is fixed, the Replication State changes to Replicating." /rds/faqs/;I scaled the compute and/or storage capacity of my source DB instance. 
Should I scale the resources for associated read replica(s) as well?;For replication to work effectively, we recommend that read replicas have as much or more compute and storage resources as their respective source DB instances. Otherwise replication lag is likely to increase or your read replica may run out of space to store replicated updates. /rds/faqs/;How do I delete a read replica? Will it be deleted automatically if its source DB Instance is deleted?;You can delete a read replica with a few steps of the AWS Management Console or by passing its DB Instance identifier to the DeleteDBInstance API. An Amazon Aurora replica will stay active and continue accepting read traffic even after its corresponding source DB Instance has been deleted. One of the replicas in the cluster will automatically be promoted as the new primary and will start accepting write traffic. An Amazon RDS for MySQL or MariaDB read replica will stay active and continue accepting read traffic even after its corresponding source DB instance has been deleted. If you desire to delete the Read Replica in addition to the source DB instance, you must explicitly do so using the DeleteDBInstance API or AWS Management Console. If you delete an Amazon RDS for PostgreSQL DB Instance that has read replicas, all Read Replicas will be promoted to standalone DB Instances and will be able to accept both read and write traffic. The newly promoted DB Instances will operate independently of one another. If you desire to delete these DB Instances in addition to the original source DB Instance, you must explicitly do so using the DeleteDBInstance API or AWS Management Console. /rds/faqs/;How much do read replicas cost? When does billing begin and end?;A read replica is billed as a standard DB Instance and at the same rates. Just like a standard DB instance, the rate per “DB Instance hour” for a read replica is determined by the DB instance class of the read replica – please see pricing page for up-to-date pricing. You are not charged for the data transfer incurred in replicating data between your source DB instance and read replica within the same AWS Region. Billing for a read replica begins as soon as the replica has been successfully created (i.e. when the status is listed as “active”). The read replica will continue being billed at standard Amazon RDS DB instance hour rates until you issue a command to delete it. /rds/faqs/;What is Enhanced Monitoring for Amazon RDS?;Enhanced Monitoring for Amazon RDS gives you deeper visibility into the health of your Amazon RDS instances. Just turn on the “Enhanced Monitoring” option for your Amazon RDS DB Instance and set a granularity and Enhanced Monitoring will collect vital operating system metrics and process information, at the defined granularity. For an even deeper level of diagnostics and visualization of your database load, and a longer data retention period, you can try Performance Insights . /rds/faqs/;Which metrics and processes can I monitor in Enhanced Monitoring?;Enhanced Monitoring captures your Amazon RDS instance system level metrics, such as the CPU, memory, file system, and disk I/O among others. The complete list of metrics can be found in the documentation. /rds/faqs/;Which engines are supported by Enhanced Monitoring?;Enhanced Monitoring supports all Amazon RDS database engines. /rds/faqs/;Which instance types are supported by Enhanced Monitoring?;Enhanced Monitoring supports every instance type except t1.micro and m1.small. 
The software uses a small amount of CPU, memory, and I/O, and for general purpose monitoring, we recommend switching on higher granularities for instances that are medium or larger. For non-production DB Instances, the default setting for Enhanced Monitoring is “off” and you have the choice of leaving it disabled or modifying the granularity when it is on. /rds/faqs/;What information can I view on the Amazon RDS dashboard?;You can view all the system metrics and process information for your Amazon RDS DB Instances in a graphical format on the console. You can manage which metrics you want to monitor for each instance and customize the dashboard according to your requirements. /rds/faqs/;Will all the instances in my Amazon RDS account sample metrics at the same granularity?;No. You can set different granularities for each DB Instance in your Amazon RDS account. You can also choose the instances on which you want to enable Enhanced Monitoring as well as modify the granularity of any instance whenever you want. /rds/faqs/;How far back can I see the historical metrics on the Amazon RDS console?;You can see the performance values for all the metrics up to 1 hour back at a granularity of up to 1 second based on your settings. /rds/faqs/;How can I visualize the metrics generated by Amazon RDS Enhanced Monitoring in CloudWatch?;The metrics from Amazon RDS Enhanced Monitoring are delivered into your CloudWatch Logs account. You can create metrics filters in CloudWatch from CloudWatch Logs and display the graphs on the CloudWatch dashboard. For more details, please visit the Amazon CloudWatch page. /rds/faqs/;When should I use CloudWatch instead of the Amazon RDS console dashboard?;You should use CloudWatch if you want to view historical data beyond what is available on the Amazon RDS console dashboard. You can monitor your Amazon RDS instances in CloudWatch to diagnose the health of your entire AWS stack in a single location. Currently, CloudWatch supports granularities of up to 1 minute and the values will be averaged out for granularities less than that. /rds/faqs/;Can I set up alarms and notifications based on specific metrics?;Yes. You can create an alarm in CloudWatch that sends a notification when the alarm changes state. The alarm watches a single metric over a time period that you specify and performs one or more actions based on the value of the metric relative to the specified threshold over a number of time periods. For more details on CloudWatch alarms, please visit the Amazon CloudWatch Developer Guide. /rds/faqs/;How do I integrate Enhanced Monitoring with my tool that I currently use?;Amazon RDS Enhanced Monitoring provides a set of metrics formed as JSON payloads that are delivered into your CloudWatch Logs account. The JSON payloads are delivered at the granularity last configured for the Amazon RDS instance. There are two ways you can consume the metrics via a third-party dashboard or application. Monitoring tools can use CloudWatch Logs Subscriptions to set up a near real time feed for the metrics. Alternatively, you can use filters in CloudWatch Logs to bridge metrics across to CloudWatch and integrate your application with CloudWatch. Please visit Amazon CloudWatch Documentation for more details. /rds/faqs/;How can I delete historical data?;Since Enhanced Monitoring delivers JSON payloads into a log in your CloudWatch Logs account, you can control its retention period just like any other CloudWatch Logs stream. 
The default retention period configured for Enhanced Monitoring in CloudWatch Logs is 30 days. For details on how to change retention settings, please visit Amazon CloudWatch Developer Guide. /rds/faqs/;What impact does Enhanced Monitoring have on my monthly bills?;Since the metrics are ingested into CloudWatch Logs, your charges will be based on CloudWatch Logs data transfer and storage rates once you exceed CloudWatch Logs free tier. Pricing details can be found here. The amount of information transferred for an Amazon RDS instance is directly proportional to the defined granularity for the Enhanced Monitoring feature. Administrators can set different granularities for different instances in their accounts to manage costs. The approximate volume of data ingested into CloudWatch Logs by Enhanced Monitoring for an instance is as shown below: /rds/faqs/;What is Amazon RDS Proxy?;Amazon RDS Proxy is a fully managed, highly available database proxy feature for Amazon RDS. RDS Proxy makes applications more scalable, more resilient to database failures, and more secure. /rds/faqs/;Why would I use Amazon RDS Proxy?;Amazon RDS Proxy is a fully managed, highly available, and easy-to-use database proxy feature of Amazon RDS that enables your applications to: 1) improve scalability by pooling and sharing database connections, 2) improve availability by reducing database failover times by up to 66% and preserving application connections during failovers, and 3) improve security by optionally enforcing AWS IAM authentication to databases and securely storing credentials in AWS Secrets Manager. /rds/faqs/;What use cases does Amazon RDS Proxy address?;Amazon RDS Proxy addresses a number of use cases related to scalability, availability, and security of your applications, including: Applications with unpredictable workloads: Applications that support highly variable workloads may attempt to open a burst of new database connections. Amazon RDS Proxy’s connection governance allows you to gracefully scale applications dealing with unpredictable workloads by efficiently reusing database connections. First, RDS Proxy enables multiple application connections to share a database connection for efficient use of database resources. Second, RDS Proxy allows you to maintain predictable database performance by regulating the number of database connections that are opened. Third, RDS Proxy removes requests that cannot be served to preserve the overall performance and availability of the application. Applications that frequently open and close database connections: Applications built on technologies such as Serverless, PHP, or Ruby on Rails may open and close database connections frequently to serve application requests. Amazon RDS Proxy maintains a pool of database connections to avoid unnecessary stress on database compute and memory for establishing new connections. Applications that keep connections open but idle: Applications in industries such as SaaS or eCommerce may keep database connections idling to minimize the response time when a customer reengages. Instead of overprovisioning databases to support mostly idling connections, you can use Amazon RDS Proxy to hold idling connections while only establishing database connections as required to optimally serve active requests. Applications requiring availability through transient failures: With Amazon RDS Proxy, you can build applications that can transparently tolerate database failures without needing to write complex failure handling code. 
RDS Proxy automatically routes traffic to a new database instance while preserving application connections. RDS Proxy also bypasses Domain Name System (DNS) caches to reduce failover times by up to 66% for Amazon RDS and Aurora Multi-AZ databases. During database failovers, the application may experience increased latencies and ongoing transactions may have to be retried. Improved security and centralized credentials management: Amazon RDS Proxy aids you in building more secure applications by giving you a choice to enforce IAM-based authentication with relational databases. RDS Proxy also enables you to centrally manage database credentials through AWS Secrets Manager. /rds/faqs/;When should I connect to the database directly versus using Amazon RDS Proxy?;Depending on your workload, Amazon RDS Proxy can add an average of 5 milliseconds of network latency to query or transaction response time. If your application cannot tolerate 5 milliseconds of latency or does not need connection management and other features enabled by RDS Proxy, you may want your application to connect directly to the database endpoint. /rds/faqs/;How will serverless applications benefit from Amazon RDS Proxy?;Amazon RDS Proxy transforms your approach to building modern serverless applications that leverage the power and simplicity of relational databases. First, RDS Proxy enables serverless applications to scale efficiently by pooling and reusing database connections. Second, with RDS Proxy, you no longer need to handle database credentials in your Lambda code. You can use the IAM execution role associated with your Lambda function to authenticate with RDS Proxy and your database. Third, you don’t need to manage any new infrastructure or code to utilize the full potential of serverless applications backed by relational databases. RDS Proxy is fully managed and scales its capacity automatically based on your application demands. /rds/faqs/;Which database engines does Amazon RDS Proxy support?;RDS Proxy is available for Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon RDS for SQL Server. For a list of supported engine versions see the Amazon Aurora User Guide or the Amazon RDS User Guide. /rds/faqs/;How can I enable Amazon RDS Proxy?;You enable Amazon RDS Proxy for your Amazon RDS database with just a few clicks in the Amazon RDS console. While enabling RDS Proxy, you specify the VPC and subnets you want to access RDS Proxy from. As a Lambda user, you can enable Amazon RDS Proxy for your Amazon RDS database and set up a Lambda function to access it with just a few clicks in the Lambda console. /rds/faqs/;Can I access Amazon RDS Proxy using APIs?;Yes. You can use Amazon RDS Proxy APIs to create a proxy and then define target groups to associate the proxy with specific database instances or clusters. For example, you can call the CreateDBProxy API to create a proxy and then use the RegisterDBProxyTargets API to register your database instance or cluster with it. /rds/faqs/;Why should I use Trusted Language Extensions for PostgreSQL?;Trusted Language Extensions (TLE) for PostgreSQL enables developers to build high-performance PostgreSQL extensions and run them safely on Amazon Aurora and Amazon RDS. In doing so, TLE improves your time to market and removes the burden placed on database administrators to certify custom and third-party code for use in production database workloads. You can move forward as soon as you decide an extension meets your needs. 
With TLE, independent software vendors (ISVs) can provide new PostgreSQL extensions to customers running on Aurora and Amazon RDS. /rds/faqs/;What are traditional risks of running extensions in PostgreSQL and how does TLE for PostgreSQL mitigate those risks?;PostgreSQL extensions are executed in the same process space for high performance. However, extensions might have software defects that can crash the database. TLE for PostgreSQL offers multiple layers of protection to mitigate this risk. TLE is designed to limit access to system resources. The rds_superuser role can determine who is permitted to install specific extensions. However, these changes can only be made through the TLE API. TLE is designed to limit the impact of an extension defect to a single database connection. In addition to these safeguards, TLE is designed to provide DBAs in the rds_superuser role with fine-grained, online control over who can install extensions, and lets them create a permissions model for running them. Only users with sufficient privileges will be able to run the “CREATE EXTENSION” command on a TLE extension. DBAs can also allow-list “PostgreSQL hooks” required for more sophisticated extensions that modify the database’s internal behavior and typically require elevated privilege. /rds/faqs/;How does TLE for PostgreSQL relate to/work with other AWS services?;TLE for PostgreSQL is available for Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS for PostgreSQL versions 14.5 and higher. TLE is implemented as a PostgreSQL extension itself and you can activate it from the rds_superuser role similarly to other extensions supported on Aurora and Amazon RDS. /rds/faqs/;How is TLE for PostgreSQL different from extensions available on Amazon Aurora and Amazon RDS today?;Aurora and Amazon RDS support a curated set of over 85 PostgreSQL extensions. AWS manages the security risks for each of these extensions under the AWS shared responsibility model. The extension that implements TLE for PostgreSQL is included in this set. Extensions that you write or that you obtain from third-party sources and install in TLE are considered part of your application code. You are responsible for the security of your applications that use TLE extensions. /rds/faqs/;What are some examples of extensions I could run with TLE for PostgreSQL?;You can build developer functions, such as bitmap compression and differential privacy (such as publicly accessible statistical queries that protect the privacy of individuals). /rds/faqs/;What programming languages can I use to develop TLE for PostgreSQL?;TLE for PostgreSQL currently supports JavaScript, PL/pgSQL, Perl, and SQL. /rds/faqs/;How do I deploy a TLE for PostgreSQL extension?;Once the rds_superuser role activates TLE for PostgreSQL, you can deploy TLE extensions using the SQL CREATE EXTENSION command from any PostgreSQL client, such as psql. This is similar to how you would create a user-defined function written in a procedural language, such as PL/pgSQL or PL/Perl. You can control which users have permission to deploy TLE extensions and use specific extensions. /rds/faqs/;How do TLE for PostgreSQL extensions communicate with the PostgreSQL database?;TLE for PostgreSQL accesses your PostgreSQL database exclusively through the TLE API. The trusted languages supported by TLE include all functions of the PostgreSQL server programming interface (SPI) and support for PostgreSQL hooks, including the check password hook. 
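A minimal sketch of the deployment step described above, assuming a PostgreSQL client library such as psycopg2, an Aurora or RDS for PostgreSQL 14.5+ instance, and a TLE extension named my_tle_ext that has already been registered through the TLE API; the connection details and extension name are placeholders.

import psycopg2

conn = psycopg2.connect(host="mydb.cluster-example.us-east-1.rds.amazonaws.com",
                        dbname="postgres", user="postgres", password="example-password")
conn.autocommit = True
with conn.cursor() as cur:
    # One-time activation of the TLE framework itself (requires the rds_superuser role).
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_tle;")
    # Deploy a previously registered TLE extension exactly as you would any other extension.
    cur.execute("CREATE EXTENSION IF NOT EXISTS my_tle_ext;")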
/rds/faqs/;Where can I learn more about the TLE for PostgreSQL open-source project?;You can learn more about the TLE for PostgreSQL project on the official TLE GitHub page. /rds/faqs/;What is the cost of using Amazon RDS Blue/Green Deployments?;You will incur the same price for running your workloads on green instances as you do for blue instances. The cost of running on blue and green instances includes our current standard pricing for db.instances, cost of storage, cost of read/write I/Os, and any enabled features, such as cost of backups and Amazon RDS Performance Insights. Effectively, you are paying approximately 2x the cost of running your workload's database instances for the lifespan of the blue/green deployment. For example: You have an RDS for MySQL 5.7 database running on two r5.2xlarge db.instances, a primary database instance and a read replica, in the us-east-1 AWS region with a Multi-AZ (MAZ) configuration. Each of the r5.2xlarge db.instances is configured for 20 GiB General Purpose Amazon Elastic Block Storage (EBS). You create a clone of the blue instance topology using Amazon RDS Blue/Green Deployments, run it for 15 days (360 hours), and then delete the blue instances after a successful switchover. The blue instances cost $1,387 for 15 days at an on-demand rate of $1.926/hr (Instance + EBS cost). The total cost to you for using Blue/Green Deployments for those 15 days is $2,774, which is 2x the cost of running blue instances for that time period. /rds/faqs/;"What is the “blue environment” in Amazon RDS Blue/Green Deployments? What is the “green environment”?";In Amazon RDS Blue/Green Deployments, the blue environment is your current production environment. The green environment is your staging environment that will become your new production environment after switchover. /rds/faqs/;How do switchovers work with Amazon RDS Blue/Green Deployments?;When Amazon RDS Blue/Green Deployments initiate a switchover, they block writes to both the blue and green environments until switchover is complete. During switchover, the staging environment, or green environment, catches up with the production system, ensuring data is consistent between the staging and production environment. Once the production and staging environments are in complete sync, Blue/Green Deployments promote the staging environment as the production environment by redirecting traffic to the newly promoted production environment. Blue/Green Deployments are designed to enable writes on the green environment after switchover is complete, ensuring zero data loss during the switchover process. /rds/faqs/;After Amazon RDS Blue/Green Deployments switches over, what happens to my old production environment?;Amazon RDS Blue/Green Deployments do not delete your old production environment. If needed, you can access it for additional validations and performance/regression testing. If you no longer need the old production environment, you can delete it. Standard billing charges apply on old production instances until you delete them. /rds/faqs/;Do Amazon RDS Blue/Green Deployments support Global Databases, Amazon RDS Proxy, cross-Region read replicas, or cascaded read replicas?;No, Amazon RDS Blue/Green Deployments do not support Global Databases, Amazon RDS Proxy, cross-Region read replicas, or cascaded read replicas. /rds/faqs/;Can I use Amazon RDS Blue/Green Deployments to roll back changes?;No, at this time you cannot use Amazon RDS Blue/Green Deployments to roll back changes. 
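A hedged boto3 sketch of the Blue/Green workflow discussed above, using the CreateBlueGreenDeployment and SwitchoverBlueGreenDeployment operations of the RDS API; the deployment name, source ARN, account ID, and target engine version are placeholders, and availability of these operations depends on your engine and version.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the green (staging) environment as a copy of the blue (production) topology.
resp = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="mysql-upgrade-example",
    Source="arn:aws:rds:us-east-1:123456789012:db:myapp-primary",
    TargetEngineVersion="8.0.33",
)
bg_id = resp["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"]

# After validating the green environment, switch production traffic over to it.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=bg_id,
    SwitchoverTimeout=300,  # maximum number of seconds allowed for the switchover
)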
/rds/faqs/;Which RDS for MySQL database versions support Amazon RDS Optimized Writes?;Amazon RDS Optimized Writes are available for RDS for MySQL version 8.0.30 and higher. /rds/faqs/;Is Amazon RDS Optimized Writes supported on Amazon Aurora MySQL-Compatible Edition?;No. Amazon Aurora MySQL-Compatible Edition already avoids the use of the “doublewrite buffer.” Instead, Amazon Aurora replicates data six ways across three Availability Zones (AZs) and uses a quorum-based approach to durably write data and correctly read it thereafter. /rds/faqs/;Can customers convert their existing Amazon RDS databases to use Amazon RDS Optimized Writes?;At this time, this initial release does not support enabling Amazon RDS Optimized Writes for your existing database instances even if the instance class supports Optimized Writes. /rds/faqs/;How much do Amazon RDS Optimized Writes cost?;Amazon RDS Optimized Writes are available to RDS for MySQL customers at no additional cost. /rds/faqs/;How do Amazon RDS Optimized Reads speed up query performance?;Workloads that use temporary objects in MySQL and MariaDB for query processing benefit from Amazon RDS Optimized Reads. Optimized Reads place temporary objects on the database instance's NVMe-based instance storage, instead of the Amazon Elastic Block Store volume. This helps to speed up complex query processing by up to 2X. /rds/faqs/;Which RDS for MySQL and RDS for MariaDB database versions support Amazon RDS Optimized Reads?;Amazon RDS Optimized Reads are available for RDS for MySQL on MySQL versions 8.0.28 and higher and on RDS for MariaDB on MariaDB versions 10.4.25, 10.5.16, 10.6.7 and higher. /rds/faqs/;Which database instance types support Amazon RDS Optimized Reads? In what regions are they available?;Amazon RDS Optimized Reads are available in all regions where db.r5d, db.m5d, db.r6gd, db.m6gd, db.x2idn, and db.x2iedn instances are available. For more information, see the Amazon RDS DB instance classes documentation. /rds/faqs/;When should I use Amazon RDS Optimized Reads?;"Customers should use Amazon RDS Optimized Reads when they have workloads that require complex queries; general purpose analytics; or require intricate groups, sorts, hash aggregations, high-load joins, and Common Table Expressions (CTEs). These use cases result in the creation of temporary tables, allowing Optimized Reads to speed up your workload’s query processing." /rds/faqs/;Can customers convert their existing Amazon RDS databases to use Amazon RDS Optimized Reads?;Yes, customers can convert their existing Amazon RDS database to use Amazon RDS Optimized Reads by moving their workload to an Optimized Read-enabled instance. Optimized Reads are also available by default on all supported instance classes. If you’re running your workload on db.r5d, db.m5d, db.r6gd, db.m6gd, db.x2idn, and db.x2iedn instances, you’re already benefiting from Optimized Reads. /dynamodb/faqs/;What is Amazon DynamoDB?;DynamoDB is a fast and flexible nonrelational database service for any scale. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling. /dynamodb/faqs/;What does DynamoDB manage on my behalf?;DynamoDB takes away one of the main stumbling blocks of scaling databases: the management of database software and the provisioning of the hardware needed to run it. 
You can deploy a nonrelational database in a matter of minutes. DynamoDB automatically scales throughput capacity to meet workload demands, and partitions and repartitions your data as your table size grows. Also, DynamoDB synchronously replicates data across three facilities in an AWS Region, giving you high availability and data durability. /dynamodb/faqs/;What is the consistency model of DynamoDB?;When reading data from DynamoDB, users can specify whether they want the read to be eventually consistent or strongly consistent: Eventually consistent reads (the default) – The eventual consistency option maximizes your read throughput. However, an eventually consistent read might not reflect the results of a recently completed write. All copies of data usually reach consistency within a second. Repeating a read after a short time should return the updated data. Strongly consistent reads — In addition to eventual consistency, DynamoDB also gives you the flexibility and control to request a strongly consistent read if your application, or an element of your application, requires it. A strongly consistent read returns a result that reflects all writes that received a successful response before the read. ACID transactions – DynamoDB transactions provide developers atomicity, consistency, isolation, and durability (ACID) across one or more tables within a single AWS account and region. You can use transactions when building applications that require coordinated inserts, deletes, or updates to multiple items as part of a single logical business operation. /dynamodb/faqs/;What kind of query functionality does DynamoDB support?;DynamoDB supports GET/PUT operations by using a user-defined primary key. The primary key is the only required attribute for items in a table. You specify the primary key when you create a table, and it uniquely identifies each item. DynamoDB also provides flexible querying by letting you query on nonprimary key attributes using global secondary indexes and local secondary indexes. A primary key can be either a single-attribute partition key or a composite partition-sort key. A single-attribute partition key could be, for example, UserID. Such a single attribute partition key would allow you to quickly read and write data for an item associated with a given user ID. DynamoDB indexes a composite partition-sort key as a partition key element and a sort key element. This multipart key maintains a hierarchy between the first and second element values. For example, a composite partition-sort key could be a combination of UserID (partition) and Timestamp (sort). Holding the partition key element constant, you can search across the sort key element to retrieve items. Such searching would allow you to use the Query API to, for example, retrieve all items for a single UserID across a range of time stamps. /dynamodb/faqs/;How do I update and query data items with DynamoDB?;After you have created a table using the DynamoDB console or CreateTable API, you can use the PutItem or BatchWriteItem APIs to insert items. Then, you can use the GetItem, BatchGetItem, or, if composite primary keys are enabled and in use in your table, the Query API to retrieve the items you added to the table. /dynamodb/faqs/;Can DynamoDB be used by applications running on any operating system?;Yes. DynamoDB is a fully managed cloud service that you access via API. Applications running on any operating system (such as Linux, Windows, iOS, Android, Solaris, AIX, and HP-UX) can use DynamoDB. 
We recommend using the AWS SDKs to get started with DynamoDB. /dynamodb/faqs/;How am I charged for my use of DynamoDB?;Each DynamoDB table has provisioned read-throughput and write-throughput associated with it. You are billed by the hour for that throughput capacity. Note that you are charged by the hour for the throughput capacity, whether or not you are sending requests to your table. If you would like to change your table’s provisioned throughput capacity, you can do so using the AWS Management Console, the UpdateTable API, or the PutScalingPolicy API for auto scaling. Also, DynamoDB charges for data storage as well as the standard internet data transfer fees. To learn more about DynamoDB pricing, see the DynamoDB pricing page. /dynamodb/faqs/;What is the maximum throughput I can provision for a single DynamoDB table?;Maximum throughput per DynamoDB table is practically unlimited. For information about the limits in place, see Limits in DynamoDB. If you want to request a limit increase, contact Amazon. /dynamodb/faqs/;What is the minimum throughput I can provision for a single DynamoDB table?;The smallest provisioned throughput you can request is 1 write capacity unit and 1 read capacity unit for both auto scaling and manual throughput provisioning. Such provisioning falls within the free tier which allows for 25 units of write capacity and 25 units of read capacity. The free tier applies at the account level, not the table level. In other words, if you add up the provisioned capacity of all your tables, and if the total capacity is no more than 25 units of write capacity and 25 units of read capacity, your provisioned capacity would fall into the free tier. /dynamodb/faqs/;What are DynamoDB table classes?;DynamoDB offers two table classes designed to help you optimize for cost. The DynamoDB Standard table class is the default, and recommended for the vast majority of workloads. The DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class is optimized for tables that store data that is accessed infrequently, where storage is the dominant cost. Each table is associated with a table class and each table class offers a different pricing for data storage as well as read and write requests. You can select the most cost-effective table class based on your table's storage requirements and data access patterns. /dynamodb/faqs/;When should I use DynamoDB Standard-IA?;DynamoDB Standard-IA helps you reduce your DynamoDB total costs for tables that store infrequently accessed data such as applications’ logs, old social media posts, e-commerce order history, and past gaming achievements. If storage is your dominant table cost— storage cost exceeds 50 percent of the cost of throughput (reads and writes) consistently—then the DynamoDB Standard-IA table class is the most economical choice for you. Learn more about DynamoDB Standard-IA pricing in the DynamoDB pricing page. /dynamodb/faqs/;How do DynamoDB Standard-IA tables work with existing DynamoDB features and integrate with other AWS services?;DynamoDB Standard-IA tables are no different than DynamoDB Standard tables in supporting all existing DynamoDB features including global tables, secondary indexes, on-demand backups, point-in-time recovery (PITR), and Amazon DynamoDB Accelerator (DAX). DynamoDB Standard-IA tables also have built-in integration with other AWS services in the same way as DynamoDB Standard tables. 
For example, you can monitor the performance of your DynamoDB Standard-IA tables using Amazon CloudWatch, use AWS CloudFormation templates to provision and manage your DynamoDB Standard-IA tables, stream your change data records to Amazon Kinesis Data Streams, and export your DynamoDB Standard-IA tables data to Amazon Simple Storage Service (Amazon S3). /elasticache/faqs/;What is Amazon ElastiCache?;Amazon ElastiCache is a fully managed, in-memory caching service compatible with the Redis and Memcached engines. It makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud, so you can improve application performance by retrieving data from fast, managed, in-memory caches instead of relying entirely on slower disk-based databases. /elasticache/faqs/;What is in-memory caching and how does it help my applications?;In-memory caching keeps frequently accessed data in memory rather than on disk, so it can be read with very low latency. Placing a cache such as ElastiCache in front of your database or backend service reduces load on that backend and improves response times for read-heavy or compute-intensive workloads. /elasticache/faqs/;Can I use Amazon ElastiCache for use cases other than caching?;Yes. ElastiCache for Redis is commonly used beyond caching, for example as a session store, for leaderboards and gaming, real-time analytics, queuing, and publish/subscribe messaging. /elasticache/faqs/;Can I use Amazon ElastiCache through AWS CloudFormation?;Yes. AWS CloudFormation provides resource types for ElastiCache clusters, replication groups, subnet groups, and parameter groups, so you can provision and manage ElastiCache resources as part of your CloudFormation templates. /elasticache/faqs/;What does Amazon ElastiCache manage on my behalf?;ElastiCache manages the work involved in operating an in-memory data store, including hardware provisioning, software patching, setup and configuration, monitoring, failure detection and recovery, and (for Redis) backups. /elasticache/faqs/;What are Amazon ElastiCache nodes, shards and clusters?;A node is the smallest building block of an ElastiCache deployment: a fixed-size chunk of secure, network-attached RAM that runs the cache engine and has its own endpoint. A shard (node group) is a group of nodes consisting of one primary and up to five read replicas. A cluster is a collection of one or more shards that is managed as a single resource. /elasticache/faqs/;Which engines does Amazon ElastiCache support?;Amazon ElastiCache supports the Redis and Memcached engines. /elasticache/faqs/;How do I get started with Amazon ElastiCache?;You can create a cluster in minutes using the AWS Management Console, the AWS CLI, or the AWS SDKs. You need an AWS account and, for deployments inside a VPC, a cache subnet group and a security group. /elasticache/faqs/;How do I create a cluster?;You can create a cluster using the “Create” wizard in the ElastiCache console, or programmatically using the CreateCacheCluster API (for Memcached or single-node Redis) or the CreateReplicationGroup API (for Redis with replication), specifying parameters such as the node type, engine, engine version, and number of nodes. /elasticache/faqs/;What Node Types can I select?;ElastiCache offers a range of cache node types optimized for different workloads, including general-purpose (cache.t and cache.m families) and memory-optimized (cache.r family) nodes, as well as data-tiering nodes (cache.r6gd family) for Redis. See the Amazon ElastiCache pricing page for the current list of supported node types. /elasticache/faqs/;How do I access my nodes?;"Navigate to the ""Amazon ElastiCache"" tab. Click on the ""(Number of) Nodes"" link and navigate to the ""Nodes"" tab. Click on the ""Copy Node Endpoint(s)"" button." /elasticache/faqs/;How do I control access to Amazon ElastiCache?;Access to ElastiCache clusters is controlled through your Amazon VPC configuration and security groups. You need to have a VPC set up with at least one subnet. For information on creating Amazon VPC and subnets refer to the Getting Started Guide for Amazon VPC. You need to have a Subnet Group (for Redis or Memcached) defined for your VPC. You need to have a VPC Security Group defined for your VPC (or you can use the default provided). In addition, you should allocate adequately large CIDR blocks to each of your subnets so that there are enough spare IP addresses for Amazon ElastiCache to use during maintenance activities such as cache node replacement. 
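Besides the console steps above, node endpoints can also be retrieved programmatically. A minimal boto3 sketch (the cluster identifier and region are placeholders):

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# ShowCacheNodeInfo=True includes the per-node endpoints in the response.
resp = elasticache.describe_cache_clusters(
    CacheClusterId="my-cache-cluster",
    ShowCacheNodeInfo=True,
)
for node in resp["CacheClusters"][0]["CacheNodes"]:
    endpoint = node["Endpoint"]
    print(node["CacheNodeId"], endpoint["Address"], endpoint["Port"])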
/elasticache/faqs/;Can programs running on servers in my own data center access Amazon ElastiCache?;Yes. Programs running in your own data center can access ElastiCache clusters in your VPC if you have network connectivity to the VPC, for example through AWS Direct Connect or a VPN connection, and the cluster's security group allows the traffic. /elasticache/faqs/;Can programs running on EC2 instances in a VPC access Amazon ElastiCache?;Yes. EC2 instances in the same VPC can access your ElastiCache clusters directly, provided the cluster's security group allows traffic from the instances. Instances in a different VPC can connect through VPC peering or other VPC-to-VPC connectivity. /elasticache/faqs/;What is Amazon Virtual Private Cloud (VPC) and why might I want to use it with Amazon ElastiCache?;Amazon VPC lets you provision a logically isolated section of the AWS Cloud in which you launch resources within a virtual network that you define, including its IP address range, subnets, route tables, and gateways. Running ElastiCache in a VPC gives you control over network access to your cache clusters. /elasticache/faqs/;How do I create an Amazon ElastiCache Cluster in VPC?;You need to have a VPC set up with at least one subnet. For information on creating Amazon VPC and subnets refer to the Getting Started Guide for Amazon VPC. You need to have a Subnet Group (for Redis or Memcached) defined for your VPC. You need to have a VPC Security Group defined for your VPC (or you can use the default provided). In addition, you should allocate adequately large CIDR blocks to each of your subnets so that there are enough spare IP addresses for Amazon ElastiCache to use during maintenance activities such as cache node replacement. /elasticache/faqs/;What does it mean to run a Redis node as a Read Replica?;A read replica is a node that maintains an asynchronously replicated copy of the data on its primary node. Read replicas serve two main purposes: failure handling, because a replica can be promoted to primary if the primary fails, and read scaling, because read traffic can be directed to replicas. /elasticache/faqs/;When would I want to consider using a Redis read replica?;"There are several scenarios where read replicas make sense: Scaling beyond the compute or I/O capacity of a single primary node for read-heavy workloads. This excess read traffic can be directed to one or more read replicas. Serving read traffic while the primary is unavailable. If your primary node cannot take I/O requests (e.g. due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your read replicas. For this use case, keep in mind that the data on the read replica may be “stale” since the primary Instance is unavailable. The read replica can also be used to restart a failed primary warmed up. Data protection scenarios; in the unlikely event of primary node failure, or if the Availability Zone in which your primary node resides becomes unavailable, you can promote a read replica in a different Availability Zone to become the new primary." /elasticache/faqs/;How do I deploy a read replica node for a given primary node?;"You can create a new cache cluster along with read replicas in minutes using the CreateReplicationGroup API or using the “Create” wizard in the Amazon ElastiCache Management Console and selecting “Multi-AZ Replication”. When creating the cluster, specify an identifier, the total number of desired shards in the cluster and read replicas per shard, along with creation parameters such as node type, engine version, etc. You can also specify the Availability Zone for each shard in the cluster. Read replicas are as easy to delete as they are to create; simply use the Amazon ElastiCache Management Console or call the DeleteCacheCluster API (specifying the CacheClusterIdentifier for the read replica you wish to delete)." /elasticache/faqs/;How do I connect to my read replica(s)?;For Redis (cluster mode disabled) clusters, use the individual Node Endpoints for read operations (in the API/CLI these are referred to as Read Endpoints). For Redis (cluster mode enabled) clusters, use the cluster's Configuration Endpoint for all operations. You must use a client that supports Redis Cluster (Redis 3.2 or later). You can still read from individual node endpoints (in the API/CLI these are referred to as Read Endpoints). /elasticache/faqs/;How many read replicas can I create for a given primary node?;You can create up to five (5) read replicas for a given primary node. /elasticache/faqs/;Will my read replica be kept up-to-date with its primary node?;Updates to a primary node are automatically and asynchronously replicated to its read replicas, so a replica can fall behind its primary. Typical reasons include: write I/O volume to the primary cache node that exceeds the rate at which changes can be applied to the read replica, and network partitions or latency between the primary cache node and a read replica. As a result, read replicas may serve slightly stale data. /elasticache/faqs/;Can I create a read replica in a region different from that of my primary?;Direct creation of a read replica in a different region is not supported. You can, however, use the Global Datastore for Redis to work with fully managed, fast, reliable, and secure replication across AWS Regions. Using this feature, you can create cross-Region read replica clusters for ElastiCache for Redis to enable low-latency reads and disaster recovery across AWS Regions. /elasticache/faqs/;What events would cause Amazon ElastiCache to fail over to a read replica?;Amazon ElastiCache will fail over to a read replica in the event of: loss of availability in the primary's Availability Zone, loss of network connectivity to the primary, or compute unit failure on the primary. /elasticache/faqs/;When should I use Multi-AZ?;You should use Multi-AZ when you want increased availability for your ElastiCache for Redis clusters. With Multi-AZ and automatic failover enabled, ElastiCache automatically detects a primary node failure and promotes a read replica in a different Availability Zone, minimizing downtime without manual intervention. For more information about Multi-AZ, see Amazon ElastiCache documentation. 
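A minimal boto3 sketch of creating a Redis replication group with one primary, two read replicas, Multi-AZ, and automatic failover, as described above (the replication group identifier, node type, and engine version are placeholders):

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# One shard with a primary plus two read replicas, spread across Availability Zones.
elasticache.create_replication_group(
    ReplicationGroupId="myapp-redis",
    ReplicationGroupDescription="Redis with read replicas and Multi-AZ",
    Engine="redis",
    EngineVersion="7.0",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=3,              # 1 primary + 2 read replicas
    AutomaticFailoverEnabled=True,   # promote a replica automatically if the primary fails
    MultiAZEnabled=True,
)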
/elasticache/faqs/;What is Backup and Restore?;Backup and Restore is a feature of Amazon ElastiCache for Redis that allows you to create snapshots of your Redis data and use them to restore an existing cluster or seed a new one. All Amazon ElastiCache for Redis instance node types besides t1.micro support backup and restore: Current Generation Cache Nodes: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge, cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge, cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge, cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge, cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge, cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge, cache.t2.medium, cache.t2.small, cache.t2.micro, cache.t3.medium, cache.t3.small, cache.t4g.micro, cache.t4g.small, cache.t4g.medium Current Generation Cache Nodes with data tiering: cache.r6gd.xlarge, cache.r6gd.2xlarge, cache.r6gd.4xlarge, cache.r6gd.8xlarge, cache.r6gd.12xlarge, cache.r6gd.16xlarge Previous Generation Nodes: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge, cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge, cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge, cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge, cache.c1.xlarge /elasticache/faqs/;What is a snapshot?;A snapshot is a point-in-time copy of the data in a Redis cluster or node. It is stored durably in Amazon S3 and can be used to restore data into an existing or new cluster. /elasticache/faqs/;Why would I need snapshots?;Snapshots are useful for archiving data, recovering from accidental data loss or node failure, and warm-starting a new cluster with existing data. /elasticache/faqs/;What can I do with a snapshot?;You can use a snapshot to restore data into a new ElastiCache for Redis cluster, retain it as a backup, or export it to an Amazon S3 bucket that you own so the data can be used outside ElastiCache. /elasticache/faqs/;How does Backup and Restore work?;When you take a snapshot, ElastiCache creates a Redis RDB backup of the selected node and stores it in Amazon S3. Snapshots can be taken manually at any time or automatically during a daily backup window with a configurable retention period, and a snapshot can later be used to seed a new cluster. /elasticache/faqs/;Where are my snapshots stored?;Snapshots are stored in Amazon S3. /elasticache/faqs/;How can I get started using Backup and Restore?;You can enable automatic backups or create a manual snapshot using the ElastiCache console, the AWS CLI, or the ElastiCache API (for example, the CreateSnapshot API). /elasticache/faqs/;How do I specify which Redis cluster and node to backup?;When you create a snapshot, you specify the cluster and node to back up by providing its identifier in the console, the CLI, or the CreateSnapshot API call. For clusters with read replicas, backing up from a replica is recommended to avoid impacting the primary. /elasticache/faqs/;How can I specify when a backup will take place?;For automatic backups, you define a preferred daily backup window when creating or modifying your cluster. Manual snapshots can be initiated at any time. /elasticache/faqs/;What is the performance impact of taking a snapshot?;Snapshots are created using the Redis engine's native backup mechanism and can consume additional memory and CPU on the node while the backup is in progress. To minimize the impact on your workload, we recommend taking snapshots from a read replica and reserving sufficient memory headroom on the node. /elasticache/faqs/;Can I create a snapshot from an Amazon ElastiCache for Redis read replica?;Yes. Creating snapshots from a read replica is supported and is the recommended way to back up a cluster without affecting performance on the primary. /elasticache/faqs/;In what regions is the Backup and Restore feature available?;Backup and Restore is available in all AWS Regions where Amazon ElastiCache for Redis is offered. /elasticache/faqs/;Can I export Amazon ElastiCache for Redis snapshots to an S3 bucket owned by me?;Yes. You can export a snapshot to an Amazon S3 bucket that you own in the same region after granting ElastiCache access to the bucket. /elasticache/faqs/;Can I copy snapshots from one region to another?;You can export a snapshot to an Amazon S3 bucket, copy the exported RDB file to a bucket in another region, and use it to seed a new ElastiCache for Redis cluster in that region. /elasticache/faqs/;I have multiple AWS accounts using Amazon ElastiCache for Redis. 
Can I use Amazon ElastiCache snapshots from one account to warm start an Amazon ElastiCache for Redis cluster in a different one?; All Amazon ElastiCache for Redis instance node types besides t1.micro support backup and restore: Current Generation Cache Nodes: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge, cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge, cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge, cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge, cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge, cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge, cache.t2.medium, cache.t2.small, cache.t2.micro, cache.t3.medium, cache.t3.small, cache.t4g.micro, cache.t4g.small, cache.t4g.medium Current Generation Cache Nodes with data tiering: cache.r6gd.xlarge, cache.r6gd.2xlarge, cache.r6gd.4xlarge, cache.r6gd.8xlarge, cache.r6gd.12xlarge, cache.r6gd.16xlarge Previous Generation Nodes: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge, cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge, cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge, cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge, cache.c1.xlarge /elasticache/faqs/;How much does it cost to use Backup and Restore?; All Amazon ElastiCache for Redis instance node types besides t1.micro support backup and restore: Current Generation Cache Nodes: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge, cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge, cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge, cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge, cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge, cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge, cache.t2.medium, cache.t2.small, cache.t2.micro, cache.t3.medium, cache.t3.small, cache.t4g.micro, cache.t4g.small, cache.t4g.medium Current Generation Cache Nodes with data tiering: cache.r6gd.xlarge, cache.r6gd.2xlarge, cache.r6gd.4xlarge, cache.r6gd.8xlarge, cache.r6gd.12xlarge, cache.r6gd.16xlarge Previous Generation Nodes: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge, cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge, cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge, cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge, cache.c1.xlarge /elasticache/faqs/;What is the retention period?; All Amazon ElastiCache for Redis instance node types besides t1.micro support backup and restore: Current Generation Cache Nodes: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge, cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge, cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge, 
cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge, cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge, cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge, cache.t2.medium, cache.t2.small, cache.t2.micro, cache.t3.medium, cache.t3.small, cache.t4g.micro, cache.t4g.small, cache.t4g.medium Current Generation Cache Nodes with data tiering: cache.r6gd.xlarge, cache.r6gd.2xlarge, cache.r6gd.4xlarge, cache.r6gd.8xlarge, cache.r6gd.12xlarge, cache.r6gd.16xlarge Previous Generation Nodes: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge, cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge, cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge, cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge, cache.c1.xlarge /elasticache/faqs/;How do I manage the retention of my automated snapshots?; All Amazon ElastiCache for Redis instance node types besides t1.micro support backup and restore: Current Generation Cache Nodes: cache.m4.large, cache.m4.xlarge, cache.m4.2xlarge, cache.m4.4xlarge, cache.m4.10xlarge, cache.m5.large, cache.m5.xlarge, cache.m5.2xlarge, cache.m5.4xlarge, cache.m5.12xlarge, cache.m5.24xlarge, cache.m6g.large, cache.m6g.xlarge, cache.m6g.2xlarge, cache.m6g.4xlarge, cache.m6g.8xlarge, cache.m6g.12xlarge, cache.m6g.16xlarge, cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, cache.r4.8xlarge, cache.r4.16xlarge, cache.r5.large, cache.r5.xlarge, cache.r5.2xlarge, cache.r5.4xlarge, cache.r5.12xlarge, cache.r5.24xlarge, cache.r6g.large, cache.r6g.xlarge, cache.r6g.2xlarge, cache.r6g.4xlarge, cache.r6g.8xlarge, cache.r6g.12xlarge, cache.r6g.16xlarge, cache.t2.medium, cache.t2.small, cache.t2.micro, cache.t3.medium, cache.t3.small, cache.t4g.micro, cache.t4g.small, cache.t4g.medium Current Generation Cache Nodes with data tiering: cache.r6gd.xlarge, cache.r6gd.2xlarge, cache.r6gd.4xlarge, cache.r6gd.8xlarge, cache.r6gd.12xlarge, cache.r6gd.16xlarge Previous Generation Nodes: cache.m1.small, cache.m1.medium, cache.m1.large, cache.m1.xlarge, cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge, cache.m3.medium, cache.m3.large, cache.m3.xlarge, cache.m3.2xlarge, cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, cache.r3.8xlarge, cache.c1.xlarge /elasticache/faqs/;What cache nodes types support backup and restore capability?; Yes. /elasticache/faqs/;How can I specify when a backup will take place?;Take a backup right now (through “Create Snapshot” console button or CreateSnapshot API) Set up an automatic daily backup. The backup will take place during your preferred backup window. You can set that up through Creating/Modifying cluster via console or the CreateCacheCluster, ModifyCacheCluster, CreateReplicationGroup or ModifyReplicationGroup API’s. /elasticache/faqs/;How is the engine within ElastiCache for Redis different from open-source Redis?;More usable memory: You can now safely allocate more memory for your application without risking increased swap usage during syncs and snapshots. Improved synchronization: More robust synchronization under heavy load and when recovering from network disconnections. Additionally, syncs are faster as both the primary and replicas no longer use the disk for this operation. 
Smoother failovers: In the event of a failover, your shard now recovers faster as replicas no longer flush their data to do a full re-sync with the primary. /elasticache/faqs/;What does encryption at-rest for Amazon ElastiCache for Redis provide?;Encryption at-rest encrypts data on disk during sync, backup and swap operations, as well as backups stored in Amazon S3. /elasticache/faqs/;What does encryption in-transit for Amazon ElastiCache for Redis provide?; There are no additional costs for using encryption. /elasticache/faqs/;How can I use encryption in-transit, at-rest, and Redis AUTH?; There are no additional costs for using encryption. /elasticache/faqs/;Is there an Amazon ElastiCache for Redis client that I need to use when using encryption in-transit, or at-rest?; There are no additional costs for using encryption. /elasticache/faqs/;Can I enable encryption in-transit and encryption at-rest on my existing Amazon ElastiCache for Redis clusters?; There are no additional costs for using encryption. /elasticache/faqs/;Is there any action needed to renew certificates?; There are no additional costs for using encryption. /elasticache/faqs/;Can I use my certificates for encryption?; There are no additional costs for using encryption. /elasticache/faqs/;Which instance types are supported for encryption in transit and encryption at rest?; There are no additional costs for using encryption. /elasticache/faqs/;Which compliance programs does Amazon ElastiCache for Redis support?;Amazon ElastiCache for Redis Compliance page AWS PCI Compliance page /elasticache/faqs/;Is Amazon ElastiCache for Redis PCI compliant?;Amazon ElastiCache for Redis Compliance page AWS PCI Compliance page /elasticache/faqs/;Is Amazon ElastiCache for Redis HIPAA eligible?;Amazon ElastiCache for Redis Compliance page AWS FedRAMP Compliance page /elasticache/faqs/;What do I have to do to use HIPAA eligible Amazon ElastiCache for Redis?;Amazon ElastiCache for Redis Compliance page AWS FedRAMP Compliance page /elasticache/faqs/;Is Amazon ElastiCache for Redis FedRAMP authorized?;Amazon ElastiCache for Redis Compliance page AWS FedRAMP Compliance page /elasticache/faqs/;How do I select an appropriate Node Type for my application?;Consider two factors: the total memory required for your data to achieve your target cache-hit rate, and the number of nodes required to maintain acceptable application performance without overloading the database backend in the event of node failure(s). /elasticache/faqs/;How does Amazon ElastiCache respond to node failure?;"Amazon ElastiCache will repair the node by acquiring new service resources, and will then redirect the node's existing DNS name to point to the new service resources. For VPC installations, ElastiCache will ensure that both the DNS name and the IP address of the node remain the same when nodes are recovered in case of failure. For non-VPC installations, ElastiCache will ensure that the DNS name of a node is unchanged; however, the underlying IP address of the node can change. If you associated an SNS topic with your cluster, when the new node is configured and ready to be used, Amazon ElastiCache will send an SNS notification to let you know that node recovery occurred. This allows you to optionally arrange for your applications to force the Memcached client library to attempt to reconnect to the repaired nodes. This may be important, as some Memcached libraries will stop using a server (node) indefinitely if they encounter communication errors or timeouts with that server." 
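As a rough illustration of the backup workflow covered in the ElastiCache records above, the following Python sketch uses boto3 to take an on-demand snapshot and configure a daily automatic backup window. The replication group ID, snapshot name, region, and window are placeholder assumptions, not values from this FAQ.
# Illustrative sketch only: take a manual ElastiCache for Redis snapshot and
# configure automatic daily backups. All identifiers below are placeholders.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# On-demand backup, equivalent to the "Create Snapshot" console button / CreateSnapshot API.
elasticache.create_snapshot(
    ReplicationGroupId="my-redis",           # assumed replication group ID
    SnapshotName="my-redis-manual-backup",
)

# Automatic daily backups: keep 7 days of snapshots, taken during a preferred window (UTC).
elasticache.modify_replication_group(
    ReplicationGroupId="my-redis",
    SnapshotRetentionLimit=7,
    SnapshotWindow="05:00-06:00",
    ApplyImmediately=True,
)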
/redshift/faqs/;What is Amazon Redshift?; /redshift/faqs/;What are the top reasons customers choose Amazon Redshift?; /redshift/faqs/;How does Amazon Redshift simplify data warehouse and analytics management?; /redshift/faqs/;What are the deployment options for Amazon Redshift?; /redshift/faqs/;How do I get started with Amazon Redshift?; /redshift/faqs/;How does the performance of Amazon Redshift compare to that of other data warehouses?; /redshift/faqs/;Can I get help to learn more about and onboard to Amazon Redshift?; 
/redshift/faqs/;What is Amazon Redshift managed storage?;Amazon Redshift managed storage is available with serverless and RA3 node types and lets you scale and pay for compute and storage independently so you can size your cluster based only on your compute needs. It automatically uses high-performance SSD-based local storage as tier-1 cache and takes advantage of optimizations such as data block temperature, data block age, and workload patterns to deliver high performance while scaling storage automatically to Amazon S3 when needed without requiring any action. /redshift/faqs/;How do I use Amazon Redshift’s managed storage?;If you are already using Amazon Redshift Dense Storage or Dense Compute nodes, you can use Elastic Resize to upgrade your existing clusters to the new compute instance RA3. Amazon Redshift Serverless and clusters using the RA3 instance automatically use Redshift-managed storage to store data. No other action outside of using Amazon Redshift Serverless or RA3 instances is required to use this capability. /redshift/faqs/;How can I run queries from Redshift for the data stored in the AWS Data Lake?; /redshift/faqs/;When should I consider using RA3 instances?;You need the flexibility to scale and pay for compute separate from storage. You query a fraction of your total data. Your data volume is growing rapidly or is expected to grow rapidly. You want the flexibility to size the cluster based only on your performance needs. /redshift/faqs/;What feature can I use for location-based analytics?; Amazon Athena and Amazon Redshift Serverless address different needs and use cases even if both services are serverless and enable SQL users. With its Massively Parallel Processing (MPP) architecture that separates storage and compute and machine learning led automatic optimization capabilities, a data warehouse like Amazon Redshift, whether it's serverless or provisioned, is a great choice for customers that need the best price performance at any scale for complex BI and analytics workloads. Customers can use Amazon Redshift as a central component of their data architecture with deep integrations available to access data in place or ingest or move data easily into the warehouse for high performance analytics, through ZeroETL and no-code methods. Customers can access data stored in Amazon S3, operational databases like Aurora and Amazon RDS, third party data warehouses through the integration with AWS Data Exchange, and combine with data stored in the Amazon Redshift data warehouse for analytics. They can get data warehousing started easily and conduct machine learning on top of all this data. 
Amazon Athena is well suited for interactive analytics and exploration of data in your data lake or any data source through an extensible connector framework (including 30-plus out-of-the-box connectors for applications and on-premises or other cloud analytics systems) without worrying about ingesting or processing data. Amazon Athena is built on open-source engines and frameworks such as Spark, Presto, and Apache Iceberg, giving customers the flexibility to use Python or SQL or work on open data formats. If customers want to do interactive analytics using open-source frameworks and data formats, Amazon Athena is a great place to start. /redshift/faqs/;What are the use cases for data sharing?;A central ETL cluster sharing data with many BI/analytics clusters to provide read workload isolation and optional charge-ability. A data provider sharing data with external consumers. Sharing common datasets such as customers and products across different business groups and collaborating for broad analytics and data science. Decentralizing a data warehouse to simplify management. Sharing data between development, test, and production environments. Accessing Redshift data from other AWS analytic services. /redshift/faqs/;How does Amazon Redshift keep my data secure?; AWS Lambda user-defined functions (UDFs) enable you to use an AWS Lambda function as a UDF in Amazon Redshift and invoke it from Redshift SQL queries. This functionality enables you to write custom extensions for your SQL query to achieve tighter integration with other services or third-party products. You can write Lambda UDFs to enable external tokenization, data masking, and identification or de-identification of data by integrating with vendors like Protegrity, and protect or unprotect sensitive data based on a user’s permissions and groups at query time. /redshift/faqs/;Does Amazon Redshift support data masking or data tokenization?;With support for dynamic data masking, customers can easily protect their sensitive data and control granular access by managing data masking policies. Suppose you have applications with multiple users and objects containing sensitive data that cannot be exposed to all users, and you need to provide different levels of granular access to different groups of users. Redshift dynamic data masking is configurable to allow customers to define consistent, format-preserving, and irreversible masked data values. Once the feature is GA, you can begin using it immediately. Security admins can create and apply policies with just a few commands. /neptune/faqs/;What is Amazon Neptune?;Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. SQL queries for highly connected data are complex and hard to tune for performance. Instead, with Amazon Neptune you can use open and popular graph query languages to execute powerful queries that are easy to write and perform well on connected data. The core of Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. You can use Neptune for graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. Amazon Neptune is fully managed and handles time-consuming tasks such as provisioning, patching, backup, recovery, failure detection and repair. 
You pay a simple monthly charge for each Amazon Neptune database instance you use. There are no upfront costs or long-term commitments required. /neptune/faqs/;What popular graph query languages does Amazon Neptune support?;Amazon Neptune supports both the open source Apache TinkerPop Gremlin graph traversal language and the W3C standard Resource Description Framework’s (RDF) SPARQL query language. /neptune/faqs/;How can I migrate from an existing Apache TinkerPop Gremlin application to Amazon Neptune?;Apache TinkerPop Gremlin Server that supports both Websocket and REST connections. Once you provision an instance of Amazon Neptune, you can configure your existing TinkerPop application to use the endpoint provided by the service. See also Accessing the Graph via Gremlin. /neptune/faqs/;Do I need to change client drivers to use Amazon Neptune’s Gremlin Server?;Amazon Neptune provides an HTTP REST endpoint that implements the SPARQL 1.1 Protocol. Once you provision a service instance, you can configure your application to point to the SPARQL endpoint. See also Accessing the Graph via SPARQL. /neptune/faqs/;Do I need to change client drivers to use Amazon Neptune’s SPARQL Endpoint?;Yes. Please see the Amazon Neptune SLA. /neptune/faqs/;Is Neptune ACID (Atomicity, Consistency, Isolation, Durability) compliant?;Amazon Neptune is designed to support graph applications that require high throughput and low latency graph queries. With support for up to 15 read replicas, Amazon Neptune can support 100,000s of queries per second. /neptune/faqs/;Why are Amazon RDS permissions and resources required to use Amazon Neptune?;No, Amazon Neptune is a purpose-built, high-performance graph database engine. Neptune efficiently stores and navigates graph data, and uses a scale-up, in-memory optimized architecture to allow for fast query evaluation over large graphs. /neptune/faqs/;Does Amazon Neptune have a service level agreement (SLA)?;Please see our pricing page for current pricing information. /neptune/faqs/;What types of graph query workloads are optimized to work with Amazon Neptune?;Please see our pricing page for current information on regions and prices. /neptune/faqs/;Does Amazon Neptune perform query optimization?;. /neptune/faqs/;Is Amazon Neptune built on a relational database?;The minimum storage is 10GB. Based on your database usage, your Amazon Neptune storage will automatically grow, up to 64 TB, in 10GB increments with no impact to database performance. There is no need to provision storage in advance. /neptune/faqs/;How much does Amazon Neptune cost?;"When you modify your DB Instance class, your requested changes will be applied during your specified maintenance window. Alternatively, you can use the ""Apply Immediately"" flag to apply your scaling requests immediately. Both of these options will have an availability impact for a few minutes as the scaling operation is performed. Bear in mind that any other pending system changes will also be applied." /neptune/faqs/;In which AWS regions is Amazon Neptune available?;Automated backups are always enabled on Amazon Neptune DB Instances. Backups do not impact database performance. /neptune/faqs/;What are IOs in Amazon Neptune and how are they calculated?;Amazon Neptune automatically divides your database volume into 10GB segments spread across many disks. Each 10GB chunk of your database volume is replicated six ways, across three Availability Zones. 
Amazon Neptune is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Neptune storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically. /neptune/faqs/;How do I scale the compute resources associated with my Amazon Neptune DB Instance?;Amazon Neptune supports Read Replicas, which share the same underlying volume as the primary instance. Updates made by the primary are visible to all Amazon Neptune Replicas. Amazon Neptune Replicas support up to 15 replicas per cluster, asynchronous replication (millisecond lag), low performance impact on the primary, use as a failover target with no data loss, and automated failover. /neptune/faqs/;How do I enable backups for my DB Instance?;No, Amazon Neptune does not support cross-region replicas. /neptune/faqs/;Can I take DB Snapshots and keep them around as long as I want?;Yes. You can assign a promotion priority tier to each instance on your cluster. When the primary instance fails, Amazon Neptune will promote the replica with the highest priority to primary. If there is contention between two or more replicas in the same priority tier, then Amazon Neptune will promote the replica that is the same size as the primary instance. /neptune/faqs/;If my database fails, what is my recovery path?;You can modify the priority tier for an instance at any time. Simply modifying priority tiers will not trigger a failover. /neptune/faqs/;What happens to my automated backups and DB Snapshots if I delete my DB Instance?;You can assign lower priority tiers to replicas that you don’t want promoted to the primary instance. However, if the higher priority replicas on the cluster are unhealthy or unavailable for some reason, then Amazon Neptune will promote the lower priority replica. /neptune/faqs/;Can I share my snapshots with another AWS account?;Failover is automatically handled by Amazon Neptune so that your applications can resume database operations as quickly as possible without manual administrative intervention. If you have an Amazon Neptune Replica, in the same or a different Availability Zone, when failing over, Amazon Neptune flips the canonical name record (CNAME) for your DB primary endpoint to a healthy replica, which in turn is promoted to become the new primary. Start-to-finish, failover typically completes within 30 seconds. Additionally, the read replica endpoint doesn't require any CNAME updates during failover. If you do not have an Amazon Neptune Replica (i.e., a single instance), Neptune will first attempt to create a new DB Instance in the same Availability Zone as the original instance. If unable to do so, Neptune will attempt to create a new DB Instance in a different Availability Zone. From start to finish, failover typically completes in under 15 minutes. Your application should retry database requests in the event of connection loss. /neptune/faqs/;Will I be billed for shared snapshots?;Amazon Neptune will automatically detect a problem with your primary instance and begin routing your read/write traffic to an Amazon Neptune Replica. On average, this failover will complete within 30 seconds. In addition, the read traffic that your Amazon Neptune Replicas were serving will be briefly interrupted. /neptune/faqs/;Can I automatically share snapshots?;Since Amazon Neptune Replicas share the same data volume as the primary instance, there is virtually no replication lag. 
We typically observe lag times in the 10s of milliseconds. /neptune/faqs/;How many accounts can I share snapshots with?;Yes, all Amazon Neptune DB Instances must be created in a VPC. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you might operate in your own datacenter. This gives you complete control over who can access your Amazon Neptune databases. /dms/faqs/;What is AWS Database Migration Service?;AWS Database Migration Service (AWS DMS) is a managed migration and replication service that helps you move your databases and analytics workloads to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. /dms/faqs/;How do I get started with AWS Database Migration Service?;Getting started with AWS Database Migration Service is quick and simple. Most data replication tasks can be set up in less than 10 minutes. /dms/faqs/;How much does AWS DMS cost?;AWS DMS is an affordable, low-cost option to migrate your databases and analytics workloads. You pay only for the replication instances and any additional log storage. Data transfer is free. You can find full pricing details on the DMS pricing page. /dms/faqs/;How much does AWS DMS Schema Conversion cost?;AWS DMS Schema Conversion is free to use as a part of DMS. Pay only for the storage used. /dms/faqs/;What are the database migration steps when using AWS Database Migration Service?;During a typical simple database migration, you will create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switchover of your production environment to the new database once the target database is caught up with the source database. /dms/faqs/;Is the database migration process using AWS DMS different for continuous data replication?;The only difference is in the last step (the production environment switchover), which is absent for continuous data replication. Your data replication task will run until you change or terminate it. /dms/faqs/;Can I monitor the progress of a database migration task?;Yes. AWS Database Migration Service has a variety of metrics displayed in the AWS Management Console. It provides an end-to-end view of the data replication process, including diagnostic and performance data for each point in the replication pipeline. /dms/faqs/;How do I integrate AWS Database Migration Service with other applications?;AWS Database Migration Service provides a provisioning API that allows creating a replication task directly from your development environment, or scripting their creation at scheduled times during the day. /dms/faqs/;What source databases and target databases does AWS Database Migration Service support?;AWS Database Migration Service (DMS) supports a range of homogeneous and heterogeneous data replications. Either the source or the target database (or both) need to reside in RDS or on EC2. Replication between on-premises to on-premises databases is not supported. 
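To make the replication workflow described in the DMS records above concrete, here is a minimal boto3 sketch that creates a full-load-plus-CDC replication task and starts it once it is ready. It assumes source/target endpoints and a replication instance already exist; the ARNs and identifiers are placeholders, not values from this FAQ.
# Minimal sketch of the DMS provisioning API described above. ARNs are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Migrate existing data and replicate ongoing changes (continuous data replication).
task = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
task_arn = task["ReplicationTask"]["ReplicationTaskArn"]

# Wait until the task is ready, then kick off the full load and change data capture.
dms.get_waiter("replication_task_ready").wait(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)
dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType="start-replication",
)

# The same progress and status shown in the console are available from the API.
print(dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)["ReplicationTasks"][0]["Status"])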
/dms/faqs/;Will AWS Database Migration Service help me convert my Oracle PL/SQL and SQL Server T-SQL code to Amazon RDS for MySQL and Amazon RDS for PostgreSQL stored procedures?;Yes, part of the AWS Database Migration Service is AWS DMS Schema Conversion (DMS SC) which automates the conversion of Oracle PL/SQL and SQL Server T-SQL code to equivalent code in the Amazon RDS for MySQL dialect of SQL or the equivalent PL/pgSQL code in PostgreSQL. When a code fragment cannot be automatically converted to the target language, DMS SC will clearly document the locations that require manual input from the application developer. A downloadable version, called AWS Schema Conversion Tool (AWS SCT), is also available. /dms/faqs/;Does AWS Database Migration Service migrate the database schema for me?;Yes, when you need to use a more customizable schema migration process (for example, when you are migrating your production database and need to move your stored procedures and secondary database objects), you can use the built-in Schema Conversion feature of AWS DMS for heterogeneous migrations. Alternative options include downloading AWS Schema Conversion Tool or using the schema export tools native to the source engine, if you are doing homogeneous migrations such as: /dms/faqs/;How are AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) related?;AWS DMS and AWS SCT work in conjunction to both migrate databases and support ongoing replication for a variety of uses such as populating data lakes and warehouses, synchronizing systems, and so on. AWS SCT can copy database schemas for homogeneous migrations and convert them for heterogeneous migrations. The schemas can be between databases (for example, Oracle to PostgreSQL), or between data warehouses (for example, Netezza to Amazon Redshift). /dms/faqs/;In addition to one-time data migration, can I use AWS Database Migration Service for continuous data replication?;Yes, you can use AWS Database Migration Service for one-time data migration into RDS and EC2-based databases as well as for continuous data replication. AWS Database Migration Service will capture changes on the source database and apply them in a transactionally consistent way to the target. /dms/faqs/;Why should I use AWS Database Migration Service instead of my own self-managed replication solution?;AWS Database Migration Service is very simple to use. Replication tasks can be set up in minutes instead of hours or days, compared to the self-managed replication solutions that have to be installed and configured. AWS Database Migration Service monitors for replication tasks, network or host failures, and automatically provisions a host replacement in case of failures that can’t be repaired. Users of AWS Database Migration Service don’t have to overprovision capacity and invest in expensive hardware and replication software, as they typically have to do with self-managed solutions. /dms/faqs/;Can I replicate data from encrypted data sources?;Yes, AWS Database Migration Service can read and write from and to encrypted databases. AWS Database Migration Service connects to your database endpoints on the SQL interface layer. If you use the Transparent Data Encryption features of Oracle or SQL Server, AWS Database Migration Service will be able to extract decrypted data from such sources and replicate it to the target. /dms/faqs/;What is AWS DMS Fleet Advisor?;AWS DMS Fleet Advisor is a free, fully managed capability of AWS Database Migration Service (AWS DMS). 
It automates migration planning and helps you migrate database and analytics fleets to the cloud at scale with minimal effort. /dms/faqs/;When should I use AWS DMS Fleet Advisor versus AWS Application Discovery Service and Migration Evaluator?;AWS DMS Fleet Advisor is intended for users looking to migrate a large number of database and analytics servers to AWS. When you are ready to migrate your database and analytics workloads to target services in AWS, you should use AWS DMS Fleet Advisor to discover and analyze your Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) database workloads. Fleet Advisor allows you to build a customized migration plan by determining the complexity of migrating your source databases to target services in AWS. /dms/faqs/;When should I use AWS DMS Fleet Advisor in conjunction with AWS Application Discovery Service and Migration Evaluator?;Migration Evaluator and AWS Application Discovery Service help you gain early insights into the inventory portfolio of your entire on-premises data center. When you are ready to perform a deeper analysis of your database and analytics workloads to determine the migration paths in AWS, you should use AWS DMS Fleet Advisor to create a database migration plan to AWS. /dms/faqs/;What is the AWS DMS support lifecycle policy?;The AWS DMS support lifecycle policy specifies how long support will be available for each DMS version, from when a version is released to when it is no longer supported. /dms/faqs/;What is the purpose of the support lifecycle policy?;The support lifecycle policy aims to provide predictable and consistent guidelines for support for each AWS DMS version release. The guidelines help customers strategically plan their migrations and upgrades. /dms/faqs/;What are the support timelines for AWS DMS releases?;Below is a summary of the support timelines for all AWS DMS releases. This table will be updated as we release new DMS versions. The end of support date will start 18 months after each version release, with the exception of a longer support period for versions launched prior to 2022 (3.3.3, 3.3.4, 3.4.2, 3.4.3 and 3.4.4). /dms/faqs/;How are the timelines communicated?;Support timelines for each AWS DMS version release will be included in the associated DMS Release Notes. In addition, AWS will send DMS instance owners a quarterly reminder if they are running a release that will no longer be supported in the following quarter. /dms/faqs/;What is a preferred DMS version?;The DMS service designates one of the newest releases of DMS as the preferred version. 
This preferred version is the version that will be used for automatic upgrades and is the default choice for customers creating a new DMS instance. /dms/faqs/;How do you define the latest preferred AWS DMS version?;New DMS versions are only released after extensive testing. After the release of a new version, the DMS service team closely monitors reliability metrics and customer feedback. Once we are confident that there are no significant issues with the new release, we will mark that release as the new preferred version which you can find when selecting the version upon creation of the replication instance. /dms/faqs/;Is the support policy term the same for major and minor version of DMS?;AWS DMS does not differentiate between a major and minor version release, and does not plan to have a different support policy. /dms/faqs/;Will AWS DMS automatically update my instance to latest preferred version?;If you enable auto upgrade, your replication instance will be automatically updated to the latest preferred version as it becomes available. If you opt out of the auto-upgrade, AWS DMS will update your instances to the latest preferred version once the end-of-life date has been reached, which will be communicated via email and console notification prior to upgrade. You can learn more about how to upgrade the DMS engine version using the AWS Console or AWS CLI in this DMS User Guide. /dms/faqs/;How do I enable auto-upgrade?;The auto upgrade setting in your replication instance is turned on by default. To check or make any modification to this setting using AWS CLI, DMS API, or console you can use this Modifying a Replication Instance guide. /dms/faqs/;I have instances on a version that is not covered in support. How does this affect my existing instances and jobs? What do you recommend as next steps?;After the end of life date for a replication instance version has passed, AWS DMS may remove the release version from the console and upgrade your replication instance to the latest preferred version in order to continue providing support. We recommend you to upgrade to the latest AWS DMS release as soon as possible. /dms/faqs/;Who can I reach out to if I need more information?;You can reach out to AWS Developer Support for more information. /documentdb/faqs/;What is Amazon DocumentDB (with MongoDB compatibility)?;Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed enterprise document database service that supports native JSON workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data. Developers can use the same MongoDB application code, drivers, and tools as they do today to run, manage, and scale workloads on Amazon DocumentDB. Enjoy improved performance, scalability, and availability without worrying about managing the underlying infrastructure. Customers can use AWS Database Migration Service (DMS) to easily migrate their on-premises or Amazon Elastic Compute Cloud (EC2) MongoDB non-relational databases to Amazon DocumentDB with virtually no downtime. There are no upfront investments required to use Amazon DocumentDB, and customers only pay for the capacity they use. /documentdb/faqs/;What use cases are well-suited for a document database like Amazon DocumentDB?;Document-oriented databases are one of the fastest growing categories of noSQL databases, with the primary reason being that document databases offer both flexible schemas and extensive query capabilities. 
The document model is a great choice for use cases with dynamic datasets that require ad-hoc querying, indexing, and aggregations. With the scale that Amazon DocumentDB provides, it is used by a wide variety of customers for use cases such as content management, personalization, catalogs, mobile and web applications, IoT, and profile management. /documentdb/faqs/;"What does ""MongoDB-compatible"" mean?";“MongoDB compatible” means that Amazon DocumentDB interacts with the Apache 2.0 open source MongoDB 3.6, 4.0, and 5.0 APIs. As a result, you can use the same MongoDB drivers, applications, and tools with Amazon DocumentDB with little or no changes. While Amazon DocumentDB supports a vast majority of the MongoDB APIs that customers actually use, it does not support every MongoDB API. Our focus has been to deliver the capabilities that customer actually use and need. Since launch, we have continued to work backwards from customers and have delivered an additional 80+ capabilities, including MongoDB 4.0 and 5.0 compatibility, transactions, and sharding. To learn more about the supported MongoDB APIs, see the compatibility documentation. To learn more about recent Amazon DocumentDB launches, see “Amazon DocumentDB Announcements” on the Amazon DocumentDB resources page. /documentdb/faqs/;Is Amazon DocumentDB restricted by the MongoDB SSPL license?;No. Amazon DocumentDB does not utilize any MongoDB SSPL code and thus is not restricted by this license. Instead, Amazon DocumentDB interacts with the Apache 2.0 open-source MongoDB 3.6, 4.0, and 5.0 APIs. We will continue to listen and work backward from our customers to deliver the capabilities that they need. To learn more about the supported MongoDB APIs, see the compatibility documentation. To learn more about recent Amazon DocumentDB launches, see “Amazon DocumentDB Announcements” on the Amazon DocumentDB resources page. /documentdb/faqs/;How can I migrate data from an existing MongoDB database to Amazon DocumentDB?;Customers can use AWS Database Migration Service (DMS) to easily migrate their on-premises or Amazon Elastic Compute Cloud (EC2) MongoDB databases to Amazon DocumentDB with virtually no downtime. With DMS, you can migrate from a MongoDB replica set or from a sharded cluster to Amazon DocumentDB. Additionally, you can use most existing tools to migrate data from a MongoDB database to Amazon DocumentDB, including mongodump/mongorestore, mongoexport/mongoimport, and third-party tools that support Change Data Capture (CDC) via the oplog. For more information, see Migrating to Amazon DocumentDB. /documentdb/faqs/;Do I need to change client drivers to use Amazon DocumentDB?;No, Amazon DocumentDB works with a vast majority of MongoDB drivers compatible with MongoDB 3.4+. /documentdb/faqs/;Does Amazon DocumentDB support ACID transactions?;Yes. With the launch of support for MongoDB 4.0 compatibility, Amazon DocumentDB supports the ability to perform atomicity, consistency, isolation, durability (ACID) transactions across multiple documents, statements, collections, and databases. /documentdb/faqs/;Is Amazon DocumentDB subject to MongoDB's end of life (EOL) schedule?;No, Amazon DocumentDB does not follow the same support lifecycles as MongoDB and MongoDB's EOL schedule does not apply to Amazon DocumentDB. 
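As an illustration of the compatibility and ACID transaction answers above, here is a minimal pymongo sketch that connects to a cluster with the standard MongoDB driver and runs a multi-document transaction. The endpoint, credentials, database, and collection names are placeholders, and the connection options shown are commonly used TLS settings rather than values taken from this FAQ.
# Illustrative pymongo sketch: connect to a DocumentDB cluster with the stock MongoDB
# driver and run a multi-document ACID transaction. All identifiers are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://myuser:mypassword@mycluster.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0"
    "&readPreference=secondaryPreferred&retryWrites=false"
)
db = client["bank"]

# Transfer funds atomically across two documents in a single transaction.
with client.start_session() as session:
    with session.start_transaction():
        db.accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -100}}, session=session)
        db.accounts.update_one({"_id": "bob"}, {"$inc": {"balance": 100}}, session=session)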
/documentdb/faqs/;How do I access my Amazon DocumentDB cluster?;Amazon DocumentDB clusters are deployed within a customer's Amazon Virtual Private Cloud (VPC) and can be accessed directly by Amazon Elastic Compute Cloud (EC2) instances or other AWS services that are deployed in the same VPC. Additionally, Amazon DocumentDB can be accessed by Amazon EC2 instances or other AWS services in different VPCs in the same region or other regions via VPC peering. Access to Amazon DocumentDB clusters must be done through the mongo shell or with MongoDB drivers. Amazon DocumentDB requires that you authenticate when connecting to a cluster. For additional options, see Connecting to an Amazon DocumentDB Cluster from Outside an Amazon VPC. /documentdb/faqs/;Why are Amazon RDS permissions and resources required to use Amazon DocumentDB?;"For certain management features such as instance lifecycle management, encryption-at-rest with AWS Key Management Service (KMS) keys, and security group management, Amazon DocumentDB leverages operational technology that is shared with Amazon Relational Database Service (RDS) and Amazon Neptune. When using the describe-db-instances and describe-db-clusters AWS CLI APIs, we recommend filtering for Amazon DocumentDB resources using the following parameter: ""--filter Name=engine,Values=docdb""." /documentdb/faqs/;What instance types does Amazon DocumentDB offer?;Please see the Amazon DocumentDB pricing page for current information on available instance types per region. /documentdb/faqs/;How do I try Amazon DocumentDB?;To try Amazon DocumentDB, please see the Getting Started guide. /documentdb/faqs/;Does Amazon DocumentDB have an SLA?;Yes. For more information, please see Amazon DocumentDB (with MongoDB compatibility) Service Level Agreement. /documentdb/faqs/;What type of performance can I expect from Amazon DocumentDB?;When writing to storage, Amazon DocumentDB only persists write-ahead logs, and does not need to write full buffer page syncs. As a result of this optimization, which does not compromise durability, Amazon DocumentDB writes are typically faster than traditional databases. Amazon DocumentDB clusters can scale out to millions of reads per second with up to 15 read replicas. /documentdb/faqs/;How much does Amazon DocumentDB cost and in which AWS regions is Amazon DocumentDB available?;Please see the Amazon DocumentDB pricing page for current information on regions and prices. /documentdb/faqs/;Does Amazon DocumentDB have a free tier and can you get started for free?;Yes, you can try Amazon DocumentDB for free using the 1-month free trial. If you have not used Amazon DocumentDB before, you are eligible for a one month free trial. Your organization gets 750 hours per month of t3.medium instance usage, 30 million IOs, 5 GB of storage, and 5 GB of backup storage for free for 30 days. Once your one month free trial expires or your usage exceeds the free allowance, you can shut down your cluster to avoid any charges, or keep it running at our standard on-demand rates. To learn more, refer to the DocumentDB free trial page. /documentdb/faqs/;What is Amazon DocumentDB Elastic Clusters?;Amazon DocumentDB Elastic Clusters enables you to elastically scale your document database to handle millions of writes and reads, with petabytes of storage capacity. Elastic Clusters simplifies how customers interact with Amazon DocumentDB by automatically managing the underlying infrastructure and removing the need to create, remove, upgrade, or scale instances. 
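The engine filter recommended in the records above (--filter Name=engine,Values=docdb) has a direct boto3 equivalent; the sketch below lists only DocumentDB clusters in a region. The region is a placeholder assumption.
# List only Amazon DocumentDB clusters, mirroring the recommended
# --filter Name=engine,Values=docdb CLI parameter described above.
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

clusters = docdb.describe_db_clusters(
    Filters=[{"Name": "engine", "Values": ["docdb"]}]
)["DBClusters"]

for cluster in clusters:
    print(cluster["DBClusterIdentifier"], cluster["Status"], cluster["Endpoint"])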
/documentdb/faqs/;How do I get started with Elastic Clusters?;You can create an Elastic Clusters cluster using the Amazon DocumentDB API, SDK, CLI, CloudFormation (CFN), or the AWS console. When provisioning your cluster, you specify how many shards and the compute per shard that your workload needs. Once you have created your cluster, you are ready to start leveraging Elastic Clusters’ elastic scalability. Now, you can connect to the Elastic Clusters cluster and read or write data from your application. Elastic Clusters is elastic. Depending on your workload’s needs, you can add or remove compute by modifying your shard count and/or compute per shard using the AWS console, API, CLI, or SDK. Elastic Clusters will automatically provision/de-provision the underlying infrastructure and rebalance your data. /documentdb/faqs/;How does Elastic Clusters work?;Elastic Clusters uses sharding to partition data across Amazon DocumentDB’s distributed storage system. Sharding, also known as partitioning, splits large data sets into small data sets across multiple nodes enabling customers to scale out their database beyond vertical scaling limits of a single database. Elastic Clusters utilizes the separation of compute and storage in Amazon DocumentDB. Rather than re-partitioning collections by moving small chunks of data between compute nodes, Elastic Clusters can copy data efficiently within the distributed storage system. /documentdb/faqs/;What types of sharding does Elastic Clusters support?;Elastic Clusters supports hash-based partitioning. /documentdb/faqs/;How is Elastic Clusters different from MongoDB sharding?;With Elastic Clusters, you can easily scale out or scale in your workload on Amazon DocumentDB typically with little to no application downtime or impact to performance regardless of data size. A similar operation on MongoDB would impact application performance and take hours, and in some cases days. Elastic Clusters also offers differentiated management capabilities such as no impact backups and rapid point in time restore enabling customers to focus more time on their applications rather than managing their database. /documentdb/faqs/;Do I need to make any changes to my application to use Elastic Clusters?;No. You do not need to make any changes to your application to use Elastic Clusters. /documentdb/faqs/;Can I convert my existing Amazon DocumentDB cluster to an Elastic Clusters cluster?;No, in the near-term, you can leverage AWS Database Migration service (DMS) to migrate data from an existing Amazon DocumentDB cluster to an Elastic Clusters cluster. /documentdb/faqs/;How do I define a shard key?;Choosing an optimal shard key for Elastic Clusters is no different than other databases. A great shard key has two characteristics - high frequency and high cardinality. For example, if your application stores user_orders in DocumentDB, then generally you have to retrieve the data by the user. Therefore, you want all orders related to a given user to be in one shard. In this case, user_id would be a good shard key. Read more information. /documentdb/faqs/;What are the concepts associated with Elastic Clusters?;Elastic Clusters: An Amazon DocumentDB cluster that allows you to scale your workload’s throughput to millions of reads/writes per second and storage to petabytes. An Elastic Cluster cluster comprises of one or more shards for compute and a storage volume, and is highly available across multiple Availability Zones by default. 
Shard: A shard provides compute for an Elastic Clusters cluster. By default, a shard has three nodes: one writer node and two reader nodes. You can have a maximum of 32 shards, and each shard can have a maximum of 64 vCPUs. Shard key: A shard key is an optional field in your JSON documents that Elastic Clusters uses to distribute read and write traffic to the matching shard. Pick a key that has many unique values; a good shard key will evenly partition your data across the underlying shards, giving your workload the best throughput and performance. Sharded collection: A collection whose data is distributed across an Elastic Clusters cluster. /documentdb/faqs/;How does Elastic Clusters relate to other AWS services?;Elastic Clusters integrates with other AWS services in the same way DocumentDB does today. First, you can use AWS Database Migration Service (DMS) to migrate from MongoDB and relational databases to Elastic Clusters. Second, you can monitor the health and performance of your Elastic Clusters cluster using Amazon CloudWatch. Third, you can set up authentication and authorization through AWS IAM users and roles and use Amazon VPC for secure VPC-only connections. Last, you can use AWS Glue to import and export data from/to other AWS services such as Amazon S3, Amazon Redshift, and Amazon OpenSearch Service. /documentdb/faqs/;Can I migrate my existing MongoDB sharded workloads to Elastic Clusters?;Yes. You can migrate your existing MongoDB sharded workloads to Elastic Clusters. You can either use the AWS Database Migration Service or native MongoDB tools, such as mongodump and mongorestore, to migrate your MongoDB workload to Elastic Clusters. Elastic Clusters also supports MongoDB's commonly used APIs, such as shardCollection(), giving you the flexibility to reuse existing tooling and scripts with Amazon DocumentDB. /documentdb/faqs/;What are the minimum and maximum storage limits of an Amazon DocumentDB cluster?;The minimum storage is 10 GB. Based on your cluster usage, your Amazon DocumentDB storage will automatically grow, up to 128 TiB in 10 GB increments with no impact on performance. With Amazon DocumentDB Elastic Clusters, storage will automatically grow up to 4 PiB in 10 GB increments. In either case, there is no need to provision storage in advance. /documentdb/faqs/;How does Amazon DocumentDB scale?;Amazon DocumentDB scales in two dimensions: storage and compute. Amazon DocumentDB's storage automatically scales from 10 GB to 128 TiB in instance-based clusters, and up to 4 PiB for Amazon DocumentDB Elastic Clusters. Amazon DocumentDB's compute capacity can be scaled up (vertically) by moving to larger instances and scaled out (horizontally, for greater read throughput) by adding replica instances to the cluster. /documentdb/faqs/;How do I scale the compute resources associated with my Amazon DocumentDB cluster?;"You can scale the compute resources allocated to your instance in the AWS Management Console by selecting the desired instance and clicking the “modify” button. Memory and CPU resources are modified by changing your instance class. When you modify your instance class, your requested changes will be applied during your specified maintenance window. Alternatively, you can use the ""Apply Immediately"" flag to apply your scaling requests immediately. Both of these options will have an availability impact for a few minutes as the scaling operation is performed. Bear in mind that any other pending system changes will also be applied."
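Changing the instance class is how compute is scaled for instance-based clusters, as described above. A minimal boto3 sketch, assuming an existing instance; the identifier and target class are placeholders:

import boto3

docdb = boto3.client("docdb")

# Scale compute by moving the instance to a larger (or smaller) instance class.
# ApplyImmediately=True applies the change now instead of waiting for the
# maintenance window; expect a brief availability impact while it is applied.
docdb.modify_db_instance(
    DBInstanceIdentifier="sample-docdb-instance",  # placeholder identifier
    DBInstanceClass="db.r6g.large",                # placeholder target class
    ApplyImmediately=True,
)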
/documentdb/faqs/;How do I enable backups for my cluster?;Automated backups are always enabled on Amazon DocumentDB clusters. Amazon DocumentDB's simple database backup capability enables point-in-time recovery for your clusters. You can increase your backup retention period for point-in-time restores to up to 35 days. Backups do not impact database performance. /documentdb/faqs/;Can I take cluster snapshots and keep them around as long as I want?;Yes. Manual snapshots can be retained beyond the backup retention period, and there is no performance impact when taking snapshots. Note that restoring data from cluster snapshots requires creating a new cluster. /documentdb/faqs/;If my instance fails, what is my recovery path?;Amazon DocumentDB automatically maintains six copies of your data across three Availability Zones (AZs) and will automatically attempt to recover your instance in a healthy AZ with no data loss. In the unlikely event your data is unavailable within Amazon DocumentDB storage, you can restore from a cluster snapshot or perform a point-in-time restore operation to a new cluster. Note that the latest restorable time for a point-in-time restore operation can be up to five minutes in the past. /documentdb/faqs/;What happens to my automated backups and cluster snapshots if I delete my cluster?;You can choose to create a final snapshot when deleting your cluster. If you do, you can use this snapshot to restore the deleted cluster at a later date. Amazon DocumentDB retains this final user-created snapshot along with all other manually created snapshots after the cluster is deleted. Only snapshots are retained after the cluster is deleted (i.e., automated backups created for point-in-time restore are not kept). /documentdb/faqs/;What happens to my automated backups and cluster snapshots if I delete my account?;Deleting your AWS account will delete all automated backups and snapshot backups contained in the account. /documentdb/faqs/;Can I share my snapshots with another AWS account?;Yes. Amazon DocumentDB gives you the ability to create snapshots of your cluster, which you can use later to restore a cluster. You can share a snapshot with a different AWS account, and the owner of the recipient account can use your snapshot to restore a cluster that contains your data. You can even choose to make your snapshots public – that is, anybody can restore a cluster containing your (public) data. You can use this feature to share data between your various environments (production, dev/test, staging, etc.) that have different AWS accounts, as well as keep backups of all your data secure in a separate account in case your main AWS account is ever compromised. /documentdb/faqs/;Will I be billed for shared snapshots?;There is no charge for sharing snapshots between accounts. However, you may be charged for the snapshots themselves, as well as any clusters that you restore from shared snapshots. /documentdb/faqs/;Can I automatically share snapshots?;We do not support sharing automatic cluster snapshots. To share an automatic snapshot, you must manually create a copy of the snapshot, and then share the copy. /documentdb/faqs/;Can I share my Amazon DocumentDB snapshots across different regions?;No. Your shared Amazon DocumentDB snapshots will only be accessible by accounts in the same region as the account that shares them. /documentdb/faqs/;Can I share an encrypted Amazon DocumentDB snapshot?;Yes. You can share encrypted Amazon DocumentDB snapshots.
The recipient of the shared snapshot must have access to the KMS key that was used to encrypt the snapshot. /documentdb/faqs/;Can I use Amazon DocumentDB snapshots outside of the service?;No. Amazon DocumentDB snapshots can only be used inside of the service. /documentdb/faqs/;What happens to my backups if I delete my cluster?;You can choose to create a final snapshot when deleting your cluster. If you do, you can use this snapshot to restore the deleted cluster at a later date. Amazon DocumentDB retains this final user-created snapshot along with all other manually created snapshots after the cluster is deleted. /documentdb/faqs/;How does Amazon DocumentDB improve my cluster's fault tolerance to disk failures?;Amazon DocumentDB automatically divides your storage volume into 10 GB segments spread across many disks. Each 10 GB chunk of your storage volume is replicated six ways, across three Availability Zones (AZs). Amazon DocumentDB is designed to transparently handle the loss of up to two copies of data without affecting write availability and up to three copies without affecting read availability. Amazon DocumentDB's storage volume is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically. /documentdb/faqs/;How does Amazon DocumentDB improve recovery time after a database crash?;Unlike other databases, after a database crash, Amazon DocumentDB does not need to replay the redo log from the last database checkpoint (typically five minutes) and confirm that all changes have been applied before making the database available for operations. This reduces database restart times to less than 60 seconds in most cases. Amazon DocumentDB moves the cache out of the database process and makes it available immediately at restart time, so you do not have to throttle access until the cache is repopulated, which helps avoid brownouts. /documentdb/faqs/;What kind of replicas does Amazon DocumentDB support?;Amazon DocumentDB supports read replicas, which share the same underlying storage volume as the primary instance. Updates made by the primary instance are visible to all Amazon DocumentDB replicas. Amazon DocumentDB read replicas support up to 15 replicas per cluster, asynchronous replication (typically milliseconds), low performance impact on the primary, use as a failover target with no data loss, and automated failover. /documentdb/faqs/;Can I have cross-region replicas with Amazon DocumentDB?;Yes, you can replicate your data across regions using the Global Cluster feature. Global clusters span multiple AWS Regions and replicate your data to clusters in up to five Regions with little to no impact on performance. Global clusters provide faster recovery from Region-wide outages and enable low-latency global reads. To learn more, see our blog post. /documentdb/faqs/;Can I prioritize certain replicas as failover targets over others?;Yes. You can assign a promotion priority tier to each instance on your cluster. If the primary instance fails, Amazon DocumentDB will promote the replica with the highest priority to primary. If two or more replicas share the same priority tier, Amazon DocumentDB will promote the replica that is the same size as the primary instance. /documentdb/faqs/;Can I modify priority tiers for instances after they have been created?;You can modify the priority tier for an instance at any time. Simply modifying priority tiers will not trigger a failover.
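Promotion priority tiers are set per instance, and lower tier numbers are promoted first during failover. A minimal boto3 sketch, assuming an existing replica instance (the identifier is a placeholder):

import boto3

docdb = boto3.client("docdb")

# Make this replica the preferred failover target; tier 0 is the highest priority.
# Changing the tier alone does not trigger a failover.
docdb.modify_db_instance(
    DBInstanceIdentifier="sample-docdb-replica",  # placeholder replica identifier
    PromotionTier=0,
    ApplyImmediately=True,
)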
/documentdb/faqs/;Can I prevent certain replicas from being promoted to the primary instance?;You can assign lower priority tiers to replicas that you do not want promoted to the primary instance. However, if the higher-priority replicas on the cluster are unhealthy or unavailable for some reason, then Amazon DocumentDB will promote the lower-priority replica. /documentdb/faqs/;How does Amazon DocumentDB assure high availability of my cluster?;Amazon DocumentDB can be deployed in a high-availability configuration by using replica instances in multiple AWS Availability Zones as failover targets. In the event of a primary instance failure, a replica instance is automatically promoted to be the new primary with minimal service interruption. /documentdb/faqs/;How can I improve upon the availability of a single Amazon DocumentDB instance?;You can add additional Amazon DocumentDB replicas. Amazon DocumentDB replicas share the same underlying storage as the primary instance. Any Amazon DocumentDB replica can be promoted to become primary without any data loss and therefore can be used for enhancing fault tolerance in the event of a primary instance failure. To increase cluster availability, simply create one to 15 replicas in multiple AZs, and Amazon DocumentDB will automatically include them in failover primary selection in the event of an instance outage. /documentdb/faqs/;What happens during failover and how long does it take?;Failover is automatically handled by Amazon DocumentDB so that your applications can resume database operations as quickly as possible without manual administrative intervention. If you have an Amazon DocumentDB replica instance in the same or a different Availability Zone, when failing over, Amazon DocumentDB flips the canonical name record (CNAME) for your instance to point at the healthy replica, which is in turn promoted to become the new primary. Start-to-finish, failover typically completes within 30 seconds. If you do not have an Amazon DocumentDB replica instance (i.e., a single-instance cluster), Amazon DocumentDB will attempt to create a new instance in the same Availability Zone as the original instance. This replacement of the original instance is done on a best-effort basis and may not succeed, for example, if there is an issue that is broadly affecting the Availability Zone. Your application should retry database connections in the event of connection loss. /documentdb/faqs/;If I have a primary instance and an Amazon DocumentDB replica instance actively taking read traffic and a failover occurs, what happens?;Amazon DocumentDB will automatically detect a problem with your primary instance and begin routing your read/write traffic to an Amazon DocumentDB replica instance. On average, this failover will complete within 30 seconds. In addition, the read traffic that your Amazon DocumentDB replica instances were serving will be briefly interrupted. /documentdb/faqs/;How far behind the primary will my replicas be?;Since Amazon DocumentDB replicas share the same data volume as the primary instance, there is virtually no replication lag. We typically observe lag times in the tens of milliseconds. /documentdb/faqs/;Can I use Amazon DocumentDB in Amazon Virtual Private Cloud (Amazon VPC)?;Yes. All Amazon DocumentDB clusters must be created in a VPC. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you might operate in your own data center.
This gives you complete control over who can access your Amazon DocumentDB clusters. /documentdb/faqs/;Does Amazon DocumentDB support role-based access control (RBAC)?;Amazon DocumentDB supports RBAC with built-in roles. RBAC enables you to enforce least privilege as a best practice by restricting the actions that users are authorized to perform. For more information, see Amazon DocumentDB role-based access control. /documentdb/faqs/;How do the existing MongoDB authentication modes work with Amazon DocumentDB?;Amazon DocumentDB utilizes the VPC's strict network and authorization boundary. Authentication and authorization for Amazon DocumentDB management APIs is provided by IAM users, roles, and policies. Authentication to an Amazon DocumentDB database is done via standard MongoDB tools and drivers with the Salted Challenge Response Authentication Mechanism (SCRAM), the default authentication mechanism for MongoDB. /documentdb/faqs/;Does Amazon DocumentDB support encrypting my data-at-rest?;Yes. Amazon DocumentDB allows you to encrypt your clusters using keys you manage through AWS Key Management Service (KMS). On a cluster running with Amazon DocumentDB encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, snapshots, and replicas in the same cluster. Encryption and decryption are handled seamlessly. For more information about the use of KMS with Amazon DocumentDB, see Encrypting Amazon DocumentDB Data at Rest. /documentdb/faqs/;Can I encrypt an existing unencrypted cluster?;Currently, encrypting an existing unencrypted Amazon DocumentDB cluster is not supported. To use Amazon DocumentDB encryption for an existing unencrypted cluster, create a new cluster with encryption enabled and migrate your data into it (a minimal sketch of creating an encrypted cluster follows below). /documentdb/faqs/;What compliance certifications does Amazon DocumentDB meet?;Amazon DocumentDB was designed to meet the highest security standards and to make it easy for you to verify our security and meet your own regulatory and compliance obligations. Amazon DocumentDB has been assessed to comply with PCI DSS, ISO 9001, 27001, 27017, and 27018, SOC 1, 2, and 3, and Health Information Trust Alliance (HITRUST) Common Security Framework (CSF) certification, in addition to being HIPAA eligible. AWS compliance reports are available for download in AWS Artifact. /application-discovery/faqs/;What is AWS Application Discovery Service?;AWS Application Discovery Service collects and presents data to enable enterprise customers to understand the configuration, usage, and behavior of servers in their IT environments. Server data is retained in the Application Discovery Service, where it can be tagged and grouped into applications to help organize AWS migration planning. Collected data can be exported for analysis in Excel or other cloud migration analysis tools. /application-discovery/faqs/;How does the Application Discovery Service help enterprises migrate to AWS?;Application Discovery Service helps enterprises obtain a snapshot of the current state of their data center servers by collecting server specification information, hardware configuration, performance data, and details of running processes and network connections. Once the data is collected, you can use it to perform a Total Cost of Ownership (TCO) analysis and then create a cost-optimized migration plan based on your unique business requirements.
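As referenced in the encryption answer above, encryption at rest is chosen when a cluster is created. A minimal boto3 sketch, assuming you are creating a fresh cluster to migrate into; the identifiers, password, and KMS key ARN are placeholders:

import boto3

docdb = boto3.client("docdb")

# Create a new cluster with encryption at rest enabled; an existing unencrypted
# cluster cannot be encrypted in place, so data must be migrated into this cluster.
docdb.create_db_cluster(
    DBClusterIdentifier="sample-encrypted-cluster",     # placeholder identifier
    Engine="docdb",
    MasterUsername="masteruser",                        # placeholder user
    MasterUserPassword="replace-with-a-strong-secret",  # placeholder; prefer a secrets manager
    StorageEncrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/example-key-id",  # placeholder key ARN
)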
/application-discovery/faqs/;How does Application Discovery Service work?;Application Discovery Service supports both agent-based and agentless on-premises tooling, in addition to file-based import. With agentless discovery, customers deploy the tooling on centralized servers, which then leverage public APIs within the on-premises environment to discover resources and monitor utilization. This process allows one install to monitor many servers. For customers that need higher-resolution data, including information about running processes, the collection tooling is deployed on each server (physical or virtual) within the on-premises environment. /application-discovery/faqs/;How can I get started using Application Discovery Service?;To get started with Application Discovery Service, simply visit the AWS Migration Hub console. /application-discovery/faqs/;Where is Application Discovery Service available?;Application Discovery Service is available worldwide. This means you can perform discovery on resources regardless of their location. To see which regions the service is hosted in, please refer to the AWS Region Table. /application-discovery/faqs/;What is the Migration Hub home region?;Before using the Migration Hub and Application Discovery Service, you need to select a Migration Hub home region from the Migration Hub Settings page or using the Migration Hub Config API. Application Discovery Service uses the Migration Hub home region as the only AWS region to store your discovery and planning data. The data stored in the Migration Hub home region provides a single repository of discovery and migration planning information for your entire portfolio and a single view of migrations into multiple AWS regions. See the docs to learn more about the Migration Hub home region. /application-discovery/faqs/;Which on-premises discovery tool should I use?;Customers looking for process-level dependency data are advised to use the AWS Application Discovery Service Discovery Agent. The data collected may be used in both Migration Hub and Amazon Athena. /application-discovery/faqs/;What data does the AWS Application Discovery Service Discovery Agent capture?;The Discovery Agent captures system configuration, system performance, running processes, and details of the network connections between systems. /application-discovery/faqs/;What operating systems does Application Discovery Service provide agents for?;Application Discovery Service launched a new 2.0 version of the Discovery Agent that offers better operating system support. The 2.0 version of the Discovery Agent supports Microsoft Windows Server 2008 R1 SP2, 2008 R2 SP1, 2012 R1, 2012 R2, 2016, 2019, Amazon Linux 2012.03, 2015.03, Amazon Linux 2 (9/25/2018 update and later), Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04, Red Hat Enterprise Linux 5.11, 6.10, 7.3, 7.7, 8.1, CentOS 5.11, 6.9, 7.3, and SUSE 11 SP4, 12 SP5. /application-discovery/faqs/;How is the data protected while in transit to AWS?;The Discovery Agent uses HTTPS/TLS to transmit data to Application Discovery Service. The Discovery Agent can be operated in an offline test mode that writes data to a local file so customers can review collected data before enabling online mode. /application-discovery/faqs/;How do I install the Discovery Agent in my data center?;Please refer to the documentation for details on how to install the Discovery Agent.
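After Discovery Agents are installed and have registered, data collection can be started from the console or programmatically. A minimal boto3 sketch, assuming agents are already registered in your Migration Hub home region (the region shown is an example):

import boto3

# The Application Discovery Service API must be called in your Migration Hub home region.
discovery = boto3.client("discovery", region_name="us-west-2")  # example home region

# List the agents that have registered, then turn on data collection for all of them.
agents = discovery.describe_agents()
agent_ids = [agent["agentId"] for agent in agents["agentsInfo"]]
if agent_ids:
    discovery.start_data_collection_by_agent_ids(agentIds=agent_ids)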
/application-discovery/faqs/;Will the Discovery Agent grant AWS remote access to my data center server?;No, the Discovery Agent deployed on your data center server will not grant AWS remote access. However, the Discovery Agent does need to establish an outbound SSL connection to transfer the collected data to AWS. /application-discovery/faqs/;Can I run agents in my EC2 instances?;Yes. You can install the Discovery Agents on your EC2 instances to perform discovery and report upon performance information, network connections, and running processes, just as for any other server. /application-discovery/faqs/;What does ‘agentless’ Application Discovery mean?;‘Agentless’ means that software does not need to be installed on each host to use Application Discovery. Simply install the Agentless Collector as an OVA on the VMware vCenter. /application-discovery/faqs/;What data does the Agentless Collector capture?;The Agentless Collector is delivered as an Open Virtual Appliance (OVA) package that can be deployed to a VMware host. Once configured with credentials to connect to vCenter, the Agentless Collector collects VM inventory, configuration, and performance history such as CPU, memory, and disk usage, and uploads it to an Application Discovery Service data store. /application-discovery/faqs/;What operating systems does the agentless discovery support?;Agentless discovery is OS agnostic. It collects information about VMware virtual machines regardless of the VM operating system. /application-discovery/faqs/;How is the data protected while in transit to AWS?;The Agentless Collector uses HTTPS/TLS to transmit data to the Application Discovery Service. /application-discovery/faqs/;How do I install the Agentless Collector in my data center?;Please refer to the documentation for details on how to install the Agentless Collector. /application-discovery/faqs/;How can I start the data collection?;Data collection is controlled from the local Agentless Collector UI. You will need access to the on-premises environment to start or stop collection. /application-discovery/faqs/;Will the Agentless Collector grant AWS remote access to my data center servers?;No, the Agentless Collector deployed on your VMware environment will not grant AWS remote access to your data center servers. However, the tool requires VMware credentials in order to collect data. These credentials reside locally and are never shared with AWS. The Agentless Collector establishes an outbound SSL connection to transfer only the collected data. /application-discovery/faqs/;Can I run agentless discovery in my EC2 instances?;No. The Agentless Collector installs on VMware and collects information only from the VMware vCenter. /application-discovery/faqs/;What kind of information is captured by Application Discovery Service?;Application Discovery Service is designed to capture a variety of data, including static configuration such as server hostnames, IP addresses, MAC addresses, CPU allocation, network throughput, memory allocation, disk resource allocations, and DNS servers. It also captures resource utilization metrics such as CPU usage and memory usage. In addition, the Discovery Agent can help determine server workloads and network relationships by identifying network connections between systems. /application-discovery/faqs/;Does this service capture any storage metrics?;Yes, disk metrics, such as read and write volume, throughput, allocated/provisioned and utilized capacity, are captured by this service.
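The captured server data can be read back through the Application Discovery Service API once collection has run. A minimal boto3 sketch; the flattened attribute key names shown (for example "server.hostName") are illustrative and should be checked against the API reference:

import boto3

discovery = boto3.client("discovery")

# List discovered servers, then pull the detailed configuration attributes for each one.
servers = discovery.list_configurations(configurationType="SERVER", maxResults=10)
server_ids = [item["server.configurationId"] for item in servers["configurations"]]
if server_ids:
    details = discovery.describe_configurations(configurationIds=server_ids)
    for config in details["configurations"]:
        print(config.get("server.hostName"), config.get("server.osName"))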
/application-discovery/faqs/;How often is the information within Application Discovery Service updated?;Information is gathered only when the Discovery Agent or the Agentless Collector is online. /application-discovery/faqs/;Can I ingest data into Application Discovery Service from my existing configuration management database (CMDB)?;Yes, you can import information about your on-premises servers and applications into the Migration Hub so you can track the status of application migrations. To import your data, you can download and populate the import CSV template and then upload it using the Migration Hub import console or by invoking the Application Discovery Service APIs. /application-discovery/faqs/;How do I access the data from this service?;Summary data can be viewed in the AWS console. You can export detailed data collected by Application Discovery Service using the AWS Console or a public API. The service exports data in CSV format. /application-discovery/faqs/;What can I see inside Amazon Athena with Data Exploration turned on?;If the Data Exploration feature is turned on when you get to Amazon Athena, you can see a database called “application_discovery_service_database”. Inside the database, a set of tables is created for you by default. /application-discovery/faqs/;Is there cost associated with using Application Discovery Service’s Data Exploration feature?;The Application Discovery Service discovery tools are available at no charge. However, additional charges may apply for streaming agent data via Amazon Kinesis Data Firehose, for storing the agent data in an S3 bucket, and for querying the agent data in Amazon Athena. The cost of using each of these AWS resources will vary based on the actual time period you collect data via agents, the number of agents you have deployed, the network activity on each server where the agent is deployed, and the number of queries that are run on the collected data. /application-migration-service/faqs/;What is AWS Application Migration Service?; To start using AWS Application Migration Service, sign in to the AWS Management Console and navigate to “AWS Application Migration Service” in the “Migration & Transfer” category. You can follow the steps provided in the console to set up and operate AWS Application Migration Service, or refer to the Quick Start Guide. /application-migration-service/faqs/;How do I get started with AWS Application Migration Service?;The AWS Application Migration Service (AWS MGN) – A Technical Introduction training is recommended if you want to learn more about how to use the service, or if you are assisting customers with a migration using AWS Application Migration Service. This free online training is offered by AWS Training and Certification, and covers key concepts, service architecture, and implementation approaches for AWS Application Migration Service. /application-migration-service/faqs/;What source infrastructure does AWS Application Migration Service support?; AWS Application Migration Service allows you to migrate physical, virtual, and cloud source servers to AWS for a variety of supported operating systems (OS). AWS Application Migration Service supports commonly used applications such as SAP, Oracle, and Microsoft SQL Server. /application-migration-service/faqs/;What operating systems and applications are supported by AWS Application Migration Service?; Yes.
AWS Application Migration Service supports agentless replication from VMware vCenter versions 6.7 and 7.0 to AWS. You can perform agentless snapshot replication from your vCenter source environment to AWS by installing the AWS MGN vCenter Client in your vCenter environment. This option is intended for users who want to rehost their applications to AWS but cannot install agents due to company policies or technical restrictions. When possible, we recommend using the agent-based replication option as it provides continuous data replication and shortens cutover windows. /application-migration-service/faqs/;Does AWS Application Migration Service support agentless replication?; Contact AWS Premium Support to receive product support for AWS Application Migration Service according to your support plan. /application-migration-service/faqs/;How can I receive product support for AWS Application Migration Service?; The shortened service name for AWS Application Migration Service is AWS MGN. “MGN” is an abbreviation of the word “migration.” /application-migration-service/faqs/;What does “AWS MGN” stand for?;" AWS Application Migration Service (AWS MGN) is based on CloudEndure Migration technology and improves on it. You can find more information on when to use CloudEndure Migration and AWS Application Migration Service in the section below titled ""AWS Application Migration Service and other AWS services.""" /application-migration-service/faqs/;Where can I find CloudEndure Migration?;Note: CloudEndure Migration is no longer available in most AWS Regions. It will continue to be available for use in AWS GovCloud and China Regions and AWS Outposts through November 30, 2023. Learn more /application-migration-service/faqs/;Can I use AWS Migration Hub with AWS Application Migration Service?; Yes, you can use AWS Application Migration Service to migrate your instances and your databases from EC2-Classic to VPC with minimal downtime. Learn more about how to prepare for EC2-Classic retiring and how to migrate from EC2-Classic to a VPC. /application-migration-service/faqs/;Can I use AWS Application Migration Service to migrate from EC2-Classic to a VPC?; AWS Application Migration Service is the next generation of CloudEndure Migration, and offers key features and operational benefits that are not available with CloudEndure Migration. View the technical comparison table. /application-migration-service/faqs/;How am I charged for AWS Application Migration Service?;The free period starts as soon as you install the AWS Replication Agent on your source server and continues during active source server replication. /application-migration-service/faqs/;How is my data encrypted while in transit from my source server to AWS using AWS Application Migration Service?; Yes, with AWS Application Migration Service you can control the data replication path using private connectivity options such as a VPN, AWS Direct Connect, VPC peering, or another private connection. Learn more about private connectivity options with AWS Application Migration Service. /application-migration-service/faqs/;Can I avoid using the public internet to replicate my data to AWS using AWS Application Migration Service?; Please refer to the Security in AWS Application Migration Service documentation to understand how to apply the shared responsibility model when you use AWS Application Migration Service.
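After the AWS Replication Agent is installed, source servers appear in the MGN console and can also be inspected through the API. A minimal boto3 sketch, assuming agents have already registered; the target region is a placeholder:

import boto3

# AWS MGN API calls are made in the AWS Region you are migrating into.
mgn = boto3.client("mgn", region_name="us-east-1")  # example target region

# List registered source servers and their current lifecycle state
# (for example, whether they are ready for test or cutover).
servers = mgn.describe_source_servers(filters={})
for server in servers["items"]:
    print(server["sourceServerID"], server["lifeCycle"]["state"])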
/application-migration-service/faqs/;How can I learn more about keeping my data secure when using AWS Application Migration Service?; Please refer to the AWS Regional Services List for the most up-to-date information. /migration-hub/faqs/;What is AWS Migration Hub?;AWS Migration Hub provides access to the tools you need to collect and inventory your existing IT assets based on actual usage, analyze application components and infrastructure dependencies, and group resources into applications. You can generate migration strategy and Amazon Elastic Compute Cloud (EC2) instance recommendations for business case and migration planning, track the progress of application migrations to AWS, and modernize applications already running on AWS. /migration-hub/faqs/;Why should I use AWS Migration Hub?;"AWS Migration Hub is the one destination for cloud migration and modernization, giving you the tools you need to accelerate and simplify your journey with AWS. If you’re making the case for cloud within your organization; planning, executing, and tracking a portfolio of applications migrating to AWS; or modernizing applications currently running on AWS, Migration Hub can help with your cloud transformation journey." /migration-hub/faqs/;What migration tools integrate with AWS Migration Hub?;AWS Application Migration Service, AWS Server Migration Service, AWS Database Migration Service, and ATADATA ATAmotion are integrated with AWS Migration Hub and automatically report migration status to Migration Hub. See the Migration Hub Documentation for more details about authorizing tools to send status to Migration Hub. /migration-hub/faqs/;How does AWS Migration Hub help me track the progress of my application migrations?;AWS Migration Hub helps you by providing visibility into your migration progress. You use one of the integrated migration tools and then return to the hub to see the status of your migration. You can group servers into applications once the migration has started, or you can discover and group your servers before you start. /migration-hub/faqs/;How does AWS Migration Hub help me understand my IT environment?;AWS Migration Hub helps you understand your IT environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service’s repository. With the repository populated, you can view technical specifications and performance information about the discovered resources in Migration Hub. You can export data from the Application Discovery Service repository, analyze it, and import server groupings as an “application”. Once grouped, the application grouping is used to aggregate the migration status from each migration tool used to migrate the servers and databases in the application. /migration-hub/faqs/;How much does it cost to use AWS Migration Hub?;AWS Migration Hub is available to all AWS customers at no additional charge. You pay only for the cost of the migration tools you use and any resources being consumed on AWS. /migration-hub/faqs/;What does your AWS Migration Hub Refactor Spaces Service Level Agreement guarantee?;Our SLA guarantees a Monthly Uptime Percentage of at least 99.9% for AWS Migration Hub Refactor Spaces within a Region. /migration-hub/faqs/;How do I know if I qualify for an SLA Service Credit?;You are eligible for an SLA credit for AWS Migration Hub Refactor Spaces if the Region that you are operating in has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle.
For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see https://aws.amazon.com/migration-hub/sla/refactor-spaces/. /migration-hub/faqs/;How do I get started with AWS Migration Hub?;Follow the Getting Started Guide in our documentation to get started. /migration-hub/faqs/;What is the Migration Hub home Region?;Before using most features in Migration Hub (except Refactor Spaces), you need to select a Migration Hub home Region from the Migration Hub Settings page or by using the Migration Hub Config API. /migration-hub/faqs/;What Regions can I migrate to using AWS Migration Hub?;AWS Migration Hub helps you track the status of your migrations in all AWS Regions, provided your migration tools are available in that Region. The migration tools that integrate with Migration Hub (for example, AWS Application Migration Service and AWS Database Migration Service) send migration status to your selected Migration Hub home Region. The home Region is used to store your discovery and migration tracking data and is set prior to first use of the service. Migration status is aggregated from all destination Regions and visible in the home Region. Note that integrated tools will not send status unless you have authorized (connected) them on the Tools page of the Migration Hub console. /migration-hub/faqs/;Where is AWS Migration Hub available?;AWS Migration Hub is available worldwide for tracking the progress of application migrations, regardless of where the application currently resides. Refer to the AWS Region Table for availability of Migration Hub tools for inventory collection, planning and recommendation, and modernization capabilities. /migration-hub/faqs/;How is access granted to AWS Migration Hub?;AWS Migration Hub requires an AWS account role, which will be added automatically the first time you access the console as an admin user. Integrated migration tools can be authorized on the Tools page of the Migration Hub console. Refer to the Authentication and Access Control section of the AWS Migration Hub User Guide for more details. /migration-hub/faqs/;How does AWS Migration Hub help me understand my IT environment?;AWS Migration Hub helps you understand your IT environment by letting you explore information collected by AWS discovery tools and stored in the AWS Application Discovery Service repository. With the repository populated, you can view technical specifications and performance information about the discovered resources in Migration Hub, analyze it, visualize and tag server and application dependencies, and group servers into applications. You can also export data and import groupings as an “application.” Once grouped, the application grouping is used to aggregate the migration status from each migration tool used to migrate the servers and databases in the application. /migration-hub/faqs/;How do I view my IT portfolio in AWS Migration Hub?;To view your IT assets in AWS Migration Hub, first you perform discovery using an AWS discovery tool or by migrating with an integrated migration tool. Then you can explore your environment from within Migration Hub. You can learn more about any resource found by clicking on the server ID shown on the Servers page of the Migration Hub console. You then see the server details page. If you used an AWS discovery tool to discover your servers, you will see collected data, including technical specifications and average utilization. 
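Grouping discovered servers into an application can also be done programmatically with the Application Discovery Service API, which backs the Migration Hub console experience described above. A minimal boto3 sketch; the application name and configuration ID are placeholders:

import boto3

discovery = boto3.client("discovery")

# Create an application grouping, then associate discovered servers with it so that
# migration status can be aggregated at the application level in Migration Hub.
app = discovery.create_application(name="sample-order-service")  # placeholder name
discovery.associate_configuration_items_to_application(
    applicationConfigurationId=app["configurationId"],
    configurationIds=["d-server-0123456789abcdef"],  # placeholder discovered server ID
)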
/migration-hub/faqs/;How do I add resources into the Discovery Repository?;When you first visit AWS Migration Hub, you’re prompted to perform discovery or start migrating. If you decide to start migrating without performing discovery, your application servers and database servers will appear as resources in Migration Hub as you migrate them with integrated migration tools that you’ve authorized in the Migration Hub console. /migration-hub/faqs/;How do I group servers into an application?;Before grouping servers into an application, you need to populate AWS Migration Hub’s Servers list. Servers are added to the Servers list whenever you run AWS discovery tools or by using an integrated migration tool. Once your Servers list is populated, select one or more resources on the Servers page in the Migration Hub console and then choose “Group as Application.” If you’re discovering servers using the AWS Discovery agent, you can also group them into applications from the network visualization tool. Select one or more servers from the network graph and choose “Group as application.” /migration-hub/faqs/;How do I view applications?;You can visit the Applications page in the Migrate section of the AWS Migration Hub console to see the list of applications and their current migration status. Only resources that are grouped to applications using the Discover section’s Servers page or AWS SDK/CLI will appear on the Applications page. Applications can have one of three migration statuses: “not started,” “in progress,” and “completed.” /migration-hub/faqs/;Can I see applications created by other users within the same account?;Yes. Applications created by any IAM user within an account will be visible to any other IAM users within the same account that are granted access to AWS Migration Hub. Any changes made will be visible to all users with permission. /migration-hub/faqs/;Can I see applications that exist in other AWS accounts?;"You access AWS Migration Hub using an IAM User associated with an AWS account. This only allows you to see details from your AWS account; you do not have visibility into other accounts." /migration-hub/faqs/;How does the AWS Migration Hub import feature work?;You can access the AWS Migration Hub import feature either from the Migration Hub console or by invoking the Application Discovery Service APIs. The imported data is stored in the Application Discovery Service data repository in encrypted format. /migration-hub/faqs/;What kind of data can I import using the import template?;Migration Hub import allows you to import server details, including server specifications, utilization, tags and applications that are associated with your servers. You can import data from any source as long as the data is populated using the Migration Hub CSV import template. /migration-hub/faqs/;I imported an incorrect file. Can I overwrite or delete it?;Yes. You can delete an incorrect file by visiting the Discover > Tools > Imports section and then selecting the “Delete imported data” option. To overwrite an existing imported file, delete the file and upload a new file with the corrected records. /migration-hub/faqs/;Is there a limit on the number of import files that I can upload?;No. There is no limit on the number of import files that you can upload. However, we do restrict the number of records and servers that you can import. For details, refer to the Migration Hub import limits section in the documentation. /migration-hub/faqs/;Do I need to pay for importing data?;No. 
There is no charge for importing your data. /migration-hub/faqs/;I don't have data for all the fields in the import template. Can I still import my data?;Yes. You can import data even if you don’t have data populated for all the fields in the import template. For each row, if you populate your own matching key (“ExternalId”), import will use it to uniquely identify and import the records. If you don’t specify the matching key, for each row, import will use the values specified for “IPAddress,” “HostName,” “MACAddress,” or a combination of “VMware.MoRefId” and “VMware.vCenterId” to determine the uniqueness of a given server. Rows that don’t contain a value for the matching key (“ExternalId”) or for any of the above fields will not be imported. /migration-hub/faqs/;What are the criteria for identifying an incorrect record?;Import does a data validation check for all the imported fields that are part of the CSV import template. For example, if the value of “IPAddress” is invalid, the import feature will flag that record as incorrect. In addition, an import record will be considered invalid and won’t be imported if it doesn’t have at least one of these fields populated: “ExternalId,” “MACAddress,” “HostName,” “IPAddress,” or a combination of “VMware.VCenterId” and “VMware.MoRefId.” /migration-hub/faqs/;What is the EC2 instance recommendations feature?;EC2 instance recommendations is a feature of AWS Migration Hub that analyzes the data collected from each on-premises server, including server specification, CPU, and memory utilization, to recommend the least expensive EC2 instance required to run the on-premises workload. You can also fine-tune recommendations by specifying preferences for AWS purchasing option, AWS Region, EC2 instance type exclusions, and CPU/RAM utilization metric (average, peak, or percentile). /migration-hub/faqs/;Do I need to install the AWS Application Discovery Service Discovery Connector or Discovery Agent to use the EC2 instance recommendations feature?;No. To use the EC2 instance recommendations feature, you need to ensure that on-premises server details are available in AWS Migration Hub. You can import existing server inventory information from a source such as a configuration management database (CMDB), or use the AWS Application Discovery Service to collect data directly from your environment. /migration-hub/faqs/;How does the EC2 instance recommendations feature provide a match for a given server?;The EC2 instance recommendations feature recommends the most cost-effective EC2 instance type that can satisfy the given CPU and RAM requirements while taking into account your selected instance type preferences such as AWS purchasing option, AWS Region, EC2 instance type exclusions, and CPU/RAM utilization metric (average, peak, or percentile). /migration-hub/faqs/;Does the EC2 instance recommendations feature provide recommendations for Burstable Performance Instances?;Yes. The EC2 instance recommendations feature provides recommendations for Burstable Performance Instances. It uses “average” and “peak” CPU data points to compute an estimated number of consumed CPU credits and associated cost to more accurately compare the projected price with other instance families. /migration-hub/faqs/;What happens if I have discovery data from multiple sources for the same server in AWS Migration Hub?
Which data source is used to calculate the EC2 instance recommendation for that server?;If discovery data is available from multiple sources for the same server, the EC2 instance recommendations feature will use the most recent and complete data to provide an instance recommendation. For example, if you upload the CPU/RAM specification for a given server using Migration Hub import, a recommendation will be generated based on the imported data. If you then install the AWS Application Discovery Service (ADS) Discovery Agent on this server, the ADS agent will also capture the server specification details. The next time you request EC2 instance recommendations for that server, the feature will use the ADS agent-collected specifications to generate the recommendation, since the agent's data is more recent and complete. /migration-hub/faqs/;Does the EC2 instance recommendations feature recommend current generation instances?;Yes. The EC2 instance recommendations feature only recommends current generation instances. It doesn't provide recommendations for previous generation instances. /migration-hub/faqs/;When should I use the EC2 instance recommendations feature in AWS Migration Hub compared to a more detailed cost assessment with TSO Logic?;Right-sizing your compute resources is one dimension of understanding your total cost of ownership (TCO). Use the EC2 instance recommendation feature of Migration Hub when you want an understanding of your projected EC2 costs. We also offer a more detailed assessment, including optimizations for Microsoft licensing and storage costs, using TSO Logic, an AWS Company. Contact AWS Sales or an AWS Partner to learn more about this detailed assessment. /migration-hub/faqs/;How do I use AWS Migration Hub when migrating applications?;After you’ve created one or more application groupings from servers discovered using AWS discovery tools or by starting to migrate using an integrated migration tool, you can start or continue to migrate the server or database outside Migration Hub. Return to Migration Hub to view the migration status of each resource in the application. /migration-hub/faqs/;Does AWS Migration Hub automatically migrate my applications for me?;No. AWS Migration Hub does not automate the steps of the migration. It provides a single place for you to track the progress of the applications you are migrating. /migration-hub/faqs/;What do I need to do in order for my application’s migration progress to appear in AWS Migration Hub?;To view migration progress in AWS Migration Hub, two things must be true. The resources that you’re migrating must be in the AWS Discovery repository, and you must use supported tools to perform the migration. If you start migrating without performing discovery with AWS Discovery Collectors, the servers or databases reported by supported migration tools will be automatically added to your AWS Application Discovery Service repository. Once they’re added, you can group these servers as applications and track their status in a single grouping as the migration progresses. /migration-hub/faqs/;What is the experience if I don’t do a strict re-host migration, moving the resources exactly from on-premises to AWS?;AWS Migration Hub will show the status of the resource migrations that are done with supported tools, provided that the resource is grouped in an application. It doesn’t need to be a strict rehost migration. 
For instance, if you move the contents of a database using AWS Database Migration Service, you will see updates in Migration Hub if the server corresponding to the database migration is grouped in an application. /migration-hub/faqs/;What if I’m using a tool that isn’t integrated with AWS Migration Hub?;Tools that are not integrated with AWS Migration Hub will not report status in the Migration Hub Management Console. You can still see the status of other resources in the application and the application-level status, or you can update the status via your own automation using the CLI or APIs. /migration-hub/faqs/;How can other tools publish status to AWS Migration Hub?;Migration tools can publish your status to AWS Migration Hub by writing to the AWS Migration Hub API (a minimal sketch follows below). Partners interested in onboarding must have achieved the Migration Competency through the AWS Competency Program. Learn more about the Competency Program and apply for the Migration Competency here. /migration-hub/faqs/;What is Strategy Recommendations?;AWS Migration Hub’s Strategy Recommendations helps you easily build a migration and modernization strategy for your applications running on premises or in AWS. Strategy Recommendations provides guidance on the strategy and tools that help you migrate and modernize at scale. /migration-hub/faqs/;Why should I use Strategy Recommendations?;Strategy Recommendations helps you identify a tailored migration and modernization strategy at scale and provides the tools and services to help you run the strategy. It also helps you identify the incompatibilities (anti-patterns) in the source code that need to be resolved to implement these recommendations. /migration-hub/faqs/;What migration and modernization options does Strategy Recommendations support?;Strategy Recommendations supports analysis for potential rehost (EC2) and replatform (managed environments such as RDS and Elastic Beanstalk, containers, and OS upgrades) options for applications running on Windows Server 2003 or above or a wide variety of Linux distributions, including Ubuntu, Red Hat, Oracle Linux, Debian, and Fedora. Strategy Recommendations offers additional refactor analysis for custom applications written in C# and Java, and licensed databases (such as Microsoft SQL Server and Oracle). /migration-hub/faqs/;What additional options do I have to modernize my Windows workloads?;Please visit Modernize Windows Workloads with AWS to learn more. /migration-hub/faqs/;What is application transformation?;Application transformation is the process of refactoring, rearchitecting, and rewriting applications to maximize the availability, scalability, business agility, and cost optimization benefits of running in the cloud. /migration-hub/faqs/;What is Refactor Spaces?;Refactor Spaces helps you accelerate application refactoring to take full advantage of computing in AWS and simplifies app transformation by making it easy to manage the refactor process while operating in production. By using Refactor Spaces, you focus on the refactor of your applications, not the creation and management of the underlying infrastructure that makes refactoring possible. Refactor Spaces helps reduce the business risk of evolving applications into microservices or extending legacy applications that can’t be modified with new features written in microservices.
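As referenced above in the question about other tools publishing status, updates are written through the AWS Migration Hub API in your Migration Hub home region. A minimal boto3 sketch; the stream name, task name, and home region are placeholders:

import boto3
from datetime import datetime, timezone

# Migration Hub API calls must target your Migration Hub home region.
mgh = boto3.client("mgh", region_name="us-west-2")  # example home region

stream = "sample-tool-stream"  # placeholder progress update stream name
mgh.create_progress_update_stream(ProgressUpdateStreamName=stream)

# Register a migration task, then report its state so it shows up in the Migration Hub console.
mgh.import_migration_task(ProgressUpdateStream=stream, MigrationTaskName="sample-db-migration")
mgh.notify_migration_task_state(
    ProgressUpdateStream=stream,
    MigrationTaskName="sample-db-migration",
    Task={"Status": "IN_PROGRESS", "StatusDetail": "Replicating data"},
    UpdateDateTime=datetime.now(timezone.utc),
    NextUpdateSeconds=300,
)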
/migration-hub/faqs/;Why should I use Refactor Spaces?;Refactor Spaces addresses a common pair of practical problems when transforming applications: setting up an infrastructure for application refactoring and operating evolving applications at scale. Refactor Spaces helps you combine existing applications and microservices into a single application while allowing different approaches for architecture and technology, team alignment, and process between the parts. Using Refactor Spaces, you can transform legacy applications or extend them with microservices that run on any AWS compute target (such as EC2, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, AWS Fargate, and AWS Lambda). Refactor Spaces provides significant time savings by creating an infrastructure for application refactoring in minutes. /migration-hub/faqs/;What kind of applications can I refactor?;Any application targeted for refactor, rewrite, or rearchitecture is a candidate for using Refactor Spaces as long as its external interface is an HTTP-based protocol and running in AWS (or it can be rehosted with Application Migration Service or replatformed first). Refactor Spaces is commonly used to refactor older legacy and monolithic applications, but it also helps you navigate the refactoring and rearchitecture of modern services and applications. /migration-hub/faqs/;How does Refactor Spaces work with other AWS services?;Refactor Spaces orchestrates other AWS services to create refactoring environments and stitch together existing applications and microservices into Refactor Spaces applications that are easier to operate while the app is evolving. Application refactoring environments are built using Transit Gateway, Resource Access Manager, and API Gateway. Using these, Refactor Spaces helps keep existing applications separate from microservices through a multi-account structure that is bridged together with networking for easy cross-account communication. /migration-hub/faqs/;What is a Refactor Spaces Environment?;A Refactor Spaces Environment provides a unified view of networking, applications, and services across AWS accounts and is the container for your existing application and new microservices. The environment orchestrates Transit Gateway, Resource Access Manager, and VPCs to bridge networking across accounts to simplify communication between old and new services. The account where an environment is created is the environment owner. The owner can share the environment with other AWS accounts and manages applications, services, and routes added to the environment. /migration-hub/faqs/;What is a Refactor Spaces Application?;A Refactor Spaces Application provides configurable request routing to your existing application and new microservices. Applications include a proxy that simplifies strangler fig refactoring in AWS. When creating an application inside an environment, Refactor Spaces orchestrates API Gateway, Network Load Balancer (NLB), and AWS Lambda resource policies. The application’s proxy and routing are used to keep underlying architecture changes transparent to app consumers. /migration-hub/faqs/;What is a Refactor Spaces Service?;A Refactor Spaces Service represents the endpoint of an existing application or a new microservice. Services can have a VPC with either a URL endpoint or an AWS Lambda endpoint. 
Refactor Spaces automatically bridges service VPCs together within an environment using Transit Gateway, and traffic is permitted between any AWS resources in service VPCs across all accounts in the environment. When setting a route to a service, if the service has a Lambda endpoint, traffic is routed using API Gateway’s Lambda integration. For services with a URL endpoint, traffic is routed using an API Gateway VPC Link and NLB target group. /migration-hub/faqs/;How do I start incremental app refactoring with Refactor Spaces?;Refactor Spaces can be used through the AWS Management Console, AWS SDK/CLI, or CloudFormation (CFN). Usually, you’ll want to start with at least two accounts: one for your existing application and one to own the Refactor Spaces environment and manage traffic routing between services. Your AWS accounts can be new or existing accounts that are standalone, part of an AWS Organization, or provisioned by AWS Control Tower. /migration-hub/faqs/;Can I privately access AWS Migration Hub Refactor Spaces APIs from my VPC without using public IP addresses?;Yes, you can privately access Refactor Spaces APIs from your VPC (created using Amazon Virtual Private Cloud) by creating VPC Endpoints. With VPC Endpoints, the routing between the VPC and Refactor Spaces is handled by the AWS network without the need for an internet gateway, NAT gateway, or virtual private network (VPN) connection. The latest generation of VPC Endpoints used by Refactor Spaces is powered by AWS PrivateLink, a technology that enables private connectivity between AWS services using Elastic Network Interfaces (ENIs) with private IP addresses in your VPCs. To learn more about PrivateLink support, visit the Refactor Spaces PrivateLink documentation. /migration-hub/faqs/;What is migration orchestration?;Migration orchestration is a process automation mechanism that uses templates, synchronizes multiple tasks into a workflow, and manages dependencies to achieve the desired goal of a migration project. /migration-hub/faqs/;What is AWS Migration Hub Orchestrator?;AWS Migration Hub Orchestrator is designed to automate and simplify the migration of applications to AWS. Orchestrator helps you reduce migration costs and time by removing many of the manual tasks involved in migrating large-scale enterprise applications, managing dependencies between different tools, and providing visibility into migration progress in one place. Use predefined and customizable workflow templates in Orchestrator that offer a prescribed set of migration tasks, migration tools, and automation opportunities to orchestrate complex workflows and interdependent tasks, simplifying the process of migrating to AWS. /migration-hub/faqs/;What are the benefits of using AWS Migration Hub Orchestrator?;Orchestrator helps you accelerate migrations, simplify the migration process, and adapt the migration tools and process for your use cases. /migration-hub/faqs/;Why should I use AWS Migration Hub Orchestrator?;Orchestrator simplifies and accelerates the migration of applications to AWS. /migration-hub/faqs/;How do I use Orchestrator?;You can access Orchestrator from the AWS Migration Hub console or the AWS Command Line Interface (CLI). Use Orchestrator to complete the prerequisites of discovering or importing source servers, grouping the discovered servers into applications, and installing a plugin in the source environment. Next, pick one of the predefined workflow templates to create a workflow to orchestrate the application migration.
If you wish, you can also specify custom steps to be automated or manually completed as part of the workflow. Once you define the workflow, you can run, pause, or delete it. You can also track the status of the workflow at step level and step group level in Orchestrator. /migration-hub/faqs/;What is a workflow template in Orchestrator?;A workflow template is a playbook with a prescribed set of migration tasks, dependencies, suitable migration tools, and recommended automation opportunities. For example, the predefined workflow template for migrating SAP NetWeaver–based applications with HANA databases includes step-by-step tasks for automated validation of connectivity between source servers and plugins, the ability to provision a new SAP environment using AWS Launch Wizard, automated validation of source and target environments, automated migration of the HANA database and applications, and post-migration validation. /migration-hub/faqs/;Which predefined workflow templates are provided in Orchestrator?;Orchestrator currently supports five predefined workflow templates that you can use. The first template helps you migrate SAP NetWeaver–based applications with HANA databases using AWS Launch Wizard and HANA System Replication. The second template helps you accelerate the rehosting of any applications using AWS Application Migration Service (MGN). The third and fourth templates help you replatform your SQL Server databases to Amazon RDS and rehost your SQL Server databases to Amazon EC2 using native backup and restore. The fifth template helps you import your on-premises virtual machine (VM) images to AWS with a console-based experience for generating an Amazon Machine Image (AMI) from your VM image that you have built to meet your IT security, configuration management, and compliance requirements. All of these templates include predefined automation of tasks with an option to add new steps and automation scripts. /server-migration-service/faqs/;What are the main differences between AWS Application Migration Service and AWS Server Migration Service?;Application Migration Service (MGN) utilizes continuous, block-level replication and enables short cutover windows measured in minutes. Server Migration Service (SMS) utilizes incremental, snapshot-based replication and enables cutover windows measured in hours. /server-migration-service/faqs/;How do I get started with AWS Server Migration Service?;Start server migration with the AWS Command Line Interface. Visit the User Guide for more details, including how to replicate a server. /server-migration-service/faqs/;What is the output of AWS Server Migration Service?;Each server volume replicated is saved as a new Amazon Machine Image (AMI), which can be launched as an EC2 instance (virtual machine) in the AWS cloud. If you are using application groupings, Server Migration Service will launch the servers in a CloudFormation stack using an auto-generated CloudFormation template. /server-migration-service/faqs/;What kind of servers can be migrated to AWS using AWS Server Migration Service?;Currently, you can migrate virtual machines from VMware vSphere, Windows Hyper-V, or Microsoft Azure to AWS using AWS Server Migration Service. /server-migration-service/faqs/;In what Regions is AWS Server Migration Service available?;Refer to the Region Table. /server-migration-service/faqs/;How do I track the status of the migration?;You can view details of all running replication jobs using the get-replication-jobs command. 
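As a hedged illustration of the status check described above (not an official snippet), the same information can be pulled with the AWS SDK for Python; default credentials and region are assumed to be configured.

import boto3

# List every SMS replication job and print its ID, state, and latest AMI (if any).
sms = boto3.client("sms")
response = sms.get_replication_jobs()
for job in response["replicationJobList"]:
    print(job["replicationJobId"], job["state"], job.get("latestAmiId", "no AMI yet"))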
/server-migration-service/faqs/;What operating systems does AWS Server Migration Service support?;"AWS Server Migration Service supports migrating Windows Server 2003, 2008, 2012, and 2016, and Windows 7, 8, and 10; Red Hat Enterprise Linux (RHEL), SUSE/SLES, CentOS, Ubuntu, Oracle Linux, Fedora, and Debian Linux operating systems. Learn more." /server-migration-service/faqs/;How long can I replicate my server volumes from on-premises to AWS?;You can replicate your on-premises servers to AWS for up to 90 days (per server). Usage time is calculated from the time a server replication begins until you terminate the replication job. After 90 days, your replication job will be automatically terminated. If you want to increase this limit, please discuss your use case with the AWS Support team. /server-migration-service/faqs/;What is the AWS Server Migration Service Connector?;The connector appliance is a pre-configured FreeBSD virtual machine (in OVA format). To set up AWS Server Migration Service, you need to first deploy the AWS Server Migration Service Connector virtual appliance on your on-premises VMware vCenter environment. /server-migration-service/faqs/;How many AWS Server Migration Service Connectors do I need to install?;You need to install one AWS Server Migration Service Connector for each VMware vCenter environment. /server-migration-service/faqs/;What permissions does the AWS Server Migration Service Connector require from VMware vCenter?;At minimum, the AWS Server Migration Service Connector requires the ability to create and delete snapshots on VMs that need to be migrated to AWS. Learn more. /server-migration-service/faqs/;Can I use a proxy for communicating with the AWS Server Migration Service?;"Yes. The AWS Server Migration Service Connector supports password-based proxy; it does not support NTLM-based proxy." /server-migration-service/faqs/;Are server volumes securely transferred from my data center to AWS?;Yes. Replicated server volumes are encrypted in transit by Transport Layer Security (TLS). /server-migration-service/faqs/;What data does the AWS Server Migration Service Connector capture from VMware vCenter?;The AWS Server Migration Service Connector captures VM inventory information from VMware vCenter and replicates server volumes to AWS. /server-migration-service/faqs/;How do I update the AWS Server Migration Service Connector?;"Updates are automatically downloaded and applied when you enable the auto-upgrade option; otherwise, they can be applied on-demand." /server-migration-service/faqs/;Is it secure to deploy the AWS Server Migration Service Connector on my virtualization environment?;Yes, the AWS Server Migration Service Connector only captures basic VM inventory information and snapshots of server volumes from VMware vCenter and does not gather any sensitive information. /server-migration-service/faqs/;Where are my replicated server volumes stored?;Your replicated server volumes are converted to AMIs and stored in your AWS account. /server-migration-service/faqs/;Does AWS Premium Support cover AWS Server Migration Service?;Yes, AWS Premium Support covers issues related to your use of the AWS Server Migration Service. Learn more. /server-migration-service/faqs/;What other support options are available?;Visit the AWS Community discussion forum. 
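A minimal sketch, assuming default credentials and region are configured, of checking which SMS Connectors are registered and healthy before starting replication (one Connector is needed per vCenter or Hyper-V environment, as noted above):

import boto3

# Print each registered SMS Connector with its health status and version.
sms = boto3.client("sms")
for connector in sms.get_connectors()["connectorList"]:
    print(connector["connectorId"], connector.get("status"), connector.get("version"))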
/server-migration-service/faqs/;What are the key benefits of using AWS Server Migration Service for migrating on-premises Hyper-V VMs?;By automating incremental replication of live server volumes to the AWS cloud as AMIs, SMS allows customers to speed up the migration process and significantly reduce the manual labor involved in migration. SMS orchestrates server migrations by allowing customers to schedule replications and track the progress of a group of servers, alleviating the logistical burden of coordinating large-scale server migrations. With the support of incremental replication, customers are able to test server migrations easily. /server-migration-service/faqs/;Who can benefit from AWS Server Migration Service for Hyper-V?;Any customer who is looking to migrate their Microsoft Hyper-V virtual machines, managed by SCVMM or running on standalone Hyper-V hosts, to AWS will benefit from Hyper-V support in SMS. This may include enterprise customers, system integrators, and IT consulting firms who help enterprise customers migrate Hyper-V workloads to AWS. /server-migration-service/faqs/;What components does AWS Server Migration Service have for Hyper-V VM migration?;SMS has an on-premises appliance, the SMS Connector, which talks to the service in AWS. The Connector incrementally transfers volumes of running Hyper-V VMs to the SMS service, and the service creates the AMI incrementally from the transferred volume. /server-migration-service/faqs/;Where can I download the SMS Connector for Hyper-V VM migration?;You can visit the AWS User Guide for the instructions to download SMS Connectors. /server-migration-service/faqs/;What permissions are required in System Center to deploy SMS Connector and start migration?;The SMS Connector securely communicates with the SCVMM server using the Windows Remote Management protocol (WinRM). The Connector requires a non-admin AD user that is added to the “Remote Management Users” group on the SCVMM host, has limited permissions on the CIMV2 and SCVMM WMI objects, and is a part of the “Delegated administrator” group within SCVMM. A firewall port needs to be opened on the SCVMM server to allow for secure transfer of remote commands from the Connector, which is deployed on a private network within your datacenter. The AD user also needs read permissions on the VM data store on the Hyper-V machine. For additional security, customers may configure WinRM to allow only encrypted traffic over SSL using a self-signed certificate and limit access to the Connector IP/hostname alone. For more details, please refer to the SMS technical documentation. /server-migration-service/faqs/;Do I need to have SCVMM for AWS SMS to work with my Hyper-V hosts?;No. The SMS Connector can be configured to use SCVMM or standalone Hyper-V hosts. /server-migration-service/faqs/;I’m operating both VMware and Hyper-V environments. Can I migrate VMware VMs and Hyper-V VMs simultaneously?;Yes, but you will need two separate SMS Connectors to simultaneously migrate VMs from both VMware and Hyper-V environments. /server-migration-service/faqs/;What versions of Hyper-V, a component of Windows Server, does SMS support?;SMS supports Hyper-V running on Windows Server 2012 R2 and above. /server-migration-service/faqs/;What operating systems does AWS Server Migration Service support?;"AWS Server Migration Service supports migrating Windows Server 2003, 2008, 2012, and 2016, and Windows 7, 8, and 10; Red Hat Enterprise Linux (RHEL), SUSE/SLES, CentOS, Ubuntu, Oracle Linux, Fedora, and Debian Linux operating systems. Learn more." 
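A minimal sketch of the first step after deploying the Connector: importing the on-premises server catalog and listing the discovered VMs. This is an illustrative boto3 call sequence under assumed response field names, and it works the same for vCenter- and Hyper-V-sourced catalogs:

import boto3

sms = boto3.client("sms")
sms.import_server_catalog()                      # asks the Connector to (re)import the catalog
for server in sms.get_servers()["serverList"]:   # discovered servers appear here once imported
    print(server["serverId"], server.get("vmServer", {}).get("vmName"))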
/server-migration-service/faqs/;What is multi-server migration in AWS Server Migration Service?;AWS Server Migration Service now offers multi-server migration support that makes it easier and more cost-effective to migrate applications from on-premises datacenters to Amazon EC2. Multi-server migration provides you the ability to migrate entire application stacks as opposed to migrating each server individually. You can group servers into applications, replicate the entire application together, and monitor its migration status. You can also easily launch and configure the migrated application with an auto-generated CloudFormation Template. /server-migration-service/faqs/;How does multi-server support work in AWS Server Migration Service?;You can group on-premises servers into an application with one or more sub-groups, specify a replication frequency, provide configuration scripts, and specify a landing zone where the replicated application should be launched. Server Migration Service will orchestrate the migration of all the underlying servers that are part of the application groups, wait for all of the AMIs to be available, and create a CloudFormation Template that can be launched in the landing zone. Once you launch the application using the auto-generated CloudFormation Template, Server Migration Service will configure it based on your specified configuration scripts. /server-migration-service/faqs/;I already use AWS Server Migration Service. How does this capability benefit me?;With multi-server migration, you no longer have to write custom tooling to coordinate the migrations of multiple servers that make up your application—you can migrate all of your servers together as a single unit. You have the ability to launch all of the servers automatically through a CloudFormation Template and keep it continuously updated with the latest AMIs produced for each replication run. You can start using multi-server migration without any new setup. /server-migration-service/faqs/;How do I get started with using multi-server migration in Server Migration Service?;Once the on-premises server catalog is imported into Server Migration Service using the SMS Connector, you can get started by configuring an application from the CLI or APIs. If you are already using Server Migration Service and have configured an SMS role using a managed policy, no action is required. Multi-server migration requires a new role to be able to launch instances using CloudFormation. See the technical documentation for more details on how to set up permissions for Server Migration Service. /server-migration-service/faqs/;Can I migrate the applications defined in AWS Application Discovery Service using AWS Server Migration Service?;Currently, the application groupings defined using AWS Application Discovery Service are not available through Server Migration Service. However, support is being added for applications discovered/defined in Application Discovery Service to be automatically available in Server Migration Service for multi-server migration. /server-migration-service/faqs/;Can I customize the CloudFormation Template that AWS Server Migration Service auto-generates?;No, the ability to customize the auto-generated template is not available at this time. /server-migration-service/faqs/;How do I re-configure my application after I launch it in the AWS cloud?;You can provide a custom configuration script per server while defining an application. 
Once the server is migrated and launched in the AWS cloud, the service will run the configuration script on your behalf. For example, you can use a configuration script to update database connection strings on your application servers without having to log into each instance and update it manually. /server-migration-service/faqs/;What type of snapshot consistency does AWS Server Migration Service provide?;AWS Server Migration Service creates crash-consistent snapshots for the servers that are part of an application group. These snapshots are triggered around the same time for all the servers within an application. Additionally, for servers running the Windows operating system in VMware environments, you can use Volume Shadow Copy Service (VSS) to take application-consistent snapshots. /server-migration-service/faqs/;What is a replatforming assistant for Microsoft SQL Server?;Learn more about the replatforming assistant for Microsoft SQL Server in our latest What's New post. /vpc/faqs/;What is Amazon Virtual Private Cloud?;Amazon VPC lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address ranges, creation of subnets, and configuration of route tables and network gateways. You can also create a hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter. /vpc/faqs/;What are the components of Amazon VPC?;Amazon VPC comprises a variety of objects that will be familiar to customers with existing networks: /vpc/faqs/;How will I be charged and billed for my use of Amazon VPC?;"There are no additional charges for creating and using the VPC itself. Usage charges for other Amazon Web Services, including Amazon EC2, still apply at published rates for those resources, including data transfer charges. If you connect your VPC to your corporate datacenter using the optional hardware VPN connection, pricing is per VPN connection-hour (the amount of time you have a VPN connection in the ""available"" state). Partial hours are billed as full hours. Data transferred over VPN connections will be charged at standard AWS Data Transfer rates. For VPC-VPN pricing information, please visit the pricing section of the Amazon VPC product page." /vpc/faqs/;What usage charges will I incur if I use other AWS services, such as Amazon S3, from Amazon EC2 instances in my VPC?;Usage charges for other Amazon Web Services, including Amazon EC2, still apply at published rates for those resources. Data transfer charges are not incurred when accessing Amazon Web Services, such as Amazon S3, via your VPC’s Internet gateway. /vpc/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /vpc/faqs/;What are the connectivity options for my Amazon VPC?;You may connect your Amazon VPC to: /vpc/faqs/;What IP address ranges can I use within my Amazon VPC?;You can use any IPv4 address range, including RFC 1918 or publicly routable IP ranges, for the primary CIDR block. For the secondary CIDR blocks, certain restrictions apply. 
Publicly routable IP blocks are only reachable via the Virtual Private Gateway and cannot be accessed over the Internet through the Internet gateway. AWS does not advertise customer-owned IP address blocks to the Internet. You can allocate up to 5 Amazon-provided or BYOIP IPv6 GUA CIDR blocks to a VPC by calling the relevant API or via the AWS Management Console. /vpc/faqs/;How do I assign IP address ranges to Amazon VPCs?;You assign a single Classless Inter-Domain Routing (CIDR) IP address range as the primary CIDR block when you create a VPC and can add up to four (4) secondary CIDR blocks after creation of the VPC. You address subnets within a VPC from these CIDR ranges. Please note that while you can create multiple VPCs with overlapping IP address ranges, doing so will prohibit you from connecting these VPCs to a common home network via the hardware VPN connection. For this reason, we recommend using non-overlapping IP address ranges. You can allocate up to 5 Amazon-provided or BYOIP IPv6 CIDR blocks to your VPC. /vpc/faqs/;What IP address ranges are assigned to a default Amazon VPC?;Default VPCs are assigned a CIDR range of 172.31.0.0/16. Default subnets within a default VPC are assigned /20 netblocks within the VPC CIDR range. /vpc/faqs/;Can I advertise my VPC public IP address range to the internet and route the traffic through my datacenter, via the AWS Site-to-Site VPN, and to my Amazon VPC?;Yes, you can route traffic via the AWS Site-to-Site VPN connection and advertise the address range from your home network. /vpc/faqs/;How large of a VPC can I create?;Currently, Amazon VPC supports five (5) IP address ranges, one (1) primary and four (4) secondary for IPv4. Each of these ranges can be between /28 (in CIDR notation) and /16 in size. The IP address ranges of your VPC should not overlap with the IP address ranges of your existing network. /vpc/faqs/;Can I change the size of a VPC?;Yes. You can expand your existing VPC by adding four (4) secondary IPv4 IP ranges (CIDRs) to your VPC. You can shrink your VPC by deleting the secondary CIDR blocks you have added to your VPC. Likewise, you can add up to five (5) additional IPv6 IP ranges (CIDRs) to your VPC. You can shrink your VPC by deleting these additional ranges. /vpc/faqs/;How many subnets can I create per VPC?;Currently you can create 200 subnets per VPC. If you would like to create more, please submit a case at the support center. /vpc/faqs/;Is there a limit on how large or small a subnet can be?;The minimum size of a subnet is a /28 (or 14 IP addresses) for IPv4. Subnets cannot be larger than the VPC in which they are created. /vpc/faqs/;Can I use all the IP addresses that I assign to a subnet?;No. Amazon reserves the first four (4) IP addresses and the last one (1) IP address of every subnet for IP networking purposes. /vpc/faqs/;How do I assign private IP addresses to Amazon EC2 instances within a VPC?;When you launch an Amazon EC2 instance within a subnet that is not IPv6-only, you may optionally specify the primary private IPv4 address for the instance. If you do not specify the primary private IPv4 address, AWS automatically assigns one from the IPv4 address range you assign to that subnet. You can assign secondary private IPv4 addresses when you launch an instance, when you create an Elastic Network Interface, or any time after the instance has been launched or the interface has been created. 
If you launch an Amazon EC2 instance within an IPv6-only subnet, AWS automatically assigns it an address from the Amazon-provided IPv6 GUA CIDR of that subnet. The instance’s IPv6 GUA will remain private unless you make it reachable to/from the internet with the right security group, NACL, and route table configuration. /vpc/faqs/;Can I change the private IP addresses of an Amazon EC2 instance while it is running and/or stopped within a VPC?;For an instance launched in an IPv4 or dual-stack subnet, the primary private IPv4 address is retained for the instance's or interface's lifetime. Secondary private IPv4 addresses can be assigned, unassigned, or moved between interfaces or instances at any time. For an instance launched in an IPv6-only subnet, the assigned IPv6 GUA, which is also the first IP address on the instance's primary network interface, can be modified by associating a new IPv6 GUA and removing the existing IPv6 GUA at any time. /vpc/faqs/;If an Amazon EC2 instance is stopped within a VPC, can I launch another instance with the same IP address in the same VPC?;No. An IPv4 address assigned to a running instance can only be used again by another instance once that original running instance is in a “terminated” state. However, the IPv6 GUA assigned to a running instance can be used again by another instance after it is removed from the first instance. /vpc/faqs/;Can I assign IP addresses for multiple instances simultaneously?;No. You can specify the IP address of one instance at a time when launching the instance. /vpc/faqs/;Can I assign any IP address to an instance?;You can assign any IP address to your instance as long as it is: /vpc/faqs/;Can I assign multiple IP addresses to an instance?;Yes. You can assign one or more secondary private IP addresses to an Elastic Network Interface or an EC2 instance in Amazon VPC. The number of secondary private IP addresses you can assign depends on the instance type. See the EC2 User Guide for more information on the number of secondary private IP addresses that can be assigned per instance type. /vpc/faqs/;Can I assign one or more Elastic IP (EIP) addresses to VPC-based Amazon EC2 instances?;Yes, however, the EIP addresses will only be reachable from the Internet (not over the VPN connection). Each EIP address must be associated with a unique private IP address on the instance. EIP addresses should only be used on instances in subnets configured to route their traffic directly to the Internet gateway. EIPs cannot be used on instances in subnets configured to use a NAT gateway or a NAT instance to access the Internet. This is applicable only for IPv4. Amazon VPCs do not support EIPs for IPv6 at this time. /vpc/faqs/;What is the Bring Your Own IP feature?;Bring Your Own IP (BYOIP) enables customers to move all or part of their existing publicly routable IPv4 or IPv6 address space to AWS for use with their AWS resources. Customers will continue to own the IP range. Customers can create Elastic IPs from the IPv4 space they bring to AWS and use them with EC2 instances, NAT Gateways, and Network Load Balancers. Customers can also associate up to 5 CIDRs to a VPC from the IPv6 space they bring to AWS. Customers will continue to have access to Amazon-supplied IPs and can choose to use BYOIP Elastic IPs, Amazon-supplied IPs, or both. 
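A minimal sketch of allocating an Elastic IP from a BYOIP range, assuming the range has already been provisioned and advertised in your account; the pool selection below is illustrative:

import boto3

ec2 = boto3.client("ec2")

# BYOIP IPv4 ranges appear as public IPv4 pools in your account.
pools = ec2.describe_public_ipv4_pools()["PublicIpv4Pools"]
pool_id = pools[0]["PoolId"]

# Allocate an Elastic IP from that pool; it can then be used like any other EIP.
eip = ec2.allocate_address(Domain="vpc", PublicIpv4Pool=pool_id)
print(eip["AllocationId"], eip["PublicIp"])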
/vpc/faqs/;Why should I use BYOIP?;You may want to bring your own IP addresses to AWS for the following reasons: IP Reputation: Many customers consider the reputation of their IP addresses to be a strategic asset and want to use those IPs on AWS with their resources. For example, customers who maintain services such as an outbound e-mail MTA and have high-reputation IPs can now bring over their IP space and successfully maintain their existing sending success rate. /vpc/faqs/;How can I use IP addresses from a BYOIP prefix with AWS resources?;Your BYOIP prefix will show as an IP pool in your account. You can create Elastic IPs (EIPs) from the IPv4 pool and use them like regular Elastic IPs (EIPs) with any AWS resource that supports EIPs. Currently, EC2 instances, NAT Gateways, and Network Load Balancers support EIPs. You can associate CIDRs from your IPv6 pool to your VPC. The IPv6 addresses brought over via BYOIP work exactly the same as Amazon-provided IPv6 addresses. For example, you can associate these IPv6 addresses to subnets, Elastic Network Interfaces (ENIs), and EC2 instances within your VPC. /vpc/faqs/;What happens if I release a BYOIP Elastic IP?;When you release a BYOIP Elastic IP, it goes back to the BYOIP IP pool from which it was allocated. /vpc/faqs/;In which AWS Regions is BYOIP available?;The feature is currently available in the Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Canada (Central), Europe (Dublin), Europe (Frankfurt), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US West (Northern California), US East (N. Virginia), US East (Ohio), US West (Oregon), AWS GovCloud (US-West), and AWS GovCloud (US-East) Regions. /vpc/faqs/;Can a BYOIP prefix be shared with multiple VPCs in the same account?;Yes. You can use the BYOIP prefix with any number of VPCs in the same account. /vpc/faqs/;How many IP ranges can I bring via BYOIP?;You can bring a maximum of five IP ranges to your account. /vpc/faqs/;What is the most specific prefix that I can bring via BYOIP?;Via BYOIP, the most specific prefixes you can bring are a /24 IPv4 prefix and a /56 IPv6 prefix. If you intend to advertise your IPv6 prefix to the internet, the most specific IPv6 prefix is /48. /vpc/faqs/;Which RIR prefixes can I use for BYOIP?;You can use ARIN-, RIPE-, and APNIC-registered prefixes. /vpc/faqs/;Can I bring a reassigned or reallocated prefix?;We are not accepting reassigned or reallocated prefixes at this time. IP ranges should be a net type of direct allocation or direct assignment. /vpc/faqs/;Can I move a BYOIP prefix from one AWS Region to another?;Yes. You can do that by de-provisioning the BYOIP prefix from the current region and then provisioning it to the new region. /vpc/faqs/;What is VPC IP Address Manager (IPAM)?;You should use IPAM to make IP address management more efficient. Existing mechanisms that leverage spreadsheets or home-grown tools require manual work and are error-prone. With IPAM, as an example, you can roll out applications faster as your developers no longer need to wait for the central IP address administration team to allocate IP addresses. You can also detect overlapping IP addresses and fix them before there is a network outage. 
In addition, you can create alarms for IPAM to notify you if the address pools are nearing exhaustion or if resources fail to comply with allocation rules set on a pool. These are some of the many reasons you should use IPAM. /vpc/faqs/;Why should you use IPAM?;AWS IPAM provides the following features: /vpc/faqs/;Does Amazon provide contiguous CIDR blocks and how do they work with IPAM?;Yes, Amazon provides contiguous IPv6 CIDR blocks for VPC allocation. Contiguous CIDR blocks allow you to aggregate CIDRs into a single entry across networking and security constructs like access control lists, route tables, security groups, and firewalls. You can provision Amazon IPv6 CIDRs into a publicly scoped pool, and use all of the IPAM features to manage and monitor IP usage. Allocation of these CIDR blocks starts in /52 increments, and larger blocks are available upon request. For example, you can allocate a /52 CIDR from Amazon and use IPAM to share it across accounts and create VPCs in those accounts. /vpc/faqs/;Can I specify which subnet will use which gateway as its default?;Yes. You may create a default route for each subnet. The default route can direct traffic to egress the VPC via the Internet gateway, the virtual private gateway, or the NAT gateway. /vpc/faqs/;How do I secure Amazon EC2 instances running within my VPC?;Amazon EC2 security groups can be used to help secure instances within an Amazon VPC. Security groups in a VPC enable you to specify both inbound and outbound network traffic that is allowed to or from each Amazon EC2 instance. Traffic which is not explicitly allowed to or from an instance is automatically denied. /vpc/faqs/;What are the differences between security groups in a VPC and network ACLs in a VPC?;Security groups in a VPC specify which traffic is allowed to or from an Amazon EC2 instance. Network ACLs operate at the subnet level and evaluate traffic entering and exiting a subnet. Network ACLs can be used to set both Allow and Deny rules. Network ACLs do not filter traffic between instances in the same subnet. In addition, network ACLs perform stateless filtering while security groups perform stateful filtering. /vpc/faqs/;What is the difference between stateful and stateless filtering?;Stateful filtering tracks the origin of a request and can automatically allow the reply to the request to be returned to the originating computer. For example, a stateful filter that allows inbound traffic to TCP port 80 on a webserver will allow the return traffic, usually on a high-numbered port (e.g., destination TCP port 63912), to pass through the stateful filter between the client and the webserver. The filtering device maintains a state table that tracks the origin and destination port numbers and IP addresses. Only one rule is required on the filtering device: Allow traffic inbound to the web server on TCP port 80. /vpc/faqs/;Within Amazon VPC, can I use SSH key pairs created for instances within Amazon EC2, and vice versa?;Yes. /vpc/faqs/;Can Amazon EC2 instances within a VPC communicate with Amazon EC2 instances not within a VPC?;Yes. If an Internet gateway has been configured, Amazon VPC traffic bound for Amazon EC2 instances not within a VPC traverses the Internet gateway and then enters the public AWS network to reach the EC2 instance. 
If an Internet gateway has not been configured, or if the instance is in a subnet configured to route through the virtual private gateway, the traffic traverses the VPN connection, egresses from your datacenter, and then re-enters the public AWS network. /vpc/faqs/;Can Amazon EC2 instances within a VPC in one region communicate with Amazon EC2 instances within a VPC in another region?;Yes. Instances in one region can communicate with each other using Inter-Region VPC Peering, public IP addresses, NAT gateways, NAT instances, VPN connections, or Direct Connect connections. /vpc/faqs/;Can Amazon EC2 instances within a VPC communicate with Amazon S3?;Yes. There are multiple options for your resources within a VPC to communicate with Amazon S3. You can use the VPC Endpoint for S3, which makes sure all traffic remains within Amazon's network and enables you to apply additional access policies to your Amazon S3 traffic. You can use an Internet gateway to enable Internet access from your VPC so that instances in the VPC can communicate with Amazon S3. You can also make all traffic to Amazon S3 traverse the Direct Connect or VPN connection, egress from your datacenter, and then re-enter the public AWS network. /vpc/faqs/;Can I monitor the network traffic in my VPC?;Yes. You can use the Amazon VPC traffic mirroring and Amazon VPC flow logs features to monitor the network traffic in your Amazon VPC. /vpc/faqs/;What is Amazon VPC flow logs?;VPC flow logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow logs data can be published to either Amazon CloudWatch Logs or Amazon S3. You can monitor your VPC flow logs to gain operational visibility about your network dependencies and traffic patterns, detect anomalies and prevent data leakage, or troubleshoot network connectivity and configuration issues. The enriched metadata in flow logs helps you gain additional insights about who initiated your TCP connections, and the actual packet-level source and destination for traffic flowing through intermediate layers such as the NAT Gateway. You can also archive your flow logs to meet compliance requirements. To learn more about Amazon VPC flow logs, please refer to the documentation. /vpc/faqs/;How can I use VPC flow logs?;You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored. While creating a flow log subscription, you can choose the metadata fields you wish to capture, the maximum aggregation interval, and your preferred log destination. You can also choose to capture all traffic or only accepted or rejected traffic. You can use tools like CloudWatch Log Insights or CloudWatch Contributor Insights to analyze your VPC flow logs delivered to CloudWatch Logs. You can use tools like Amazon Athena or Amazon QuickSight to query and visualize your VPC flow logs delivered to Amazon S3. You can also build a custom downstream application to analyze your logs or use partner solutions such as Splunk, Datadog, Sumo Logic, Cisco StealthWatch, Checkpoint CloudGuard, New Relic, etc. /vpc/faqs/;Do VPC flow logs support AWS Transit Gateway?;Yes, you can create a VPC flow log for a Transit Gateway or for an individual Transit Gateway attachment. 
With this feature, Transit Gateway can export detailed information such as source/destination IPs, ports, protocol, traffic counters, timestamps, and various metadata for network flows traversing the Transit Gateway. To learn more about Amazon VPC flow logs support for Transit Gateway, please refer to the documentation. /vpc/faqs/;Does using Flow Logs impact my network latency or performance?;Flow log data is collected outside of the path of your network traffic, and therefore does not affect network throughput or latency. You can create or delete flow logs without any risk of impact to network performance. /vpc/faqs/;How much do VPC flow logs cost?;Data ingestion and archival charges for vended logs apply when you publish flow logs to CloudWatch Logs or to Amazon S3. For more information and examples, see Amazon CloudWatch Pricing. You can also track charges from publishing flow logs using cost allocation tags. /vpc/faqs/;What is Amazon VPC traffic mirroring?;Amazon VPC traffic mirroring makes it easy for customers to replicate network traffic to and from an Amazon EC2 instance and forward it to out-of-band security and monitoring appliances for use cases such as content inspection, threat monitoring, and troubleshooting. These appliances can be deployed on an individual EC2 instance or a fleet of instances behind a Network Load Balancer (NLB) with a User Datagram Protocol (UDP) listener. /vpc/faqs/;How does Amazon VPC traffic mirroring work?;The traffic mirroring feature copies network traffic from the Elastic Network Interface (ENI) of EC2 instances in your Amazon VPC. The mirrored traffic can be sent to another EC2 instance or to an NLB with a UDP listener. Traffic mirroring encapsulates all copied traffic with VXLAN headers. The mirror source and destination (monitoring appliances) can be in the same VPC or in a different VPC, connected via VPC peering or AWS Transit Gateway. /vpc/faqs/;Which resources can be monitored with Amazon VPC traffic mirroring?;Traffic mirroring supports network packet captures at the Elastic Network Interface (ENI) level for EC2 instances. Refer to the Traffic Mirroring documentation for the EC2 instances that support Amazon VPC Traffic Mirroring. /vpc/faqs/;What type of appliances are supported with Amazon VPC traffic mirroring?;Customers can either use open source tools or choose from a wide range of monitoring solutions available on AWS Marketplace. Traffic mirroring allows customers to stream replicated traffic to any network packet collector/broker or analytics tool, without requiring them to install vendor-specific agents. /vpc/faqs/;How is Amazon VPC traffic mirroring different from Amazon VPC flow logs?;Amazon VPC flow logs allow customers to collect, store, and analyze network flow logs. The information captured in flow logs includes information about allowed and denied traffic, source and destination IP addresses, ports, protocol number, packet and byte counts, and an action (accept or reject). You can use this feature to troubleshoot connectivity and security issues and to make sure that the network access rules are working as expected. /vpc/faqs/;Can a VPC span multiple Availability Zones?;Yes. /vpc/faqs/;Can a subnet span Availability Zones?;No. A subnet must reside within a single Availability Zone. /vpc/faqs/;How do I specify which Availability Zone my Amazon EC2 instances are launched in?;When you launch an Amazon EC2 instance, you must specify the subnet in which to launch the instance. 
The instance will be launched in the Availability Zone associated with the specified subnet. /vpc/faqs/;How do I determine which Availability Zone my subnets are located in?;"When you create a subnet you must specify the Availability Zone in which to place the subnet. When using the VPC Wizard, you can select the subnet's Availability Zone in the wizard confirmation screen. When using the API or the CLI you can specify the Availability Zone for the subnet as you create the subnet. If you don’t specify an Availability Zone, the default ""No Preference"" option will be selected and the subnet will be created in an available Availability Zone in the region." /vpc/faqs/;Am I charged for network bandwidth between instances in different subnets?;If the instances reside in subnets in different Availability Zones, you will be charged $0.01 per GB for data transfer. /vpc/faqs/;When I call DescribeInstances(), do I see all of my Amazon EC2 instances, including those in EC2-Classic and EC2-VPC?;Yes. DescribeInstances() will return all running Amazon EC2 instances. You can differentiate EC2-Classic instances from EC2-VPC instances by an entry in the subnet field. If there is a subnet ID listed, the instance is within a VPC. /vpc/faqs/;When I call DescribeVolumes(), do I see all of my Amazon EBS volumes, including those in EC2-Classic and EC2-VPC?;Yes. DescribeVolumes() will return all your EBS volumes. /vpc/faqs/;How many Amazon EC2 instances can I use within a VPC?;For instances that require IPv4 addressing, you can run any number of Amazon EC2 instances within a VPC, so long as your VPC is appropriately sized to have an IPv4 address assigned to each instance. You are initially limited to launching 20 Amazon EC2 instances at any one time and a maximum VPC size of /16 (65,536 IPs). If you would like to increase these limits, please complete the following form. For IPv6-only instances, the VPC size of /56 provides you the ability to launch a virtually unlimited number of Amazon EC2 instances. /vpc/faqs/;Can I use my existing AMIs in Amazon VPC?;You can use AMIs in Amazon VPC that are registered within the same region as your VPC. For example, you can use AMIs registered in us-east-1 with a VPC in us-east-1. More information is available in the Amazon EC2 Region and Availability Zone FAQ. /vpc/faqs/;Can I use my existing Amazon EBS snapshots?;Yes, you may use Amazon EBS snapshots if they are located in the same region as your VPC. More details are available in the Amazon EC2 Region and Availability Zone FAQ. /vpc/faqs/;Can I boot an Amazon EC2 instance from an Amazon EBS volume within Amazon VPC?;Yes, however, an instance launched in a VPC using an Amazon EBS-backed AMI maintains the same IP address when stopped and restarted. This is in contrast to similar instances launched outside a VPC, which get a new IP address. The IP addresses for any stopped instances in a subnet are considered unavailable. /vpc/faqs/;Can I use Amazon EC2 Reserved Instances with Amazon VPC?;Yes. You can reserve an instance in Amazon VPC when you purchase Reserved Instances. When computing your bill, AWS does not distinguish whether your instance runs in Amazon VPC or standard Amazon EC2. AWS automatically optimizes which instances are charged at the lower Reserved Instance rate to ensure you always pay the lowest amount. However, your instance reservation will be specific to Amazon VPC. Please see the Reserved Instances page for further details. /vpc/faqs/;Can I employ Amazon CloudWatch within Amazon VPC?;Yes. 
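A minimal sketch of choosing the Availability Zone when creating subnets, as described above; the VPC ID, CIDR blocks, and zone names are illustrative placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create one subnet per Availability Zone, each with its own CIDR block.
for az, cidr in [("us-east-1a", "10.0.0.0/24"), ("us-east-1b", "10.0.1.0/24")]:
    subnet = ec2.create_subnet(VpcId="vpc-0123456789abcdef0", CidrBlock=cidr, AvailabilityZone=az)
    print(subnet["Subnet"]["SubnetId"], subnet["Subnet"]["AvailabilityZone"])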
/vpc/faqs/;Can I employ Auto Scaling within Amazon VPC?;Yes. /vpc/faqs/;Can I launch Amazon EC2 Cluster Instances in a VPC?;Yes. Cluster instances are supported in Amazon VPC, however, not all instance types are available in all regions and Availability Zones. /vpc/faqs/;What are instance hostnames?;When you launch an instance, it is assigned a hostname. There are two options available, an IP based name or a Resource based name, and this parameter is configurable at instance launch. The IP based name uses a form of the Private IPv4 address while the Resource based name uses a form of the instance-id. /vpc/faqs/;Can I change the instance hostname of my Amazon EC2 instance?;Yes, you can change the hostname of an instance from IP based to Resource based or vice versa by stopping the instance and then changing the resource based naming options. /vpc/faqs/;Can I use the instance hostnames as DNS hostnames?;Yes, the instance hostname can be used as a DNS hostname. For instances launched in an IPv4-only or dual-stack subnet, the IP based name always resolves to the Private IPv4 address on the primary network interface of the instance and this cannot be turned off. Additionally, the Resource based name can be configured to resolve to either the Private IPv4 address on the primary network interface, or the first IPv6 GUA on the primary network interface, or both. For instances launched in an IPv6-only subnet, the Resource based name will be configured to resolve to the first IPv6 GUA on the primary network interface. /vpc/faqs/;What is a default VPC?;A default VPC is a logically isolated virtual network in the AWS cloud that is automatically created for your AWS account the first time you provision Amazon EC2 resources. When you launch an instance without specifying a subnet-ID, your instance will be launched in your default VPC. /vpc/faqs/;What are the benefits of a default VPC?;When you launch resources in a default VPC, you can benefit from the advanced networking functionalities of Amazon VPC (EC2-VPC) with the ease of use of Amazon EC2 (EC2-Classic). You can enjoy features such as changing security group membership on the fly, security group egress filtering, multiple IP addresses, and multiple network interfaces without having to explicitly create a VPC and launch instances in the VPC. /vpc/faqs/;What accounts are enabled for default VPC?;If your AWS account was created after March 18, 2013, your account may be able to launch resources in a default VPC. See this Forum Announcement to determine which regions have been enabled for the default VPC feature set. Also, accounts created prior to the listed dates may utilize default VPCs in any default VPC enabled region in which you’ve not previously launched EC2 instances or provisioned Amazon Elastic Load Balancing, Amazon RDS, Amazon ElastiCache, or Amazon Redshift resources. /vpc/faqs/;How can I tell if my account is configured to use a default VPC?;"The Amazon EC2 console indicates which platforms you can launch instances in for the selected region, and whether you have a default VPC in that region. Verify that the region you'll use is selected in the navigation bar. On the Amazon EC2 console dashboard, look for ""Supported Platforms"" under ""Account Attributes"". If there are two values, EC2-Classic and EC2-VPC, you can launch instances into either platform. If there is one value, EC2-VPC, you can launch instances only into EC2-VPC. Your default VPC ID will be listed under ""Account Attributes"" if your account is configured to use a default VPC. 
You can also use the EC2 DescribeAccountAttributes API or CLI to describe your supported platforms." /vpc/faqs/;Will I need to know anything about Amazon VPC in order to use a default VPC?;No. You can use the AWS Management Console, AWS EC2 CLI, or the Amazon EC2 API to launch and manage EC2 instances and other AWS resources in a default VPC. AWS will automatically create a default VPC for you and will create a default subnet in each Availability Zone in the AWS region. Your default VPC will be connected to an Internet gateway and your instances will automatically receive public IP addresses, just like EC2-Classic. /vpc/faqs/;What are the differences between instances launched in EC2-Classic and EC2-VPC?;See Differences between EC2-Classic and EC2-VPC in the EC2 User Guide. /vpc/faqs/;Do I need to have a VPN connection to use a default VPC?;No. Default VPCs are attached to the Internet and all instances launched in default subnets in the default VPC automatically receive public IP addresses. You can add a VPN connection to your default VPC if you choose. /vpc/faqs/;Can I create other VPCs and use them in addition to my default VPC?;Yes. To launch an instance into nondefault VPCs you must specify a subnet-ID during instance launch. /vpc/faqs/;Can I create additional subnets in my default VPC, such as private subnets?;Yes. To launch into nondefault subnets, you can target your launches using the console or the --subnet option from the CLI, API, or SDK. /vpc/faqs/;How many default VPCs can I have?;"You can have one default VPC in each AWS region where your Supported Platforms attribute is set to ""EC2-VPC""." /vpc/faqs/;What is the IP range of a default VPC?;The default VPC CIDR is 172.31.0.0/16. Default subnets use /20 CIDRs within the default VPC CIDR. /vpc/faqs/;How many default subnets are in a default VPC?;One default subnet is created for each Availability Zone in your default VPC. /vpc/faqs/;Can I specify which VPC is my default VPC?;Not at this time. /vpc/faqs/;Can I specify which subnets are my default subnets?;Not at this time. /vpc/faqs/;Can I delete a default VPC?;Yes, you can delete a default VPC. Once deleted, you can create a new default VPC directly from the VPC Console or by using the CLI. This will create a new default VPC in the region. This does not restore the previous VPC that was deleted. /vpc/faqs/;Can I delete a default subnet?;Yes, you can delete a default subnet. Once deleted, you can create a new default subnet in the availability zone by using the CLI or SDK. This will create a new default subnet in the availability zone specified. This does not restore the previous subnet that was deleted. /vpc/faqs/;I have an existing EC2-Classic account. Can I get a default VPC?;"The simplest way to get a default VPC is to create a new account in a region that is enabled for default VPCs, or use an existing account in a region you've never been to before, as long as the Supported Platforms attribute for that account in that region is set to ""EC2-VPC""." /vpc/faqs/;I really want a default VPC for my existing EC2 account. Is that possible?;Yes, however, we can only enable an existing account for a default VPC if you have no EC2-Classic resources for that account in that region. Additionally, you must terminate all non-VPC provisioned Elastic Load Balancers, Amazon RDS, Amazon ElastiCache, and Amazon Redshift resources in that region. 
After your account has been configured for a default VPC, all future resource launches, including instances launched via Auto Scaling, will be placed in your default VPC. To request that your existing account be set up with a default VPC, please go to Account and Billing -> Service: Account -> Category: Convert EC2 Classic to VPC and raise a request. We will review your request, your existing AWS services, and your EC2-Classic presence, and guide you through the next steps. /vpc/faqs/;How are IAM accounts impacted by default VPC?;If your AWS account has a default VPC, any IAM accounts associated with your AWS account use the same default VPC as your AWS account. /vpc/faqs/;What is EC2-Classic?;EC2-Classic is a flat network that we launched with EC2 in the summer of 2006. With EC2-Classic, your instances run in a single, flat network that you share with other customers. Over time, inspired by our customers’ evolving needs, we launched Amazon Virtual Private Cloud (VPC) in 2009 to allow you to run instances in a virtual private cloud that's logically isolated to your AWS account. Today, while the majority of our customers use Amazon VPC, we have a few customers who still use EC2-Classic. /vpc/faqs/;What’s changing?;We are retiring Amazon EC2-Classic on August 15, 2022, and we need you to migrate any EC2 instances and other AWS resources running on EC2-Classic to Amazon VPC before this date. The following section provides more information on the EC2-Classic retirement as well as tools and resources to assist you with migration. /vpc/faqs/;How is my account impacted by the retirement of EC2-Classic?;"You are affected by this change only if you have EC2-Classic enabled on your account in any of the AWS regions. You can use the console or the describe-account-attributes command to check whether you have EC2-Classic enabled for an AWS region; please refer to this document for more details. If you do not have any active AWS resources running on EC2-Classic in any region, we request you to turn off EC2-Classic from your account for that region. Turning off EC2-Classic in a region allows you to launch a default VPC there. To do so, go to the AWS Support Center at console.aws.amazon.com/support, choose “Create case” and then “Account and billing support”, for “Type” choose “Account”, for “Category” choose “Convert EC2 Classic to VPC”, fill in the other details as required, and choose “Submit”. We will automatically turn off EC2-Classic from your account on October 30, 2021 for any AWS region where you have not had any AWS resources (EC2 Instances, Amazon Relational Database, AWS Elastic Beanstalk, Amazon Redshift, AWS Data Pipeline, Amazon EMR, AWS OpsWorks) on EC2-Classic since January 1, 2021. On the other hand, if you have AWS resources running on EC2-Classic, we request you to plan their migration to Amazon VPC as soon as possible. You will not be able to launch any instances or AWS services on the EC2-Classic platform beyond August 15, 2022. Any workloads or services in a running state will gradually lose access to all AWS services on EC2-Classic as we retire them beginning August 16, 2022." /vpc/faqs/;What are the benefits of moving from EC2-Classic to Amazon VPC?;Amazon VPC gives you complete control over your virtual network environment on AWS, logically isolated to your AWS account. In the EC2-Classic environment, your workloads are sharing a single flat network with other customers. 
The Amazon VPC environment offers many other advantages over the EC2-Classic environment, including the ability to select your own IP address space, public and private subnet configuration, and management of route tables and network gateways. All services and instances currently available in EC2-Classic have comparable services available in the Amazon VPC environment. Amazon VPC also offers a much wider selection of the latest generation of instances than EC2-Classic. Further information about Amazon VPC is available at this link. /vpc/faqs/;How do I migrate from EC2-Classic to VPC?;To help you migrate your resources, we have published playbooks and built solutions that you will find below. To migrate, you must recreate your EC2-Classic resources in your VPC. First, you can use this script to identify all resources provisioned in EC2-Classic across all regions in an account. You can then use the migration guide for the relevant AWS resources from below: /vpc/faqs/;What are the important dates I should be aware of?;We will take the following two actions ahead of the August 15, 2022 retirement date: /vpc/faqs/;Can I attach or detach one or more network interfaces to an EC2 instance while it’s running?;Yes. /vpc/faqs/;Can I have more than two network interfaces attached to my EC2 instance?;The total number of network interfaces that can be attached to an EC2 instance depends on the instance type. See the EC2 User Guide for more information on the number of allowed network interfaces per instance type. /vpc/faqs/;Can I attach a network interface in one Availability Zone to an instance in another Availability Zone?;Network interfaces can only be attached to instances residing in the same Availability Zone. /vpc/faqs/;Can I attach a network interface in one VPC to an instance in another VPC?;Network interfaces can only be attached to instances in the same VPC as the interface. /vpc/faqs/;Can I use Elastic Network Interfaces as a way to host multiple websites requiring separate IP addresses on a single instance?;Yes, however, this is not a use case best suited for multiple interfaces. Instead, assign additional private IP addresses to the instance and then associate EIPs to the private IPs as needed. /vpc/faqs/;Will I get charged for an Elastic IP Address that is associated to a network interface but the network interface isn’t attached to a running instance?;Yes. /vpc/faqs/;Can I detach the primary interface (eth0) on my EC2 instance?;No. You can attach and detach secondary interfaces (eth1-ethn) on an EC2 instance, but you can’t detach the eth0 interface. /vpc/faqs/;Can I create a peering connection to a VPC in a different region?;Yes. Peering connections can be created with VPCs in different regions. Inter-region VPC peering is available globally in all commercial regions (excluding China). /vpc/faqs/;Can I peer my VPC with a VPC belonging to another AWS account?;Yes, assuming the owner of the other VPC accepts your peering connection request. /vpc/faqs/;Can I peer two VPCs with matching IP address ranges?;No. Peered VPCs must have non-overlapping IP ranges. /vpc/faqs/;How much do VPC peering connections cost?;There is no charge for creating VPC peering connections, however, data transfer across peering connections is charged. See the Data Transfer section of the EC2 Pricing page for data transfer rates. /vpc/faqs/;Can I use AWS Direct Connect or hardware VPN connections to access VPCs I’m peered with?;No. “Edge to Edge routing” isn’t supported in Amazon VPC. 
Refer to the VPC Peering Guide for additional information. /vpc/faqs/;Do I need an Internet Gateway to use peering connections?;No. VPC peering connections do not require an Internet Gateway. /vpc/faqs/;Is VPC peering traffic within the region encrypted?;No. Traffic between instances in peered VPCs remains private and isolated – similar to how traffic between two instances in the same VPC is private and isolated. /vpc/faqs/;If I delete my side of a peering connection, will the other side still have access to my VPC?;No. Either side of the peering connection can terminate the peering connection at any time. Terminating a peering connection means traffic won’t flow between the two VPCs. /vpc/faqs/;If I peer VPC A to VPC B and I peer VPC B to VPC C, does that mean VPCs A and C are peered?;No. Transitive peering relationships are not supported. /vpc/faqs/;What if my peering connection goes down?;"AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck." /vpc/faqs/;Are there any bandwidth limitations for peering connections?;"Bandwidth between instances in peered VPCs is no different than bandwidth between instances in the same VPC. Note: A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs. Read more about Placement Groups." /vpc/faqs/;Is Inter-Region VPC Peering traffic encrypted?;Traffic is encrypted using modern AEAD (Authenticated Encryption with Associated Data) algorithms. Key agreement and key management are handled by AWS. /vpc/faqs/;How do DNS translations work with Inter-Region VPC Peering?;By default, a query for a public hostname of an instance in a peered VPC in a different region will resolve to a public IP address. Route 53 private DNS can be used to resolve to a private IP address with Inter-Region VPC Peering. /vpc/faqs/;Can I reference security groups across an Inter-Region VPC Peering connection?;No. Security groups cannot be referenced across an Inter-Region VPC Peering connection. /vpc/faqs/;Does Inter-Region VPC Peering support IPv6?;Yes. Inter-Region VPC Peering supports IPv6. /vpc/faqs/;Can Inter-Region VPC Peering be used with EC2-Classic Link?;No. Inter-Region VPC Peering cannot be used with EC2-ClassicLink. /vpc/faqs/;What does ClassicLink cost?;"There is no additional charge for using ClassicLink; however, existing cross-Availability Zone data transfer charges will apply. For more information, consult the EC2 pricing page." /vpc/faqs/;How do I use ClassicLink?;In order to use ClassicLink, you first need to enable at least one VPC in your account for ClassicLink. Then you associate a Security Group from the VPC with the desired EC2-Classic instance. The EC2-Classic instance is now linked to the VPC and is a member of the selected Security Group in the VPC. Your EC2-Classic instance cannot be linked to more than one VPC at the same time. /vpc/faqs/;Does the EC2-Classic instance become a member of the VPC?;The EC2-Classic instance does not become a member of the VPC. It becomes a member of the VPC Security Group that was associated with the instance. All the rules and references to the VPC Security Group apply to communication between the EC2-Classic instance and resources within the VPC. 
/vpc/faqs/;Can I use EC2 public DNS hostnames from my EC2-Classic and EC2-VPC instances to address each other, in order to communicate using private IP?;No. The EC2 public DNS hostname will not resolve to the private IP address of the EC2-VPC instance when queried from an EC2-Classic instance, and vice-versa. /vpc/faqs/;Are there any VPCs for which I cannot enable ClassicLink?;"Yes. ClassicLink cannot be enabled for a VPC that has a Classless Inter-Domain Routing (CIDR) range that is within the 10.0.0.0/8 range, with the exception of 10.0.0.0/16 and 10.1.0.0/16. In addition, ClassicLink cannot be enabled for any VPC that has a route table entry pointing to the 10.0.0.0/8 CIDR space to a target other than ""local""." /vpc/faqs/;Can traffic from an EC2-Classic instance travel through the Amazon VPC and egress through the Internet gateway, virtual private gateway, or to peered VPCs?;Traffic from an EC2-Classic instance can only be routed to private IP addresses within the VPC. It will not be routed to any destinations outside the VPC, including Internet gateway, virtual private gateway, or peered VPC destinations. /vpc/faqs/;Does ClassicLink affect the access control between the EC2-Classic instance, and other instances that are in the EC2-Classic platform?;ClassicLink does not change the access control defined for an EC2-Classic instance through its existing Security Groups from the EC2-Classic platform. /vpc/faqs/;Will ClassicLink settings on my EC2-Classic instance persist through stop/start cycles?;The ClassicLink connection will not persist through stop/start cycles of the EC2-Classic instance. The EC2-Classic instance will need to be linked back to a VPC after it is stopped and started. However, the ClassicLink connection will persist through instance reboot cycles. /vpc/faqs/;Will my EC2-Classic instance be assigned a new, private IP address after I enable ClassicLink?;There is no new private IP address assigned to the EC2-Classic instance. When you enable ClassicLink on an EC2-Classic instance, the instance retains and uses its existing private IP address to communicate with resources in a VPC. /vpc/faqs/;Does ClassicLink allow EC2-Classic Security Group rules to reference VPC Security Groups, or vice versa?;ClassicLink does not allow EC2-Classic Security Group rules to reference VPC Security Groups, or vice versa. /vpc/faqs/;How can I use AWS PrivateLink?;As a service user, you will need to create interface-type VPC endpoints for services that are powered by PrivateLink. These service endpoints will appear as Elastic Network Interfaces (ENIs) with private IPs in your VPCs. Once these endpoints are created, any traffic destined to these IPs will get privately routed to the corresponding AWS services. /vpc/faqs/;Which services are currently available on AWS PrivateLink?;The following AWS services support this feature: Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Kinesis Streams, Service Catalog, EC2 Systems Manager, Amazon SNS, and AWS DataSync. Many SaaS solutions support this feature as well. Please visit AWS Marketplace for more SaaS products powered by AWS PrivateLink. /vpc/faqs/;Can I privately access services powered by AWS PrivateLink over AWS Direct Connect?;Yes. Applications on your on-premises network can connect to the service endpoints in Amazon VPC over AWS Direct Connect. The service endpoints will automatically direct the traffic to AWS services powered by AWS PrivateLink. 
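To make the PrivateLink flow above concrete, here is a hedged boto3 sketch that creates an interface-type VPC endpoint for a PrivateLink-powered AWS service; the VPC, subnets, security group, and service name are illustrative placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint for a PrivateLink-powered service (Amazon SNS in this example).
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-11111111",                                  # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.sns",             # example PrivateLink service name
    SubnetIds=["subnet-aaaaaaaa", "subnet-bbbbbbbb"],      # one subnet per AZ creates one ENI per AZ
    SecurityGroupIds=["sg-0123456789abcdef0"],             # controls access to the endpoint ENIs
    PrivateDnsEnabled=True,                                # resolve the service's default DNS name privately
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```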
/vpc/faqs/;What CloudWatch metrics are available for the interface-based VPC endpoint?;Currently, no CloudWatch metric is available for the interface-based VPC endpoint. /vpc/faqs/;Who pays the data transfer costs for the traffic going via the interface-based VPC endpoint?;The concept of data transfer costs is similar to that of data transfer costs for EC2 instances. Since an interface-based VPC endpoint is an ENI in the subnet, data transfer charges depend on the source of the traffic. If the traffic to this interface is coming from a resource in another AZ, EC2 cross-AZ data transfer charges apply to the consumer end. Customers in the consumer VPC can use AZ-specific DNS endpoints to make sure the traffic stays within the same AZ if they have provisioned the endpoint in each AZ available in their account. /vpc/faqs/;How many VPCs, subnets, Elastic IP addresses, and internet gateways can I create?;You can have: /vpc/faqs/;Can I obtain AWS support with Amazon VPC?;Yes. Click here for more information on AWS support. /cloudfront/faqs/;What is Amazon CloudFront?;Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations. /cloudfront/faqs/;What can I do with Amazon CloudFront?;Amazon CloudFront provides a simple API that lets you: /cloudfront/faqs/;How do I get started with Amazon CloudFront?;Click the “Create Free Account” button on the Amazon CloudFront detail page. If you choose to use another AWS service as the origin for the files served through Amazon CloudFront, you must sign up for that service before creating CloudFront distributions. /cloudfront/faqs/;How do I use Amazon CloudFront?;To use Amazon CloudFront, you: /cloudfront/faqs/;How does Amazon CloudFront provide higher performance?;Amazon CloudFront employs a global network of edge locations and regional edge caches that cache copies of your content close to your viewers. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, viewer requests travel a short distance, improving performance for your viewers. For files not cached at the edge locations and the regional edge caches, Amazon CloudFront keeps persistent connections with your origin servers so that those files can be fetched from the origin servers as quickly as possible. Finally, Amazon CloudFront uses additional optimizations – e.g. wider TCP initial congestion window – to provide higher performance while delivering your content to viewers. /cloudfront/faqs/;How does Amazon CloudFront lower my costs to distribute content over the Internet?;Like other AWS services, Amazon CloudFront has no minimum commitments and charges you only for what you use. Compared to self-hosting, Amazon CloudFront spares you from the expense and complexity of operating a network of cache servers in multiple sites across the internet and eliminates the need to over-provision capacity in order to serve potential spikes in traffic. Amazon CloudFront also uses techniques such as collapsing simultaneous viewer requests at an edge location for the same file into a single request to your origin server. 
This reduces the load on your origin servers reducing the need to scale your origin infrastructure, which can bring you further cost savings. /cloudfront/faqs/;How does Amazon CloudFront speed up my entire website?;Amazon CloudFront uses standard cache control headers you set on your files to identify static and dynamic content. Delivering all your content using a single Amazon CloudFront distribution helps you make sure that performance optimizations are applied to your entire website or web application. When using AWS origins, you benefit from improved performance, reliability, and ease of use as a result of AWS’s ability to track and adjust origin routes, monitor system health, respond quickly when any issues occur, and the integration of Amazon CloudFront with other AWS services. You also benefit from using different origins for different types of content on a single site – e.g. Amazon S3 for static objects, Amazon EC2 for dynamic content, and custom origins for third-party content – paying only for what you use. /cloudfront/faqs/;How is Amazon CloudFront different from Amazon S3?;Amazon CloudFront is a good choice for distribution of frequently accessed static content that benefits from edge delivery—like popular website images, videos, media files or software downloads. /cloudfront/faqs/;How is Amazon CloudFront different from traditional content delivery solutions?;Amazon CloudFront lets you quickly obtain the benefits of high performance content delivery without negotiated contracts or high prices. Amazon CloudFront gives all developers access to inexpensive, pay-as-you-go pricing – with a self-service model. Developers also benefit from tight integration with other Amazon Web Services. The solution is simple to use with Amazon S3, Amazon EC2, and Elastic Load Balancing as origin servers, giving developers a powerful combination of durable storage and high performance delivery. Amazon CloudFront also integrates with Amazon Route 53 and AWS CloudFormation for further performance benefits and ease of configuration. /cloudfront/faqs/;What types of content does Amazon CloudFront support?;Amazon CloudFront supports content that can be sent using the HTTP or WebSocket protocols. This includes dynamic web pages and applications, such as HTML or PHP pages or WebSocket-based applications, and any popular static files that are a part of your web application, such as website images, audio, video, media files or software downloads. Amazon CloudFront also supports delivery of live or on-demand media streaming over HTTP. /cloudfront/faqs/;Does Amazon CloudFront work with non-AWS origin servers?;Yes. Amazon CloudFront works with any origin server that holds the original, definitive versions of your content, both static and dynamic. There is no additional charge to use a custom origin. /cloudfront/faqs/;How does Amazon CloudFront enable origin redundancy?;For every origin that you add to a CloudFront distribution, you can assign a backup origin that can be used to automatically serve your traffic if the primary origin is unavailable. You can choose a combination of HTTP 4xx/5xx status codes that, when returned from the primary origin, trigger the failover to the backup origin. The two origins can be any combination of AWS and non-AWS origins. /cloudfront/faqs/;Does Amazon CloudFront offer a Service Level Agreement (SLA)?;Yes. The Amazon CloudFront SLA provides for a service credit if a customer’s monthly uptime percentage is below our service commitment in any billing cycle. 
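Since CloudFront honors the standard cache control headers you set at the origin, a common pattern is to set Cache-Control when uploading static objects to an S3 origin. A minimal sketch, assuming a hypothetical bucket name and local files:

```python
import boto3

s3 = boto3.client("s3")

# Long-lived, versioned static asset: cache at the edge (and in browsers) for a year.
s3.put_object(
    Bucket="my-website-origin",          # placeholder bucket used as the CloudFront origin
    Key="assets/app.v42.css",
    Body=open("app.css", "rb"),
    ContentType="text/css",
    CacheControl="public, max-age=31536000, immutable",
)

# Frequently changing HTML: force CloudFront to revalidate with the origin on every request.
s3.put_object(
    Bucket="my-website-origin",
    Key="index.html",
    Body=open("index.html", "rb"),
    ContentType="text/html",
    CacheControl="no-cache",
)
```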
More information can be found here. /cloudfront/faqs/;Can I use the AWS Management Console with Amazon CloudFront?;Yes. You can use the AWS Management Console to configure and manage Amazon CloudFront through a simple, point-and-click web interface. The AWS Management Console supports most of Amazon CloudFront’s features, letting you get Amazon CloudFront’s low latency delivery without writing any code or installing any software. Access to the AWS Management Console is provided free of charge at https://console.aws.amazon.com. /cloudfront/faqs/;What tools and libraries work with Amazon CloudFront?;There are a variety of tools for managing your Amazon CloudFront distribution and libraries for various programming languages available in our resource center. /cloudfront/faqs/;Can I point my zone apex (example.com versus www.example.com) at my Amazon CloudFront distribution?;"Yes. By using Amazon Route 53, AWS’s authoritative DNS service, you can configure an ‘Alias’ record that lets you map the apex or root (example.com) of your DNS name to your Amazon CloudFront distribution. Amazon Route 53 will then respond to each request for an Alias record with the right IP address(es) for your CloudFront distribution. Route 53 doesn't charge for queries to Alias records that are mapped to a CloudFront distribution. These queries are listed as ""Intra-AWS-DNS-Queries"" on the Amazon Route 53 usage report." /cloudfront/faqs/;What is CloudFront Regional Edge Cache?;CloudFront delivers your content through a worldwide network of data centers called edge locations. The regional edge caches are located between your origin web server and the global edge locations that serve content directly to your viewers. This helps improve performance for your viewers while lowering the operational burden and cost of scaling your origin resources. /cloudfront/faqs/;How does regional edge caching work?;Amazon CloudFront has multiple globally dispersed Regional Edge Caches (or RECs), providing an additional caching layer close to your end-users. They are located between your origin webserver and AWS edge locations that serve content directly to your users. As cached objects become less popular, individual edge locations may remove those objects to make room for more commonly requested content. Regional Edge Caches have a larger cache than any individual edge location, so objects remain cached longer. This helps keep more of your content closer to your viewers, reducing the need for CloudFront to go back to your origin webserver and improving overall performance for viewers. For example, CloudFront edge locations in Europe now go to the regional edge cache in Frankfurt to fetch an object before going back to your origin webserver. Regional edge cache locations can be used with any origin, such as S3, EC2, or custom origins. RECs are skipped in Regions currently hosting your application origins. /cloudfront/faqs/;Is the regional edge cache feature enabled by default?;"Yes. You do not need to make any changes to your CloudFront distributions; this feature is enabled by default for all new and existing CloudFront distributions. There are no additional charges to use this feature." /cloudfront/faqs/;Where are the edge network locations used by Amazon CloudFront located?;Amazon CloudFront uses a global network of edge locations and regional edge caches for content delivery. You can see a full list of Amazon CloudFront locations here. 
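A sketch of the zone-apex alias record described above, using boto3; the hosted zone ID, domain, and distribution domain name are placeholders, and Z2FDTNDATAQYW2 is the fixed hosted zone ID that alias records targeting CloudFront distributions use (verify against current AWS documentation).

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # placeholder: your hosted zone for example.com
    ChangeBatch={
        "Comment": "Point the zone apex at a CloudFront distribution",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",      # CloudFront's alias hosted zone ID
                    "DNSName": "dxxxxx.cloudfront.net.",   # placeholder distribution domain name
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```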
/cloudfront/faqs/;Can I choose to serve content (or not serve content) to specified countries?;Yes, the Geo Restriction feature lets you specify a list of countries in which your users can access your content. Alternatively, you can specify the countries in which your users cannot access your content. In both cases, CloudFront responds to a request from a viewer in a restricted country with an HTTP status code 403 (Forbidden). /cloudfront/faqs/;How accurate is your GeoIP database?;The accuracy of the IP Address to country lookup database varies by region. Based on recent tests, our overall accuracy for the IP address to country mapping is 99.8%. /cloudfront/faqs/;Can I serve a custom error message to my end users?;Yes, you can create custom error messages (for example, an HTML file or a .jpg graphic) with your own branding and content for a variety of HTTP 4xx and 5xx error responses. Then you can configure Amazon CloudFront to return your custom error messages to the viewer when your origin returns one of the specified errors to CloudFront. /cloudfront/faqs/;How long will Amazon CloudFront keep my files at the edge locations?;By default, if no cache control header is set, each edge location checks for an updated version of your file whenever it receives a request more than 24 hours after the previous time it checked the origin for changes to that file. This is called the “expiration period.” You can set this expiration period as short as 0 seconds, or as long as you’d like, by setting the cache control headers on your files in your origin. Amazon CloudFront uses these cache control headers to determine how frequently it needs to check the origin for an updated version of that file. For expiration period set to 0 seconds, Amazon CloudFront will revalidate every request with the origin server. If your files don’t change very often, it is best practice to set a long expiration period and implement a versioning system to manage updates to your files. /cloudfront/faqs/;How do I remove an item from Amazon CloudFront edge locations?;There are multiple options for removing a file from the edge locations. You can simply delete the file from your origin and as content in the edge locations reaches the expiration period defined in each object’s HTTP header, it will be removed. In the event that offensive or potentially harmful material needs to be removed before the specified expiration time, you can use the Invalidation API to remove the object from all Amazon CloudFront edge locations. You can see the charge for making invalidation requests here. /cloudfront/faqs/;Is there a limit to the number of invalidation requests I can make?;If you're invalidating objects individually, you can have invalidation requests for up to 3,000 objects per distribution in progress at one time. This can be one invalidation request for up to 3,000 objects, up to 3,000 requests for one object each, or any other combination that doesn't exceed 3,000 objects. /cloudfront/faqs/;Is Amazon CloudFront PCI compliant?;Yes, Amazon CloudFront is included in the set of services that are compliant with the Payment Card Industry Data Security Standard (PCI DSS) Merchant Level 1, the highest level of compliance for service providers. Please see our developer's guide for more information. /cloudfront/faqs/;Is Amazon CloudFront HIPAA eligible?;Yes, AWS has expanded its HIPAA compliance program to include Amazon CloudFront as a HIPAA eligible service. 
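A minimal sketch of the Invalidation API mentioned above, via boto3; the distribution ID and path are placeholders.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/banner.jpg"]},  # or "/*" to invalidate everything
        "CallerReference": str(time.time()),  # any unique string; makes the request idempotent
    },
)
```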
If you have an executed Business Associate Agreement (BAA) with AWS, you can use Amazon CloudFront to accelerate the delivery of protected health information (PHI). For more information, see HIPAA Compliance and our developer's guide. /cloudfront/faqs/;Is Amazon CloudFront SOC compliant?;Yes, Amazon CloudFront is compliant with SOC (System & Organization Control) measures. SOC Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. For more information, see AWS SOC Compliance and our developer's guide. /cloudfront/faqs/;How do I request an AWS SOC 1, SOC 2, or SOC 3 Report?;The AWS SOC 1 and SOC 2 reports are available to customers by using AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. The latest AWS SOC 3 Report is publicly available on the AWS website. /cloudfront/faqs/;What types of HTTP requests are supported by Amazon CloudFront?;Amazon CloudFront currently supports GET, HEAD, POST, PUT, PATCH, DELETE and OPTIONS requests. /cloudfront/faqs/;Does Amazon CloudFront cache POST responses?;Amazon CloudFront does not cache the responses to POST, PUT, DELETE, and PATCH requests – these requests are proxied back to the origin server. You may enable caching for the responses to OPTIONS requests. /cloudfront/faqs/;How do I use HTTP/2?;"If you have an existing Amazon CloudFront distribution, you can turn on HTTP/2 using the API or the Management Console. In the Console, go to the “Distribution Configuration” page and navigate to the section “Supported HTTP Versions.” There, you can select ""HTTP/2, HTTP/1.1, or HTTP/1.0"". HTTP/2 is automatically enabled for all new CloudFront distributions." /cloudfront/faqs/;What if my origin does not support HTTP/2?;Amazon CloudFront currently supports HTTP/2 for delivering content to your viewers’ clients and browsers. For communication between the edge location and your origin servers, Amazon CloudFront will continue to use HTTP/1.1. /cloudfront/faqs/;Does Amazon CloudFront support HTTP/2 without TLS?;Not currently. However, most modern browsers support HTTP/2 only over an encrypted connection. You can learn more about using SSL with Amazon CloudFront here. /cloudfront/faqs/;What is HTTP/3?;HTTP/3 is the third major version of the Hypertext Transfer Protocol. HTTP/3 uses QUIC, a user datagram protocol (UDP) based, stream-multiplexed, and secure transport protocol that combines and improves upon the capabilities of existing transmission control protocol (TCP), TLS, and HTTP/2. HTTP/3 offers several benefits over previous HTTP versions, including faster response times and enhanced security. /cloudfront/faqs/;What is QUIC?;HTTP/3 is powered by QUIC, a new highly performant, resilient, and secure internet transport protocol. CloudFront's HTTP/3 support is built on top of s2n-quic, a new open source QUIC protocol implementation in Rust. To learn more about QUIC, refer to the “Introducing s2n-quic” blog. /cloudfront/faqs/;What are the key benefits of using HTTP/3 with Amazon CloudFront?;Customers are constantly looking to deliver faster and more secure applications for their end users. As internet penetration increases globally and more users come online via mobile and from remote networks, the need for improved performance and reliability is greater than ever. 
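The API path for switching supported HTTP versions follows the usual read-modify-write pattern on the distribution configuration. A hedged boto3 sketch with a placeholder distribution ID; setting "http2and3" instead would also enable HTTP/3 where supported:

```python
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "EDFDVBD6EXAMPLE"  # placeholder distribution ID

# Read the current configuration together with its ETag.
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]

# Switch the supported HTTP versions (e.g. "http2", or "http2and3" to include HTTP/3).
config["HttpVersion"] = "http2"

# Write the modified configuration back, passing the ETag as IfMatch.
cloudfront.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=current["ETag"])
```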
HTTP/3 enables this as it offers several performance improvements over previous HTTP versions: /cloudfront/faqs/;How do I enable HTTP/3 on my CloudFront distributions?;"You can turn on HTTP/3 for new and existing Amazon CloudFront distributions using the CloudFront Console, the UpdateDistribution API action, or using a Cloudformation template. In the Console, go to the “Distribution Configuration” page and navigate to the section “Supported HTTP Versions.” There, you can select ""HTTP/3, HTTP/2, HTTP/1.1, or HTTP/1.0.""" /cloudfront/faqs/;Do I need to make changes to my applications before enabling HTTP/3?;When you enable HTTP/3 on your CloudFront distribution, CloudFront automatically adds the Alt-Svc header, which it uses to advertise that HTTP/3 support is available and you don’t need to manually add the Alt-Svc header. We expect you to enable support for multiple protocols in your applications, such that if the application fails to establish a HTTP/3 connection it will fall back to HTTP /1.1 or HTTP/2. i.e., clients that do not support HTTP/3 will still be able to communicate with HTTP/3 enabled CloudFront distributions using HTTP/1.1 or HTTP/2. Fallback support is a required part of the HTTP/3 specification and is implemented by all major browsers that support HTTP/3. /cloudfront/faqs/;What if my origin does not support HTTP/3?;CloudFront currently supports HTTP/3 for communication between your viewers’ clients/browsers and CloudFront edge locations. For communication between the edge location and your origin servers, CloudFront will continue to use HTTP/1.1. /cloudfront/faqs/;How do Amazon CloudFront's TLS security policies interact with HTTP/3?;HTTP/3 uses QUIC - which requires TLSv1.3. Therefore, independent of the security policy you have chosen, only TLSv1.3 and the supported TLSv1.3 cipher suites can be used to establish HTTP/3 connections. Refer Supported protocols and ciphers between viewers and CloudFront section of the CloudFront developers guide for details. /cloudfront/faqs/;Is there a separate charge for enabling HTTP/3?;No, there is no separate charge for enabling HTTP/3 on Amazon CloudFront distributions. HTTP/3 requests will be charged at the request pricing rates as per your pricing plan. /cloudfront/faqs/;What are WebSockets?;WebSocket is a real-time communication protocol that provides bidirectional communication between a client and a server over a long-held TCP connection. By using a persistent open connection, the client and the server can send real-time data to each other without the client having to frequently reinitiate connections checking for new data to exchange. WebSocket connections are often used in chat applications, collaboration platforms, multiplayer games, and financial trading platforms. Refer to our documentation to learn more about using the WebSocket protocol with Amazon CloudFront. /cloudfront/faqs/;How do I enable my Amazon CloudFront distribution to support the WebSocket protocol?;You can use WebSockets globally, and no additional configuration is needed to enable the WebSocket protocol within your CloudFront resource as it is now supported by default. /cloudfront/faqs/;When is a WebSocket connection established through Amazon CloudFront?;Amazon CloudFront establishes WebSocket connections only when the client includes the 'Upgrade: websocket' header and the server responds with the HTTP status code 101 confirming that it can switch to the WebSocket protocol. /cloudfront/faqs/;Does Amazon CloudFront support secured WebSockets over TLS?;Yes. 
Amazon CloudFront supports encrypted WebSocket connections (WSS) using the SSL/TLS protocol. /cloudfront/faqs/;Can I configure my CloudFront distribution to deliver content over HTTPS using my own domain name?;By default, you can deliver your content to viewers over HTTPS by using your CloudFront distribution domain name in your URLs, for example, https://dxxxxx.cloudfront.net/image.jpg. If you want to deliver your content over HTTPS using your own domain name and your own SSL certificate, you can use one of our Custom SSL certificate support features. Learn more. /cloudfront/faqs/;What is Field-Level Encryption?;Field-Level Encryption is a feature of CloudFront that allows you to securely upload user-submitted data such as credit card numbers to your origin servers. Using this functionality, you can further encrypt sensitive data in an HTTPS form using field-specific encryption keys (which you supply) before a PUT/POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed by certain components or services in your application stack. To learn more about field-level encryption, see Field-Level Encryption in our documentation. /cloudfront/faqs/;I am already using SSL/TLS encryption with CloudFront, do I still need Field-Level Encryption?;Many web applications collect sensitive data such as credit card numbers from users that is then processed by application services running on the origin infrastructure. All these web applications use SSL/TLS encryption between the end user and CloudFront, and between CloudFront and your origin. Now, your origin could have multiple micro-services that perform critical operations based on user input. However, typically sensitive information only needs to be used by a small subset of these micro-services, which means most components have direct access to this data for no reason. A simple programming mistake, such as logging the wrong variable, could lead to a customer’s credit card number being written to a file. /cloudfront/faqs/;What is the difference between SNI Custom SSL and Dedicated IP Custom SSL of Amazon CloudFront?;Dedicated IP Custom SSL allocates dedicated IP addresses to serve your SSL content at each CloudFront edge location. Because there is a one-to-one mapping between IP addresses and SSL certificates, Dedicated IP Custom SSL works with browsers and other clients that do not support SNI. Due to the current IP address costs, Dedicated IP Custom SSL is $600/month prorated by the hour. /cloudfront/faqs/;What is Server Name Indication?;Server Name Indication (SNI) is an extension of the Transport Layer Security (TLS) protocol. This mechanism identifies the domain (server name) of the associated SSL request so the proper certificate can be used in the SSL handshake. This allows a single IP address to be used across multiple servers. SNI requires browser support to add the server name, and while most modern browsers support it, there are a few legacy browsers that do not. For more details, see the SNI section of the CloudFront Developer Guide or the SNI Wikipedia article. /cloudfront/faqs/;Does CloudFront integrate with AWS Certificate Manager?;Yes, you can now provision SSL/TLS certificates and associate them with CloudFront distributions within minutes. Simply provision a certificate using the new AWS Certificate Manager (ACM) and deploy it to your CloudFront distribution with a couple of clicks, and let ACM manage certificate renewals for you. 
ACM allows you to provision, deploy, and manage the certificate with no additional charges. /cloudfront/faqs/;Does Amazon CloudFront support access controls for paid or private content?;Yes, Amazon CloudFront has an optional private content feature. When this option is enabled, Amazon CloudFront will only deliver files when you say it is okay to do so by securely signing your requests. Learn more about this feature by reading the CloudFront Developer Guide. /cloudfront/faqs/;How can I safeguard my web applications delivered via CloudFront from DDoS attacks?;As an AWS customer, you get AWS Shield Standard at no additional cost. AWS Shield is a managed service that provides protection against DDoS attacks for web applications running on AWS. AWS Shield Standard provides protection for all AWS customers against common and most frequently occurring Infrastructure (layer 3 and 4) attacks like SYN/UDP Floods, Reflection attacks, and others to support high availability of your applications on AWS. /cloudfront/faqs/;How can I protect my web applications delivered via CloudFront?;You can integrate your CloudFront distribution with AWS WAF, a web application firewall that helps protect web applications from attacks by allowing you to configure rules based on IP addresses, HTTP headers, and custom URI strings. Using these rules, AWS WAF can block, allow, or monitor (count) web requests for your web application. Please see AWS WAF Developer Guide for more information. /cloudfront/faqs/;Can I add or modify request headers forwarded to the origin?;"Yes, you can configure Amazon CloudFront to add custom headers, or override the value of existing headers, to requests forwarded to your origin. You can use these headers to help validate that requests made to your origin were sent from CloudFront; you can even configure your origin to only allow requests that contain the custom header values you specify. Additionally, if you use multiple CloudFront distributions with the same origin, you can use custom headers to distinguish origin request made by each different distribution. Finally, custom headers can be used to help determine the right CORS headers returned for your requests. You can configure custom headers via the CloudFront API and the AWS Management Console. There are no additional charges for this feature. For more details on how to set your custom headers, you can read more here." /cloudfront/faqs/;How does Amazon CloudFront handle HTTP cookies?;Amazon CloudFront supports delivery of dynamic content that is customized or personalized using HTTP cookies. To use this feature, you specify whether you want Amazon CloudFront to forward some or all of your cookies to your custom origin server. Amazon CloudFront then considers the forwarded cookie values when identifying a unique object in its cache. This way, your end users get both the benefit of content that is personalized just for them with a cookie and the performance benefits of Amazon CloudFront. You can also optionally choose to log the cookie values in Amazon CloudFront access logs. /cloudfront/faqs/;How does Amazon CloudFront handle query string parameters in the URL?;A query string may be optionally configured to be part of the cache key for identifying objects in the Amazon CloudFront cache. This helps you build dynamic web pages (e.g. search results) that may be cached at the edge for some amount of time. 
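For the private content feature, requests are authorized with signed URLs or signed cookies. A sketch of generating a signed URL with botocore's CloudFrontSigner, assuming a hypothetical key pair ID, a local RSA private key whose public half is registered with the distribution, and the cryptography package installed:

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key whose public half is registered with CloudFront.
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder public key ID

# The URL is valid for one hour; after that CloudFront returns 403 Forbidden.
signed_url = signer.generate_presigned_url(
    "https://dxxxxx.cloudfront.net/private/report.pdf",   # placeholder distribution and object
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```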
/cloudfront/faqs/;Can I specify which query parameters are used in the cache key?;Yes, the query string whitelisting feature allows you to easily configure Amazon CloudFront to only use certain parameters in the cache key, while still forwarding all of the parameters to the origin. /cloudfront/faqs/;Is there a limit to the number of query parameters that can be whitelisted?;Yes, you can configure Amazon CloudFront to whitelist up to 10 query parameters. /cloudfront/faqs/;What parameter types are supported?;Amazon CloudFront supports URI query parameters as defined in section 3.4 of RFC3986. Specifically, it supports query parameters embedded in an HTTP GET string after the ‘?’ character, and delimited by the ‘&’ character. /cloudfront/faqs/;Does CloudFront support gzip compression?;Yes, CloudFront can automatically compress your text or binary data. To use the feature, simply specify in your cache behavior settings that you would like CloudFront to compress objects automatically and ensure that your client adds Accept-Encoding: gzip in the request header (most modern web browsers do this by default). For more information on this feature, please see our developer guide. /cloudfront/faqs/;What is streaming? Why would I want to stream?;Generally, streaming refers to delivering audio and video to end users over the Internet without having to download the media file prior to playback. The protocols used for streaming include those that use HTTP for delivery such as Apple’s HTTP Live Streaming (HLS), MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH), Adobe’s HTTP Dynamic Streaming (HDS) and Microsoft’s Smooth Streaming. These protocols are different than the delivery of web pages and other online content because streaming protocols deliver media in real time – viewers watch the bytes as they are delivered. Streaming content has several potential benefits for you and your end-users: /cloudfront/faqs/;Does Amazon CloudFront support video-on-demand (VOD) streaming protocols?;Yes, Amazon CloudFront provides you with multiple options to deliver on-demand video content. If you have media files that have been converted to HLS, MPEG-DASH, or Microsoft Smooth Streaming, for example using AWS Elemental MediaConvert, prior to being stored in Amazon S3 (or a custom origin), you can use an Amazon CloudFront web distribution to stream in that format without having to run any media servers. /cloudfront/faqs/;Does Amazon CloudFront support live streaming to multiple platforms?;Yes. You can use Amazon CloudFront live streaming with any live video origination service that outputs HTTP-based streams, such as AWS Elemental MediaPackage or AWS Elemental MediaStore. MediaPackage is a video origination and just-in-time packaging service that allows video distributors to securely and reliably deliver streaming content at scale using multiple delivery and content protection standards. MediaStore is an HTTP origination and storage service that offers the high performance, immediate consistency, and predictable low latency required for live media combined with the security and durability of Amazon storage. /cloudfront/faqs/;What is Origin Shield?;Origin Shield is a centralized caching layer that helps increase your cache hit ratio to reduce the load on your origin. Origin Shield also decreases your origin operating costs by collapsing requests across regions so as few as one request goes to your origin per object. 
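In the current API, query string handling is expressed through a cache policy. A hedged sketch of creating one that includes only two named query parameters in the cache key; the policy name and parameters are illustrative, and the exact field names should be checked against the CloudFront API reference:

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "search-results-q-and-page",
        "MinTTL": 0,
        "DefaultTTL": 300,      # cache search results at the edge for 5 minutes by default
        "MaxTTL": 3600,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {
                "QueryStringBehavior": "whitelist",
                "QueryStrings": {"Quantity": 2, "Items": ["q", "page"]},
            },
        },
    }
)
```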
When enabled, CloudFront will route all origin fetches through Origin Shield, and only make a request to your origin if the content is not already stored in Origin Shield's cache. /cloudfront/faqs/;When should I use Origin Shield?;Origin Shield is ideal for workloads with viewers that are spread across different geographical regions or workloads that involve just-in-time packaging for video streaming, on-the-fly image handling, or similar processes. Using Origin Shield in front of your origin will reduce the number of redundant origin fetches by first checking its central cache and only making a consolidated origin fetch for content not already in Origin Shield’s cache. Similarly, Origin Shield can be used in a multi-CDN architecture to reduce the number of duplicate origin fetches across CDNs by positioning Amazon CloudFront as the origin to other CDNs. Refer to the Amazon CloudFront Developer Guide for more details on these and other Origin Shield use cases. /cloudfront/faqs/;Which Origin Shield Region should I use?;Amazon CloudFront offers Origin Shield in AWS Regions where CloudFront has a regional edge cache. When you enable Origin Shield, you should choose the AWS Region for Origin Shield that has the lowest latency to your origin. You can use Origin Shield with origins that are in an AWS Region, and with origins that are not in AWS. For more information, see Choosing the AWS Region for Origin Shield in the Amazon CloudFront Developer Guide. /cloudfront/faqs/;Is Origin Shield resilient and highly available?;Yes. All Origin Shield Regions are built using a highly available architecture that spans several Availability Zones with fleets of auto-scaling Amazon EC2 instances. Connections from CloudFront locations to Origin Shield also use active error tracking for each request to automatically route the request to a secondary Origin Shield location if the primary Origin Shield location is unavailable. /cloudfront/faqs/;Can I use Amazon CloudFront if I expect usage peaks higher than 150 Gbps or 250,000 RPS?;Yes. Complete our request for higher limits here, and we will add more capacity to your account within two business days. /cloudfront/faqs/;Is there a limit to the number of distributions my Amazon CloudFront account may deliver?;For the current limit on the number of distributions that you can create for each AWS account, see Amazon CloudFront Limits in the Amazon Web Services General Reference. To request a higher limit, please go to the CloudFront Limit Increase Form. /cloudfront/faqs/;What is the maximum size of a file that can be delivered through Amazon CloudFront?;The maximum size of a single file that can be delivered through Amazon CloudFront is 30 GB. This limit applies to all Amazon CloudFront distributions. /cloudfront/faqs/;What logging capabilities are available with Amazon CloudFront?;When you create or modify a CloudFront distribution, you can enable access logging. CloudFront provides two ways to log the requests that are delivered from your distributions: Standard logs and Real-time logs. CloudFront standard logs are delivered to the Amazon S3 bucket of your choice (log records are delivered within minutes of a viewer request). When enabled, CloudFront will automatically publish detailed log information in a W3C extended format into an Amazon S3 bucket that you specify. 
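Origin Shield is turned on per origin in the distribution configuration. A hedged fragment showing the relevant origin settings; the origin domain name is a placeholder, and the Origin Shield Region should be the one closest to your origin:

```python
# Fragment of a CloudFront DistributionConfig "Origins" item with Origin Shield enabled.
origin = {
    "Id": "primary-origin",
    "DomainName": "origin.example.com",          # placeholder origin
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",
    },
    "OriginShield": {
        "Enabled": True,
        "OriginShieldRegion": "us-east-1",       # pick the Region with lowest latency to the origin
    },
}
```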
Access logs contain detailed information about each request for your content, including the object requested, the date and time of the request, the edge location serving the request, the client IP address, the referrer, the user agent, the cookie header, and the result type (for example, cache hit, or miss, or error). CloudFront doesn’t charge for standard logs, though you incur Amazon S3 charges for storing and accessing the log files. CloudFront real-time logs are delivered to the data stream of your choice in Amazon Kinesis Data Streams (log records are delivered within seconds of a viewer request). You can choose the sampling rate for your real-time logs—that is, the percentage of requests for which you want to receive real-time log records. You can also choose the specific fields that you want to receive in the log records. CloudFront real-time logs contain all the same data points as the standard logs and also contain certain additional information about each request such as viewer request headers, and country code, in a W3C extended format. CloudFront charges for real-time logs, in addition to the charges you incur for using Kinesis Data Streams. /cloudfront/faqs/;How do I determine the appropriate CloudFront logs for my use case?;You can choose a destination depending on your use case. If you have time sensitive use cases and require access log data quickly within a few seconds, then choose the real-time logs. If you need your real-time log pipeline to be cheaper, you can choose to filter the log data by enabling logs only for specific cache behaviors, or choosing a lower sampling rate. The real-time log pipeline is built for quick data delivery. Therefore, log records may be dropped if there are data delays. On the other hand, if you need a low cost log processing solution with no requirement for real-time data then the current standard log option is ideal for you. The standard logs in S3 are built for completeness and the logs are typically available in a few mins. These logs can be enabled for the entire distribution and not for specific cache behaviors. Therefore, if you require logs for adhoc investigation, audit, and analysis, you can choose to only enable the standard logs in S3. You could choose to use a combination of both the logs. Use a filtered list of real-time logs for operational visibility and then use the standard logs for audit. Use the following steps to estimate the number of shards you need: /cloudfront/faqs/;Does Amazon CloudFront offer ready-to-use reports so I can learn more about my usage, viewers, and content being served?;Yes. Whether it's receiving detailed cache statistics reports, monitoring your CloudFront usage, seeing where your customers are viewing your content from, or setting near real-time alarms on operational metrics, Amazon CloudFront offers a variety of solutions for your reporting needs. You can access all our reporting options by visiting the Amazon CloudFront Reporting & Analytics dashboard in the AWS Management Console. You can also learn more about our various reporting options by viewing Amazon CloudFront's Reports & Analytics page. /cloudfront/faqs/;Can I tag my distributions?;Yes. Amazon CloudFront supports cost allocation tagging. Tags make it easier for you to allocate costs and optimize spending by categorizing and grouping AWS resources. For example, you can use tags to group resources by administrator, application name, cost center, or a specific project. 
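A hedged boto3 sketch of creating a real-time log configuration that samples 5% of requests into a Kinesis data stream, as described above; the stream and role ARNs are placeholders, and the role must allow CloudFront to write to the stream.

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_realtime_log_config(
    Name="edge-latency-logs",
    SamplingRate=5,  # percentage of requests to log
    Fields=["timestamp", "c-ip", "sc-status", "cs-uri-stem", "time-taken", "x-edge-result-type"],
    EndPoints=[{
        "StreamType": "Kinesis",
        "KinesisStreamConfig": {
            "RoleARN": "arn:aws:iam::123456789012:role/CloudFrontRealtimeLogRole",   # placeholder
            "StreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/cf-rt-logs", # placeholder
        },
    }],
)
```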
To learn more about cost allocation tagging, see Using Cost Allocation Tags. If you are ready to add tags to you CloudFront distributions, see Amazon CloudFront Add Tags page. /cloudfront/faqs/;Can I get a history of all Amazon CloudFront API calls made on my account for security, operational or compliance auditing?;Yes. To receive a history of all Amazon CloudFront API calls made on your account, you simply turn on AWS CloudTrail in the CloudTrail's AWS Management Console. For more information, visit AWS CloudTrail home page. /cloudfront/faqs/;Do you have options for monitoring and alarming metrics in real time?;You can monitor, alarm and receive notifications on the operational performance of your Amazon CloudFront distributions within just a few minutes of the viewer request using Amazon CloudWatch. CloudFront automatically publishes six operational metrics, each at 1-minute granularity, into Amazon CloudWatch. You can then use CloudWatch to set alarms on any abnormal patterns in your CloudFront traffic. To learn how to get started monitoring CloudFront activity and setting alarms via CloudWatch, please view our walkthrough in the Amazon CloudFront Developer Guide or simply navigate to the Amazon CloudFront Management Console and select Monitoring & Alarming in the navigation pane. /cloudfront/faqs/;How do I customize content with CloudFront Functions?;CloudFront Functions is natively built into CloudFront, allowing customers to easily build, test, and deploy functions within the same service. Our GitHub repo makes it easy for developers to get started by offering a large collection of example code that can be used as starting point for building functions. You can build functions on the CloudFront console using the IDE or the CloudFront APIs/CLI. Once your code is authored, you can test your function against a production CloudFront distribution, ensuring your function will execute properly once deployed. The test functionality in the console offers a visual editor to quickly create test events and validate functions. Once associated to a CloudFront distribution, the code is deployed to AWS’s globally distributed network of edge locations for execution in response to CloudFront requests. /cloudfront/faqs/;What are the use cases for CloudFront Functions?;CloudFront Functions is ideal for lightweight, short-running functions like the following: /cloudfront/faqs/;Is CloudFront Functions replacing Lambda@Edge?; The combination of CloudFront Functions and Lambda@Edge gives you two powerful and flexible options for running code in response to CloudFront events. Both offer secure ways to execute code in response to CloudFront events without managing infrastructure. CloudFront Functions was purpose-built for lightweight, high scale, and latency sensitive request/response transformations and manipulations. Lambda@Edge uses general-purpose runtimes that support a wide range of computing needs and customizations. You should use Lambda@Edge for computationally intensive operations. This could be computations that take longer to complete (several milliseconds to seconds), take dependencies on external 3rd party libraries, require integrations with other AWS services (e.g., S3, DynamoDB), or need networks calls for data processing. Some of the popular advanced Lambda@Edge use cases include HLS streaming manifest manipulation, integrations with 3rd party authorization and bot detection services, server-side rendering (SSR) of single-page apps (SPA) at the edge and more. 
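As an example of alarming on one of the operational metrics CloudFront publishes, here is a boto3 sketch creating a CloudWatch alarm on the 5xx error rate; CloudFront metrics are published in the US East (N. Virginia) Region with a Global Region dimension, and the distribution ID and SNS topic ARN are placeholders.

```python
import boto3

# CloudFront publishes its metrics to CloudWatch in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="cloudfront-high-5xx-error-rate",
    Namespace="AWS/CloudFront",
    MetricName="5xxErrorRate",
    Dimensions=[
        {"Name": "DistributionId", "Value": "EDFDVBD6EXAMPLE"},  # placeholder distribution ID
        {"Name": "Region", "Value": "Global"},
    ],
    Statistic="Average",
    Period=60,                 # 1-minute granularity
    EvaluationPeriods=5,       # alarm if breached for 5 consecutive minutes
    Threshold=5.0,             # percent of requests returning 5xx
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
    TreatMissingData="notBreaching",
)
```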
See the Lambda@Edge use cases page for more details. /cloudfront/faqs/;Should I use CloudFront Functions or Lambda@Edge?;CloudFront Functions delivers the performance, scale and cost-effectiveness that you expect, but with a unique security model that offers strict isolation boundaries between the Functions code. When you run custom code in a shared, multi-tenant compute environment, maintaining a highly secure execution environment is key. A bad actor may attempt to exploit bugs present in the runtime, libraries, or CPU to leak sensitive data from the server or from another customer’s functions. Without a rigorous isolation barrier between function code, these exploits are possible. Both AWS Lambda and Lambda@Edge already achieve this security isolation through Firecracker-based VM isolation. With CloudFront Functions, we have developed a process-based isolation model that provides the same security bar against side-channel attacks like Spectre and Meltdown, timing-based attacks or other code vulnerabilities. CloudFront Functions cannot access or modify data belonging to other customers. We do this by running functions in a dedicated process on a dedicated CPU. CloudFront Functions executes on process workers that only serve one customer at a time and all customer-specific data is cleared (flushed) between executions. /cloudfront/faqs/;How does AWS keep CloudFront Functions secure?;CloudFront Functions does not use V8 as a JavaScript engine. The Functions security model is different, and is considered more secure than the V8 isolate-based model offered by some other vendors. /cloudfront/faqs/;How do I know my CloudFront Function will execute successfully?;CloudFront Functions output both metrics and execution logs to monitor the usage and performance of a function. Metrics are generated for each invocation of a function and you can see metrics from each function individually on the CloudFront or CloudWatch console. Metrics include the number of invocations, compute utilization, validation errors and execution errors. If your function results in a validation error or execution error, the error message will also appear in your CloudFront access logs, giving you better visibility into how the function impacts your CloudFront traffic. In addition to metrics, you can also generate execution logs by including a console.log() statement inside your function code. Any log statement will generate a CloudWatch log entry that will be sent to CloudWatch. Logs and metrics are included as part of the CloudFront Functions price. /cloudfront/faqs/;What is Lambda@Edge?;Lambda@Edge is an extension of AWS Lambda allowing you to run code at global edge locations without provisioning or managing servers. Lambda@Edge offers powerful and flexible serverless computing for complex functions and full application logic closer to your viewers. Lambda@Edge functions run in a Node.js or Python environment. You publish functions to a single AWS Region, and when you associate the function with a CloudFront distribution, Lambda@Edge automatically replicates your code around the world. Lambda@Edge scales automatically, from a few requests per day to thousands per second. /cloudfront/faqs/;How do I customize content with Lambda@Edge?;Lambda@Edge functions are executed by associating them with specific cache behaviors in CloudFront. 
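The build/test/publish lifecycle for CloudFront Functions can also be driven through the API. A hedged boto3 sketch that creates a tiny viewer-request function, tests it against a synthetic event, and publishes it; the function name, code, and test event are illustrative, and the event shape should be checked against the CloudFront Functions documentation.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# A minimal viewer-request function that adds a header to the request before it hits the cache.
code = b"""
function handler(event) {
    var request = event.request;
    request.headers['x-example'] = { value: 'hello-from-the-edge' };
    return request;
}
"""

created = cloudfront.create_function(
    Name="add-example-header",
    FunctionConfig={"Comment": "Adds a demo header to viewer requests", "Runtime": "cloudfront-js-1.0"},
    FunctionCode=code,
)
etag = created["ETag"]

# Exercise the function against a synthetic viewer-request event before publishing.
cloudfront.test_function(
    Name="add-example-header",
    IfMatch=etag,
    Stage="DEVELOPMENT",
    EventObject=b'{"version": "1.0", "context": {"eventType": "viewer-request"}, '
                b'"viewer": {"ip": "203.0.113.10"}, '
                b'"request": {"method": "GET", "uri": "/index.html", "headers": {}}}',
)

# Promote the tested function to the LIVE stage so it can be associated with a distribution.
cloudfront.publish_function(Name="add-example-header", IfMatch=etag)
```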
You can also specify at which point during the CloudFront request or response processing the function should execute (i.e., when a viewer request lands, when a request is forwarded to or received back from the origin, or right before responding back to the end viewer). You write code using Node.js or Python from the Lambda console, API, or using frameworks like the Serverless Application Model (SAM). When you have tested your function, you associate it with the selected CloudFront cache behavior and event trigger. Once saved, the next time a request is made to your CloudFront distribution, the function is propagated to the CloudFront edge, and will scale and execute as needed. Learn more in our documentation. /cloudfront/faqs/;What Lambda@Edge events can be triggered with Amazon CloudFront?;Your Lambda@Edge functions will automatically trigger in response to the following Amazon CloudFront events: /cloudfront/faqs/;What is continuous deployment on CloudFront?;Continuous deployment on CloudFront provides the ability to test and validate configuration changes with a portion of live traffic before deploying changes to all viewers. /cloudfront/faqs/;How can I set up continuous deployment on CloudFront?;You can set up continuous deployment by associating a staging distribution with a primary distribution through the CloudFront console, SDK, Command Line Interface (CLI), or a CloudFormation template. You can then define rules to split traffic by configuring the client header or dialing up a percentage of traffic to test with the staging distribution. Once set up, you can update the staging configuration with desired changes. CloudFront will manage the split of traffic to users and provide associated analytics to help you decide whether to continue the deployment or roll back. Once testing with staging distributions is validated, you can merge changes to the main distribution. /cloudfront/faqs/;How will I measure the results of continuous deployment?;Continuous deployment allows for real user monitoring through real web traffic. You can use any of the existing available methods of monitoring—CloudFront console, CloudFront API, CLI, or CloudWatch—to individually measure operational metrics of both the primary and staging distribution. You can measure the success criteria of your specific application by measuring and comparing throughput, latency, and availability metrics between the two distributions. /cloudfront/faqs/;Can I use existing distributions?;Yes, you can use any existing distributions as a baseline to create a staging distribution and introduce and test changes. /cloudfront/faqs/;How does continuous deployment work with CloudFront Functions & Lambda@Edge?;With continuous deployment, you can associate different functions with the primary and staging distributions. You can also use the same function with both distributions. If you update a function that’s used by both distributions, they both receive the update. /cloudfront/faqs/;How do I use continuous deployment distributions with AWS CloudFormation?;Each resource in your CloudFormation stack maps to a specific AWS resource. A staging distribution will have its own resource ID and work like any other AWS resource. You can use CloudFormation to create/update that resource. 
/cloudfront/faqs/;How does continuous deployment on CloudFront support session stickiness?;When you use a weight-based configuration to route traffic to a staging distribution, you can also enable session stickiness, which helps make sure that CloudFront treats requests from the same viewer as a single session. When you enable session stickiness, CloudFront sets a cookie so that all requests from the same viewer in a single session are served by one distribution, either the primary or the staging. /cloudfront/faqs/;How much does it cost?;The continuous deployment feature is available at all CloudFront edge locations at no additional cost. /cloudfront/faqs/;What is IPv6?;Every server and device connected to the Internet must have a numeric Internet Protocol (IP) address. As the Internet and the number of people using it grows exponentially, so does the need for IP addresses. IPv6 is a new version of the Internet Protocol that uses a larger address space than its predecessor IPv4. Under IPv4, every IP address is 32 bits long, which allows 4.3 billion unique addresses. An example IPv4 address is 192.0.2.1. In comparison, IPv6 addresses are 128 bits, which allows for approximately three hundred and forty trillion trillion trillion (3.4 x 10^38) unique IP addresses. An example IPv6 address is: 2001:0db8:85a3:0:0:8a2e:0370:7334 /cloudfront/faqs/;What can I do with IPv6?;Using IPv6 support for Amazon CloudFront, your applications can connect to Amazon CloudFront edge locations without needing any IPv6 to IPv4 translation software or systems. You can meet the requirements for IPv6 adoption set by governments – including the U.S. Federal government – and benefit from IPv6 extensibility, simplicity in network management, and additional built-in support for security. /cloudfront/faqs/;Should I expect a change in Amazon CloudFront performance when using IPv6?;No, you will see the same performance when using either IPv4 or IPv6 with Amazon CloudFront. /cloudfront/faqs/;Are there any Amazon CloudFront features that will not work with IPv6?;All existing features of Amazon CloudFront will continue to work on IPv6, though there are two changes you may need for internal IPv6 address processing before you turn on IPv6 for your distributions. /cloudfront/faqs/;Does that mean if I want to use IPv6 at all I cannot use Trusted Signer URLs with IP whitelist?;No. If you want to use IPv6 and Trusted Signer URLs with IP whitelist you should use two separate distributions. You should dedicate a distribution exclusively to your Trusted Signer URLs with IP whitelist and disable IPv6 for that distribution. You would then use another distribution for all other content, which will work with both IPv4 and IPv6. /cloudfront/faqs/;If I enable IPv6, will the IPv6 address appear in the Access Log?;Yes, your viewer’s IPv6 addresses will now be shown in the “c-ip” field of the access logs, if you have the Amazon CloudFront Access Logs feature enabled. You may need to verify that your log processing systems continue to work for IPv6 addresses before you turn on IPv6 for your distributions. Please contact Developer Support if you have any issues with IPv6 traffic impacting your tool or software’s ability to handle IPv6 addresses in access logs. For more details, please refer to the Amazon CloudFront Access Logs documentation. /cloudfront/faqs/;Can I disable IPv6 for all my new distributions?;Yes, for both new and existing distributions, you can use the Amazon CloudFront console or API to enable or disable IPv6 per distribution. 
/cloudfront/faqs/;Are there any reasons why I would want to disable IPv6?;In discussions with customers, the only common case we heard about was internal IP address processing. When you enable IPv6 for your Amazon CloudFront distribution, in addition to getting an IPv6 address in your detailed access logs, you will get IPv6 addresses in the ‘X-Forwarded-For’ header that is sent to your origins. If your origin systems are only able to process IPv4 addresses, you may need to verify that your origin systems continue to work for IPv6 addresses before you turn on IPv6 for your distributions. /cloudfront/faqs/;I enabled IPv6 for my distribution but a DNS lookup doesn’t return any IPv6 addresses. What is happening?;Amazon CloudFront has very diverse connectivity around the globe, but there are still certain networks that do not have ubiquitous IPv6 connectivity. While the long term future of the Internet is obviously IPv6, for the foreseeable future every endpoint on the Internet will have IPv4 connectivity. When we find parts of the Internet that have better IPv4 connectivity than IPv6, we will prefer the former. /cloudfront/faqs/;If I use Route 53 to handle my DNS needs and I created an alias record pointing to an Amazon CloudFront distribution, do I need to update my alias records to enable IPv6?;Yes, you can create Route 53 alias records pointing to your Amazon CloudFront distribution to support both IPv4 and IPv6 by using “A” and “AAAA” record type respectively. If you want to enable IPv4 only, you need only one alias record with type “A”. For details on alias resource record sets, please refer to the Amazon Route 53 Developer Guide. /cloudfront/faqs/;What usage types are covered in the AWS free tier for Amazon CloudFront?;Starting Dec 1 2021, all AWS customers will receive 1 TB of data transfer out, 10,000,000 HTTP/HTTPS requests, plus 2,000,000 CloudFront Functions invocations each month for free. All other usage types (eg. Invalidations, Proxy requests, Lambda@edge, Origin shield, Data Transfer to Origin etc.) are excluded from the free tier. /cloudfront/faqs/;If we sign-up for Consolidated Billing, can we get the AWS Free Tier for each account?;No, customers that use Consolidated Billing to consolidate payment across multiple accounts will only have access to one Free Tier per Organization. /cloudfront/faqs/;What happens if my usage is in multiple regions, and I exceed the free tiers?;The 1 TB data transfer and 10 million Get requests are monthly free tier limits across all Edge locations. If your usage exceeds the monthly free tier limits, you simply pay standard, On-Demand AWS service rates for each region. See the AWS CloudFront Pricing page for full pricing details. /cloudfront/faqs/;How do I know how much I’ve used and if I’ve gone over the free usage tiers?;You can see current and past usage activity by region by logging into your account and going to the Billing & Cost Management Dashboard. From there you can manage your costs and usage using AWS Budgets, visualize your cost drivers and usage trends via Cost Explorer, and dive deeper into your costs using the Cost and Usage Reports. To learn more about how to control your AWS costs, check out the Control your AWS costs 10-Minute Tutorial. /cloudfront/faqs/;Does your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. 
/cloudfront/faqs/;How much will the real-time logs cost me?;Monthly cost of Kinesis Data Stream: $47.74/month as calculated using the Kinesis calculator here. /cloudfront/faqs/;How am I charged for 304 responses?;"A 304 is a response to a conditional GET request and will result in a charge for the HTTP/HTTPS request and the Data Transfer Out to Internet. A 304 response does not contain a message-body; however, the HTTP headers will consume some bandwidth for which you would be charged standard CloudFront data transfer fees. The amount of data transfer depends on the headers associated with your object." /cloudfront/faqs/;Can I choose to only serve content from less expensive Amazon CloudFront regions?;"Yes, ""Price Classes"" provides you an option to lower the prices you pay to deliver content out of Amazon CloudFront. By default, Amazon CloudFront minimizes end user latency by delivering content from its entire global network of edge locations. However, because we charge more where our costs are higher, this means that you pay more to deliver your content with low latency to end-users in some locations. Price Classes let you reduce your delivery prices by excluding Amazon CloudFront’s more expensive edge locations from your Amazon CloudFront distribution. In these cases, Amazon CloudFront will deliver your content from edge locations within the locations in the price class you selected and charge you the data transfer and request pricing from the actual location where the content was delivered." /cloudfront/faqs/;What is the CloudFront Security Savings Bundle?;The CloudFront Security Savings Bundle is a flexible self-service pricing plan that helps you save up to 30% on your CloudFront bill in exchange for making a commitment to a consistent amount of monthly usage (e.g. $100/month) for a 1 year term. As an added benefit, AWS WAF (Web Application Firewall) usage, up to 10% of your committed plan amount, to protect CloudFront resources is included at no additional charge. For example, making a commitment of $100 of CloudFront usage per month would cover a $142.86 worth of CloudFront usage for a 30% savings compared to standard rates. Additionally, up to $10 of AWS WAF usage is included to protect your CloudFront resources at no additional charge each month (up to 10% of your CloudFront commitment). Standard CloudFront and AWS WAF charges apply to any usage above what is covered by your monthly spend commitment. As your usage grows, you can buy additional savings bundles to obtain discounts on incremental usage. /cloudfront/faqs/;What types of usage are covered by a CloudFront Security Savings Bundle?;By purchasing a CloudFront Security Savings Bundle, you receive a 30% savings that will appear on the CloudFront service portion of your monthly bill that will offset any CloudFront billed usage types including data transfer out, data transfer to origin, HTTP/S request fees, field level encryption requests, Origin Shield, invalidations, dedicated IP custom SSL, and Lambda@Edge charges. You will also receive with additional benefits that help cover AWS WAF usage associated with your CloudFront distributions. /cloudfront/faqs/;What happens when my CloudFront Security Savings Bundle  expires after the 1-year term?;Once your CloudFront Security Savings Bundle term expires, standard service charges will apply for your CloudFront and AWS WAF usage. The monthly Savings Bundle commit will no longer be billed and savings bundle benefits will no longer apply. 
Any time prior to expiration of your bundle term, you can choose to opt-in to automatically renew the CloudFront Security Savings Bundle for another 1 year term. /cloudfront/faqs/;How does CloudFront Security Savings Bundle work with AWS Organizations/ Consolidated Billing?;CloudFront Security Savings Bundle can be purchased in any account within an AWS Organization/Consolidated Billing family. CloudFront Security Savings Bundle benefits are applied as credits on your bill. The benefits provided by the Savings Bundle is applicable to usage across all accounts within an AWS Organization/consolidated billing family by default (credit sharing is turned on) and is dependent on when the subscribing account joins or leaves an organization. See AWS Credits to learn more how AWS credits apply across single and multiple accounts. /cloudfront/faqs/;Can I have multiple CloudFront Security Savings Bundles active at the same time?;Yes, you may purchase additional CloudFront Security Savings Bundles as your usage grows to get discounts on the incremental usage. All active CloudFront Security Savings Bundles will be taken into account when calculating your AWS bill. /route53/faqs/;What is a Domain Name System (DNS) Service?;"DNis a globally distributed service that translates human readable names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. The Internet’s DNsystem works much like a phone book by managing the mapping between names and numbers. For DNS, the names are domain names (www.example.com) that are easy for people to remember and the numbers are IP addresses (192.0.2.1) that specify the location of computers on the Internet. DNservers translate requests for names into IP addresses, controlling which server an end user will reach when they type a domain name into their web browser. These requests are called ""queries.""" /route53/faqs/;What is Amazon Route 53?;Amazon Route 53 provides highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like example.com into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other. You can combine your DNwith health-checking services to route traffic to healthy endpoints or to independently monitor and/or alarm on endpoints. You can also purchase and manage domain names such as example.com and automatically configure DNsettings for your domains. Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS. /route53/faqs/;What can I do with Amazon Route 53?;With Amazon Route 53, you can create and manage your public DNrecords. Like a phone book, Route 53 lets you manage the IP addresses listed for your domain names in the Internet’s DNphone book. Route 53 also answers requests to translate specific domain names like into their corresponding IP addresses like 192.0.2.1. You can use Route 53 to create DNrecords for a new domain or transfer DNrecords for an existing domain. The simple, standards-based REST API for Route 53 allows you to easily create, update and manage DNrecords. 
Route 53 additionally offers health checks to monitor the health and performance of your application as well as your web servers and other resources. You can also register new domain names or transfer in existing domain names to be managed by Route 53. /route53/faqs/;How do I get started with Amazon Route 53?;Amazon Route 53 has a simple web service interface that lets you get started in minutes. Your DNrecords are organized into “hosted zones” that you configure with the AWS Management Console or Route 53’s API. To use Route 53, you simply: /route53/faqs/;How does Amazon Route 53 provide high availability and low latency?;Route 53 is built using AWS’s highly available and reliable infrastructure. The globally distributed nature of our DNservers helps ensure a consistent ability to route your end users to your application by circumventing any internet or network related issues. Route 53 is designed to provide the level of dependability required by important applications. Using a global anycast network of DNservers around the world, Route 53 is designed to automatically answer queries from the optimal location depending on network conditions. As a result, the service offers low query latency for your end users. /route53/faqs/;What are the DNS server names for the Amazon Route 53 service?;To provide you with a highly available service, each Amazon Route 53 hosted zone is served by its own set of virtual DNservers. The DNserver names for each hosted zone are thus assigned by the system when that hosted zone is created. /route53/faqs/;What is the difference between a Domain and a Hosted Zone?;"A domain is a general DNconcept. Domain names are easily recognizable names for numerically addressed Internet resources. For example, amazon.com is a domain. A hosted zone is an Amazon Route 53 concept. A hosted zone is analogous to a traditional DNzone file; it represents a collection of records that can be managed together, belonging to a single parent domain name. All resource record sets within a hosted zone must have the hosted zone’s domain name as a suffix. For example, the amazon.com hosted zone may contain records named www.amazon.com, and www.aws.amazon.com, but not a record named www.amazon.ca. You can use the Route 53 Management Console or API to create, inspect, modify, and delete hosted zones. You can also use the Management Console or API to register new domain names and transfer existing domain names into Route 53’s management." /route53/faqs/;What is the price of Amazon Route 53?;Amazon Route 53 charges are based on actual usage of the service for Hosted Zones, Queries, Health Checks, and Domain Names. For full details, see the Amazon Route 53 pricing page. /route53/faqs/;What types of access controls can I set for the management of my Domains on Amazon Route 53?;You can control management access to your Amazon Route 53 hosted zone and individual resource record sets by using the AWS Identity and Access Management (IAM) service. AWS IAM allows you to control who in your organization can make changes to your DNrecords by creating multiple users and managing the permissions for each of these users within your AWS Account. Learn more about AWS IAM here. /route53/faqs/;When is my hosted zone charged?;Hosted zones are billed once when they are created and then on the first day of each month. 
/route53/faqs/;Why do I see two charges for the same hosted zone in the same month?;Hosted zones have a grace period of 12 hours--if you delete a hosted zone within 12 hours after you create it, we don't charge you for the hosted zone. After the grace period ends, we immediately charge the standard monthly fee for a hosted zone. If you create a hosted zone on the last day of the month (for example, January 31st), the charge for January might appear on the February invoice, along with the charge for February. /route53/faqs/;Does Amazon Route 53 provide query logging capability?;You can configure Amazon Route 53 to log information about the queries that Amazon Route 53 receives including date-time stamp, domain name, query type, location etc. When you configure query logging, Amazon Route 53 starts to send logs to CloudWatch Logs. You use CloudWatch Logs tools to access the query logs. For more information please see our documentation. /route53/faqs/;Does Amazon Route 53 offer a Service Level Agreement (SLA)?;Yes. Both the Amazon Route 53 authoritative service and the Amazon Route 53 Resolver Endpoints service provide for a service credit if a customer’s monthly uptime percentage is below our service commitment in any billing cycle. More information can be found at Amazon Route 53 Service Level Agreement and Amazon Route 53 Resolver Endpoints Service Level Agreement. /route53/faqs/;Does Amazon Route 53 use an anycast network?;Yes. Anycast is a networking and routing technology that helps your end users’ DNqueries get answered from the optimal Route 53 location given network conditions. As a result, your users get high availability and improved performance with Route 53. /route53/faqs/;Is there a limit to the number of hosted zones I can manage using Amazon Route 53?;Each Amazon Route 53 account is limited to a maximum of 500 hosted zones and 10,000 resource record sets per hosted zone. Complete our request for a higher limit and we will respond to your request within two business days. /route53/faqs/;How can I import a zone into Route 53?;Route 53 supports importing standard DNzone files which can be exported from many DNproviders as well as standard DNserver software such as BIND. For newly-created hosted zones, as well as existing hosted zones that are empty except for the default Nand SOA records, you can paste your zone file directly into the Route 53 console, and Route 53 automatically creates the records in your hosted zone. To get started with zone file import, read our walkthrough in the Amazon Route 53 Developer Guide. /route53/faqs/;Can I create multiple hosted zones for the same domain name?;Yes. Creating multiple hosted zones allows you to verify your DNsetting in a “test” environment, and then replicate those settings on a “production” hosted zone. For example, hosted zone Z1234 might be your test version of example.com, hosted on name servers ns-1, ns-2, ns-3, and ns-4. Similarly, hosted zone Z5678 might be your production version of example.com, hosted on ns-5, ns-6, ns-7, and ns-8. Since each hosted zone has a virtual set of name servers associated with that zone, Route 53 will answer DNqueries for example.com differently depending on which name server you send the DNquery to. /route53/faqs/;Does Amazon Route 53 also provide website hosting?;No. Amazon Route 53 is an authoritative DNservice and does not provide website hosting. However, you can use Amazon Simple Storage Service (Amazon S3) to host a static website. 
To host a dynamic website or other web applications, you can use Amazon Elastic Compute Cloud (Amazon EC2), which provides flexibility, control, and significant cost savings over traditional web hosting solutions. Learn more about Amazon EC2 here. For both static and dynamic websites, you can provide low latency delivery to your global end users with Amazon CloudFront. Learn more about Amazon CloudFront here. /route53/faqs/;Which DNS record types does Amazon Route 53 support?;Amazon Route 53 currently supports the following DNrecord types: /route53/faqs/;Can I point my zone apex (example.com versus www.example.com) at my Elastic Load Balancer?;Yes. Amazon Route 53 offers a special type of record called an 'Alias' record that lets you map your zone apex (example.com) DNname to the DNname for your ELB load balancer (such as my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com). IP addresses associated with load balancers can change at any time due to scaling up, scaling down, or software updates. Route 53 responds to each request for an Alias record with one or more IP addresses for the load balancer. Route 53 supports alias records for three types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers. There is no additional charge for queries to Alias records that are mapped to AWS ELB load balancers. These queries are listed as “Intra-AWS-DNS-Queries” on the Amazon Route 53 usage report. /route53/faqs/;Can I point my zone apex (example.com versus www.example.com) at my Amazon API Gateway?;Yes. Amazon Route 53 offers a special type of record called an ‘Alias’ record that lets you map your zone apex (example.com) DNname to your Amazon API Gateway DNname (i.e. api-id.execute-api.region.amazonaws.com/stage). IP addresses associated with Amazon API Gateway can change at any time due to scaling up, scaling down, or software updates. Route 53 responds to each request for an Alias record with one or more IP addresses for the API Gateway. There is no additional charge for queries to Alias records that are mapped to Amazon API Gateways. These queries are listed as “Intra-AWS-DNS-Queries” on the Route 53 usage report. /route53/faqs/;Can I point my zone apex (example.com versus www.example.com) at my Amazon VPC endpoint?;Yes. Amazon Route 53 offers a special type of record called an ‘Alias’ record that lets you map your zone apex (example.com) DNname to your Amazon VPC Endpoint DNname (i.e. vpce-svc-03d5ebb7d9579a2b3.us-east-1.vpce.amazonaws.com). IP addresses associated with Amazon VPC Endpoints can change at any time due to scaling up, scaling down, or software updates. Route 53 responds to each request for an Alias record with one or more IP addresses for the VPC endpoint. There is no additional charge for queries to Alias records that are mapped to Amazon VPC endpoints. These queries are listed as “Intra-AWS-DNS-Queries” on the Amazon Route 53 usage report. /route53/faqs/;Does Amazon Route 53 support Weighted Round Robin (WRR)?;Yes. Weighted Round Robin allows you to assign weights to resource record sets in order to specify the frequency with which different responses are served. You may want to use this capability to do A/B testing, sending a small portion of traffic to a server on which you’ve made a software change. For instance, suppose you have two record sets associated with one DNname—one with weight 3 and one with weight 1. 
In this case, 75% of the time Route 53 will return the record set with weight 3 and 25% of the time Route 53 will return the record set with weight 1. Weights can be any number between 0 and 255. /route53/faqs/;What is Amazon Route 53's Latency Based Routing (LBR) feature?;LBR (Latency Based Routing) is a new feature for Amazon Route 53 that helps you improve your application’s performance for a global audience. You can run applications in multiple AWS regions and Amazon Route 53, using dozens of edge locations worldwide, will route end users to the AWS region that provides the lowest latency. /route53/faqs/;How do I get started using Amazon Route 53's Latency Based Routing (LBR) feature?;You can start using Amazon Route 53’s new LBR feature quickly and easily by using either the AWS Management Console or a simple API. You simply create a record set that includes the IP addresses or ELB names of various AWS endpoints and mark that record set as an LBR-enabled Record Set, much like you mark a record set as a Weighted Record Set. Amazon Route 53 takes care of the rest - determining the best endpoint for each request and routing end users accordingly, much like Amazon CloudFront, Amazon’s global content delivery service, does. You can learn more about how to use Latency Based Routing in the Amazon Route 53 Developer Guide. /route53/faqs/;What is the price for Amazon Route 53's Latency Based Routing (LBR) feature?;Like all AWS services, there are no upfront fees or long term commitments to use Amazon Route 53 and LBR. Customers simply pay for the hosted zones and queries they actually use. Please visit the Amazon Route 53 pricing page for details on pricing for Latency Based Routing queries. /route53/faqs/;What is Amazon Route 53's Geo DNS feature?;Route 53 Geo DNlets you balance load by directing requests to specific endpoints based on the geographic location from which the request originates. Geo DNmakes it possible to customize localized content, such as presenting detail pages in the right language or restricting distribution of content to only the markets you have licensed. Geo DNalso lets you balance load across endpoints in a predictable, easy-to-manage way, ensuring that each end-user location is consistently routed to the same endpoint. Geo DNprovides three levels of geographic granularity: continent, country, and state, and Geo DNalso provides a global record which is served in cases where an end user’s location doesn’t match any of the specific Geo DNrecords you have created. You can also combine Geo DNwith other routing types, such as Latency Based Routing and DNFailover, to enable a variety of low-latency and fault-tolerant architectures. For information on how to configure various routing types, please see the Amazon Route 53 documentation. /route53/faqs/;How do I get started using Amazon Route 53's Geo DNS feature?;You can start using Amazon Route 53’s Geo DNfeature quickly and easily by using either the AWS Management Console or the Route 53 API. You simply create a record set and specify the applicable values for that type of record set, mark that record set as a Geo DNS-enabled Record Set, and select the geographic region (global, continent, country, or state) that you want the record to apply to. You can learn more about how to use Geo DNin the Amazon Route 53 Developer Guide. /route53/faqs/;"When using Geo DNS, do I need a ""global"" record? 
When would Route 53 return this record?";Yes, we strongly recommend that you configure a global record, to ensure that Route 53 can provide a response to DNqueries from all possible locations—even if you have created specific records for each continent, country, or state where you expect your end users will be located. Route 53 will return the value contained in your global record in the following cases: /route53/faqs/;Can I have a Geo DNS record for a continent and different Geo DNS records for countries within that continent? Or a Geo DNS record for a country and Geo DNS records for states within that country?;"Yes, you can have Geo DNrecords for overlapping geographic regions (e.g., a continent and countries within that continent, or a country and states within that country). For each end user’s location, Route 53 will return the most specific Geo DNrecord that includes that location. In other words, for a given end user’s location, Route 53 will first return a state record; if no state record is found, Route 53 will return a country record; if no country record is found, Route 53 will return a continent record; and finally, if no continent record is found, Route 53 will return the global record." /route53/faqs/;What is the price for Route 53's Geo DNS feature?;Like all AWS services, there are no upfront fees or long term commitments to use Amazon Route 53 and Geo DNS. Customers simply pay for the hosted zones and queries they actually use. Please visit the Amazon Route 53 pricing page for details on pricing for Geo DNqueries. /route53/faqs/;What is the difference between Latency Based Routing and Geo DNS?;"Geo DNbases routing decisions on the geographic location of the requests. In some cases, geography is a good proxy for latency; but there are certainly situations where it is not. LatencyBased Routing utilizes latency measurements between viewer networks and AWS datacenters. These measurements are used to determine which endpoint to direct users toward." /route53/faqs/;Does Amazon Route 53 support multiple values in response to DNS queries?;Route 53 now supports multivalue answers in response to DNqueries. While not a substitute for a load balancer, the ability to return multiple health-checkable IP addresses in response to DNqueries is a way to use DNto improve availability and load balancing. If you want to route traffic randomly to multiple resources, such as web servers, you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record. Amazon Route 53 supports up to eight healthy records in response to each DNquery. /route53/faqs/;What is Amazon Route 53 Traffic Flow?;Amazon Route 53 Traffic Flow is an easy-to-use and cost-effective global traffic management service. With Amazon Route 53 Traffic Flow, you can improve the performance and availability of your application for your end users by running multiple endpoints around the world, using Amazon Route 53 Traffic Flow to connect your users to the best endpoint based on latency, geography, and endpoint health. Amazon Route 53 Traffic Flow makes it easy for developers to create policies that route traffic based on the constraints they care most about, including latency, endpoint health, load, geoproximity and geography. Customers can customize these templates or build policies from scratch using a simple visual policy builder in the AWS Management Console. 
/route53/faqs/;What is the difference between a traffic policy and a policy record?;A traffic policy is the set of rules that you define to route end users’ requests to one of your application’s endpoints. You can create a traffic policy using the visual policy builder in the Amazon Route 53 Traffic Flow section of the Amazon Route 53 console. You can also create traffic policies as JSON-formatted text files and upload these policies using the Route 53 API, the AWS CLI, or the various AWS SDKs. /route53/faqs/;Can I use the same policy to manage routing for more than one DNS name?;Yes. You can reuse a policy to manage more than one DNname in one of two ways. First, you can create additional policy records using the policy. Note that there is an additional charge for using this method because you are billed for each policy record that you create. /route53/faqs/;Can I create an Alias record pointing to a DNS name that is managed by a traffic policy?;Yes, it is possible to create an Alias record pointing to a DNname that is being managed by a traffic policy. /route53/faqs/;Is there a charge for traffic policies that don’t have a policy record?;"No. We only charge for policy records; there is no charge for creating the traffic policy itself." /route53/faqs/;How am I billed for using Amazon Route 53 Traffic Flow?;You are billed per policy record. A policy record represents the application of a Traffic Flow policy to a specific DNname (such as www.example.com) in order to use the traffic policy to manage how requests for that DNname are answered. Billing is monthly and is prorated for partial months. There is no charge for traffic policies that are not associated with a DNname via a policy record. For details on pricing, see the Amazon Route 53 pricing page. /route53/faqs/;What are the advanced query types supported in Amazon Route 53 Traffic Flow?;"Traffic Flow supports all Amazon Route 53 DNRouting policies including latency, endpoint health, multivalue; answers, weighted round robin, and geo. In addition to these, Traffic Flow also supports geoproximity based routing with traffic biasing." /route53/faqs/;How does a traffic policy using geoproximity rule route DNS traffic?;When you create a traffic flow policy, you can specify either an AWS region (if you're using AWS resources) or the latitude and longitude for each endpoint. For example, suppose you have EC2 instances in the AWS US East (Ohio) region and in the US West (Oregon) region. When an user in Seattle visits your website, geoproximity routing will route the DNquery to the EC2 instances in the US West (Oregon) region because it's closer geographically. For more information please see the documentation on geoproximity routing. /route53/faqs/;How does the geoproximity bias value of an endpoint affect DNS traffic routing to other endpoints?;Changing the geoproximity bias value on an endpoint either expands or shrinks the area from which Route 53 routes traffic to a resource. The geoproximity bias can't accurately predict the load factor, though, because a small shift in the size of geographic areas might include or exclude major metropolitan areas that generate large numbers of queries. For more information please refer to our documentation. /route53/faqs/;Can I use bias for other Traffic Flow rules?;As of today, bias can only be applied to geoproximity rules. 
/route53/faqs/;What is Private DNS?;Private DNis a Route 53 feature that lets you have authoritative DNwithin your VPCs without exposing your DNrecords (including the name of the resource and its IP address(es) to the Internet. /route53/faqs/;Can I use Amazon Route 53 to manage my organization’s private IP addresses?;Yes, you can manage private IP addresses within Virtual Private Clouds (VPCs) using Amazon Route 53’s Private DNfeature. With Private DNS, you can create a private hosted zone, and Route 53 will only return these records when queried from within the VPC(s) that you have associated with your private hosted zone. For more details, see the Amazon Route 53 Documentation. /route53/faqs/;How do I set up Private DNS?;You can set up Private DNby creating a hosted zone in Route 53, selecting the option to make the hosted zone “private”, and associating the hosted zone with one of your VPCs. After creating the hosted zone, you can associate it with additional VPCs. See the Amazon Route 53 Documentation for full details on how to configure Private DNS. /route53/faqs/;Do I need connectivity to the outside Internet in order to use Private DNS?;You can resolve internal DNnames from resources within your VPC that do not have Internet connectivity. However, to update the configuration for your Private DNhosted zone, you need Internet connectivity to access the Route 53 API endpoint, which is outside of VPC. /route53/faqs/;Can I still use Private DNS if I’m not using VPC?;No. Route 53 Private DNuses VPC to manage visibility and provide DNresolution for private DNhosted zones. To take advantage of Route 53 Private DNS, you must configure a VPC and migrate your resources into it. /route53/faqs/;Can I use the same private Route 53 hosted zone for multiple VPCs?;Yes, you can associate multiple VPCs with a single hosted zone. /route53/faqs/;Can I associate VPCs and private hosted zones that I created under different AWS accounts?;Yes, you can associate VPCs belonging to different accounts with a single hosted zone. You can see more details here. /route53/faqs/;Will Private DNS work across AWS regions?;Yes. DNanswers will be available within every VPC that you associate with the private hosted zone. Note that you will need to ensure that the VPCs in each region have connectivity with each other in order for resources in one region to be able to reach resources in another region. Route 53 Private DNis supported today in the US East (Northern Virginia), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and South America (Sao Paulo) regions. /route53/faqs/;Can I configure DNS Failover for Private DNS hosted zones?;Yes, it is possible to configure DNFailover by associating health checks with resource record sets within a Private DNhosted zone. If your endpoints are within a Virtual Private Cloud (VPC), you have several options to configure health checks against these endpoints. If the endpoints have public IP addresses, then you can create a standard health check against the public IP address of each endpoint. If your endpoints only have private IP addresses, then you cannot create standard health checks against these endpoints. 
However, you can create metric based health checks, which function like standard Amazon Route 53 health checks except that they use an existing Amazon CloudWatch metric as the source of endpoint health information instead of making requests against the endpoint from external locations. /route53/faqs/;Can I use Private DNS to block domains and DNS names that I don’t want to be reached from within my VPC?;Yes, you can block domains and specific DNnames by creating these names in one or more Private DNhosted zones and pointing these names to your own server (or another location that you manage). /route53/faqs/;What is DNS Failover?;DNFailover consists of two components: health checks and failover. Health checks are automated requests sent over the Internet to your application to verify that your application is reachable, available, and functional. You can configure the health checks to be similar to the typical requests made by your users, such as requesting a web page from a specific URL. With DNfailover, Route 53 only returns answers for resources that are healthy and reachable from the outside world, so that your end users are routed away from a failed or unhealthy part of your application. /route53/faqs/;How do I get started with DNS Failover?;Visit the Amazon Route 53 Developer Guide for details on getting started. You can also configure DNFailover from within the Route 53 Console. /route53/faqs/;Does DNS Failover support Elastic Load Balancers (ELBs) as endpoints?;Yes, you can configure DNFailover for Elastic Load Balancers (ELBs). To enable DNFailover for an ELB endpoint, create an Alias record pointing to the ELB and set the “Evaluate Target Health” parameter to true. Route 53 creates and manages the health checks for your ELB automatically. You do not need to create your own Route 53 health check of the ELB. You also do not need to associate your resource record set for the ELB with your own health check, because Route 53 automatically associates it with the health checks that Route 53 manages on your behalf. The ELB health check will also inherit the health of your backend instances behind that ELB. For more details on using DNFailover with ELB endpoints, please consult the Route 53 Developer Guide. /route53/faqs/;Can I configure a backup site to be used only when a health check fails?;Yes, you can use DNFailover to maintain a backup site (for example, a static site running on an Amazon S3 website bucket) and fail over to this site in the event that your primary site becomes unreachable. /route53/faqs/;What DNS record types can I associate with Route 53 health checks?;You can associate any record type supported by Route 53 except SOA and Nrecords. /route53/faqs/;Can I health check an endpoint if I don’t know its IP address?;"Yes. You can configure DNFailover for Elastic Load Balancers and Amazon S3 website buckets via the Amazon Route 53 Console without needing to create a health check of your own. For these endpoint types, Route 53 automatically creates and manages health checks on your behalf which are used when you create an Alias record pointing to the ELB or S3 website bucket and enable the ""Evaluate Target Health"" parameter on the Alias record." /route53/faqs/;One of my endpoints is outside AWS. Can I set up DNS Failover on this endpoint?;Yes. 
Just like you can create a Route 53 resource record that points to an address outside AWS, you can set up health checks for parts of your application running outside AWS, and you can fail over to any endpoint that you choose, regardless of location. For example, you may have a legacy application running in a datacenter outside AWS and a backup instance of that application running within AWS. You can set up health checks of your legacy application running outside AWS, and if the application fails the health checks, you can fail over automatically to the backup instance in AWS. /route53/faqs/;If failover occurs and I have multiple healthy endpoints remaining, will Route 53 consider the load on my healthy endpoints when determining where to send traffic from the failed endpoint?;No, Route 53 does not make routing decisions based on the load or available traffic capacity of your endpoints. You will need to ensure that you have available capacity at your other endpoints, or the ability to scale at those endpoints, in order to handle the traffic that had been flowing to your failed endpoint. /route53/faqs/;How many consecutive health check observations does an endpoint need to fail to be considered “failed”?;The default is a threshold of three health check observations: when an endpoint has failed three consecutive observations, Route 53 will consider it failed. However, Route 53 will continue to perform health check observations on the endpoint and will resume sending traffic to it once it passes three consecutive observations. You can change this threshold to any value between 1 and 10 observations. For more details, see the Amazon Route 53 Developer Guide. /route53/faqs/;When my failed endpoint becomes healthy again, how is the DNS failover reversed?;After a failed endpoint passes the number of consecutive health check observations that you specify when creating the health check (the default threshold is three observations), Route 53 will restore its DNrecords automatically, and traffic to that endpoint will resume with no action required on your part. /route53/faqs/;What is the interval between health check observations?;By default, health check observations are conducted at an interval of 30 seconds. You can optionally select a fast interval of 10 seconds between observations. /route53/faqs/;How much load should I expect a health check to generate on my endpoint (for example, a web server)?;"Each health check is conducted from multiple locations around the world. The number and set of locations is configurable; you can modify the number of locations from which each of your health checks is conducted using the Amazon Route 53 console or API. Each location checks the endpoint independently at the interval that you select: the default interval of 30 seconds, or an optional fast interval of 10 seconds. Based on the current default number of health checking locations, you should expect your endpoint to receive one request every 2-3 seconds on average for standard interval health checks and one or more requests per second for fast-interval health checks." /route53/faqs/;Do Route 53 health checks follow HTTP redirects?;No. Route 53 health checks consider an HTTP 3xx code to be a successful response, so they don’t follow the redirect. This may cause unexpected results for string-matching health checks. The health check searches for the specified string in the body of the redirect. 
Because the health check doesn’t follow the redirect, it never sends a request to the location that the redirect points to and never gets a response from that location. For string matching health checks, we recommend that you avoid pointing the health check at a location that returns an HTTP redirect. /route53/faqs/;What is the sequence of events when failover happens?;In simplest terms, the following events will take place if a health check fails and failover occurs: /route53/faqs/;Do I need to adjust the TTL for my records in order to use DNS Failover?;"The time for which a DNresolver caches a response is set by a value called the time to live (TTL) associated with every record. We recommend a TTL of 60 seconds or less when using DNFailover, to minimize the amount of time it takes for traffic to stop being routed to your failed endpoint. In order to configure DNFailover for ELB and S3 Website endpoints, you need to use Alias records which have fixed TTL of 60 seconds; for these endpoint types, you do not need to adjust TTLs in order to use DNFailover." /route53/faqs/;What happens if all of my endpoints are unhealthy?;Route 53 can only fail over to an endpoint that is healthy. If there are no healthy endpoints remaining in a resource record set, Route 53 will behave as if all health checks are passing. /route53/faqs/;Can I use DNS Failover without using Latency Based Routing (LBR)?;Yes. You can configure DNFailover without using LBR. In particular, you can use DNfailover to configure a simple failover scenario where Route 53 monitors your primary website and fails over to a backup site in the event that your primary site is unavailable. /route53/faqs/;Can I configure a health check on a site accessible only via HTTPS?;Yes. Route 53 supports health checks over HTTPS, HTTP or TCP. /route53/faqs/;Do HTTPS health checks validate the endpoint’s SSL certificate?;No, HTTPS health checks test whether it’s possible to connect with the endpoint over SSL and whether the endpoint returns a valid HTTP response code. However, they do not validate the SSL certificate returned by the endpoint. /route53/faqs/;Do HTTPS health checks support Server Name Indication (SNI)?;Yes, HTTPS health checks support SNI. /route53/faqs/;How can I use health checks to verify that my web server is returning the correct content?;You can use Route 53 health checks to check for the presence of a designated string in a server response by selecting the “Enable String Matching” option. This option can be used to check a web server to verify that the HTML it serves contains an expected string. Or, you can create a dedicated status page and use it to check the health of the server from an internal or operational perspective. For more details, see the Amazon Route 53 Developer Guide. /route53/faqs/;How do I see the status of a health check that I’ve created?;You can view the current status of a health check, as well as details on why it has failed, in the Amazon Route 53 console and via the Route 53 API. /route53/faqs/;How can I measure the performance of my application’s endpoints using Amazon Route 53?;Amazon Route 53 health checks include an optional latency measurement feature which provides data on how long it takes your endpoint to respond to a request. When you enable the latency measurement feature, the Amazon Route 53 health check will generate additional Amazon CloudWatch metrics showing the time required for Amazon Route 53’s health checkers to establish a connection and to begin receiving data. 
Amazon Route 53 provides a separate set of latency metrics for each AWS region where Amazon Route 53 health checks are conducted. /route53/faqs/;How can I be notified if one of my endpoints starts failing its health check?;Because each Route 53 health check publishes its results as a CloudWatch metric, you can configure the full range of CloudWatch notifications and automated actions which can be triggered when the health check value changes beyond a threshold that you specify. First, in either the Route 53 or CloudWatch console, configure a CloudWatch alarm on the health check metric. Then add a notification action and specify the email or SNtopic that you want to publish your notification to. Please consult the Route 53 Developer Guide for full details. /route53/faqs/;I created an alarm for my health check, but I need to re-send the confirmation email for the alarm's SNS topic. How can I re-send this email?;"Confirmation emails can be re-sent from the SNconsole. To find the name of the SNtopic associated with the alarm, click the alarm name within the Route 53 console and looking in the box labeled ""Send notification to.""" /route53/faqs/;I’m using DNS Failover with Elastic Load Balancers (ELBs) as endpoints. How can I see the status of these endpoints?;"The recommended method for setting up DNFailover with ELB endpoints is to use Alias records with the ""Evaluate Target Health"" option. Because you don't create your own health checks for ELB endpoints when using this option, there are no specific CloudWatch metrics generated by Route 53 for these endpoints." /route53/faqs/;For Alias records pointing to Amazon S3 Website buckets, what is being health checked when I set Evaluate Target Health to “true”?;"Amazon Route 53 performs health checks of the Amazon S3 service itself in each AWS region. When you enable Evaluate Target Health on an Alias record pointing to an Amazon S3 Website bucket, Amazon Route 53 will take into account the health of the Amazon S3 service in the AWS region where your bucket is located. Amazon Route 53 does not check whether a specific bucket exists or contains valid website content; Amazon Route 53 will only fail over to another location if the Amazon S3 service itself is unavailable in the AWS region where your bucket is located." /route53/faqs/;What is the cost to use CloudWatch metrics for my Route 53 health checks?;CloudWatch metrics for Route 53 health checks are available free of charge. /route53/faqs/;Can I configure DNS Failover based on internal health metrics, such as CPU load, network, or memory?;Yes. Amazon Route 53’s metric based health checks let you perform DNfailover based on any metric that is available within Amazon CloudWatch, including AWS-provided metrics and custom metrics from your own application. When you create a metric based health check within Amazon Route 53, the health check becomes unhealthy whenever its associated Amazon CloudWatch metric enters an alarm state. /route53/faqs/;My web server is receiving requests from a Route 53 health check that I did not create. How can I stop these requests?;Occasionally, Amazon Route 53 customers create health checks that specify an IP address or domain name that does not belong to them. If your web server is getting unwanted HTTP(s) requests that you have traced to Amazon Route 53 health checks, please provide information on the unwanted health check using this form, and we will work with our customer to fix the problem. 
/route53/faqs/;If I specify a domain name as my health check target, will Amazon Route 53 check over IPv4 or IPv6?;"If you specify a domain name as the endpoint of an Amazon Route 53 health check, Amazon Route 53 will look up the IPv4 address of that domain name and will connect to the endpoint using IPv4. Amazon Route 53 will not attempt to look up the IPv6 address for an endpoint that is specified by domain name. If you want to perform a health check over IPv6 instead of IPv4, select ""IP address"" instead of ""domain name"" as your endpoint type, and enter the IPv6 address in the “IP address” field." /route53/faqs/;Where can I find the IPv6 address ranges for Amazon Route 53’s DNS servers and health checkers?;AWS now publishes its current IP address ranges in JSON format. To view the current ranges, download the .json file using the following link. If you access this file programmatically, ensure that the application downloads the file only after successfully verifying the TLS certificate that is returned by the AWS server. /route53/faqs/;Can I register domain names with Amazon Route 53?;Yes. You can use the AWS Management Console or API to register new domain names with Route 53. You can also request to transfer in existing domain names from other registrars to be managed by Route 53. Domain name registration services are provided under our Domain Name Registration Agreement. /route53/faqs/;What Top Level Domains (“TLDs”) do you offer?;Route 53 offers a wide selection of both generic Top Level Domains (“gTLDs”: for example, .com and .net) and country-code Top Level Domains (“ccTLDs”: for example, .de and .fr). For the complete list, please see the Route 53 Domain Registration Price List. /route53/faqs/;How can I register a domain name with Route 53?;To get started, log into your account and click on “Domains”. Then, click the big blue “Register Domain” button and complete the registration process. /route53/faqs/;How long does it take to register a domain name?;Depending on the TLD you’ve selected, registration can take from a few minutes to several hours. Once the domain is successfully registered, it will show up in your account. /route53/faqs/;How long is my domain name registered for?;The initial registration period is typically one year, although the registries for some top-level domains (TLDs) have longer registration periods. When you register a domain with Amazon Route 53 or you transfer domain registration to Amazon Route 53, we configure the domain to renew automatically. For more information, see Renewing Registration for a Domain in the Amazon Route 53 Developer Guide. /route53/faqs/;What information do I need to provide to register a domain name?;In order to register a domain name, you need to provide contact information for the registrant of the domain, including name, address, phone number, and email address. If the administrative and technical contacts are different, you need to provide that contact information, too. /route53/faqs/;Why do I need to provide personal information to register a domain?;ICANNthe governing body for domain registration, requires that registrars provide contact information, including name, address, and phone number, for every domain name registration, and that registrars make this information publicly available via a Whois database. 
For domain names that you register as an individual (i.e., not as a company or organization), Route 53 provides privacy protection, which hides your personal phone number, email address, and physical address, free of charge. Instead, the Whois contains the registrar’s name and mailing address, along with a registrar-generated forwarding email address that third parties may use if they wish to contact you. /route53/faqs/;Does Route 53 offer privacy protection for domain names I have registered?;Yes, Route 53 provides privacy protection at no additional charge. The privacy protection hides your phone number, email address, and physical address. Your first and last name will be hidden if the TLD registry and registrar allow it. When you enable privacy protection, a Whois query for the domain will contain the registrar’s mailing address in place of your physical address, and the registrar’s name in place of your name (if allowed). Your email address will be a registrar-generated forwarding email address that third parties may use if they wish to contact you. Domain names registered by companies or organizations are eligible for privacy protection if the TLD registry and registrar allow it. /route53/faqs/;Where can I find the requirements for specific TLDs?;For a list of TLDs please see the price list and for the specific registration requirements for each, please see the Amazon Route 53 Developer Guide and our Domain Name Registration Agreement. /route53/faqs/;What name servers are used to register my domain name?;When your domain name is created we automatically associate your domain with four unique Route 53 name servers, known as a delegation set. You can view the delegation set for your domain in the Amazon Route 53 console. They're listed in the hosted zone that we create for you automatically when you register a domain. /route53/faqs/;Will I be charged for my name servers?;You will be charged for the hosted zone that Route 53 creates for your domain name, as well as for the DNqueries against this hosted zone that Route 53 serves on your behalf. If you do not wish to be charged for Route 53’s DNservice, you can delete your Route 53 hosted zone. Please note that some TLDs require you to have valid name servers as part of your domain name registration. For a domain name under one of these TLDs, you will need to procure DNservice from another provider and enter that provider’s name server addresses before you can safely delete your Route 53 hosted zone for that domain name. /route53/faqs/;What is Amazon Registrar, Inc. and what is a registrar of record?;AWS resells domain names that are registered with ICANN-accredited registrars. Amazon Registrar, Inc. is an Amazon company that is accredited by ICANto register domains. The registrar of record is the “Sponsoring Registrar” listed in the WHOIS record for your domain to indicate which registrar your domain is registered with. /route53/faqs/;Who is Gandi?;Amazon is a reseller of the registrar Gandi. As the registrar of record, Gandi is required by ICANto contact the registrant to verify their contact information at the time of initial registration. You MUST verify your contact information if requested by Gandi within the first 15 days of registration in order to prevent your domain name from being suspended. Gandi also sends out reminder notices before the domain comes up for renewal. 
/route53/faqs/;Which top-level domains does Amazon Route 53 register through Amazon Registrar and which ones does it register through Gandi?;See our documentation for a list of the domains that you can currently register using Amazon Route 53. This list includes information about which registrar is the current registrar of record for each TLD that we sell. /route53/faqs/;Can I transfer my .com and .net domain registrations from Gandi to Amazon?;No. We plan to add this functionality soon. /route53/faqs/;What is Whois? Why is my information shown in Whois?;Whois is a publicly available database for domain names that lists the contact information and the name servers that are associated with a domain name. Anyone can access the Whois database by using the WHOIS command, which is widely available. It's included in many operating systems, and it's also available as a web application on many websites. The Internet Corporation for Assigned Names and Numbers (ICANNrequires that all domain names have publicly available contact information in case someone needs to get in contact with the domain name holder. /route53/faqs/;How do I transfer my domain name to Route 53?;To get started, log into your account and click on “Domains”. Then, click the “Transfer Domain” button at the top of the screen and complete the transfer process. Please make sure before you start the transfer process, (1) your domain name is unlocked at your current registrar, (2) you have disabled privacy protection on your domain name (if applicable), and (3) that you have obtained the valid Authorization Code, or “authcode”, from your current registrar which you will need to enter as part of the transfer process. /route53/faqs/;How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?;First, you need to get a list of the DNrecord data for your domain name, generally available in the form of a “zone file” that you can get from your existing DNprovider. With the DNrecord data in hand, you can use Route 53’s Management Console or simple web-services interface to create a hosted zone that can store the DNrecords for your domain name and follow its transfer process, which will include such steps as updating the name servers for your domain name to the ones associated with your hosted zone. To complete the domain name transfer process, contact the registrar with whom you registered your domain name and follow its transfer process, which will include steps such as updating the name servers for your domain name to the ones associated with your hosted zone. As soon as your registrar propagates the new name server delegations, the DNqueries from your end users will start to get answered by the Route 53 DNservers. /route53/faqs/;How do I check on the status of my transfer request?;You can view the status of domain name transfers in the “Alerts” section on the homepage of the Route 53 console. /route53/faqs/;What do I do if my transfer wasn’t successful?;You will need to contact your current registrar in order to determine why your transfer failed. Once they have resolved the issue, you can resubmit your transfer request. /route53/faqs/;How do I transfer my domain name to a different registrar?;In order to move your domain name away from Route 53, you need to initiate a transfer request with your new registrar. They will request the domain name be moved to their management. 
/route53/faqs/;Is there a limit to the number of domains I can manage using Amazon Route 53?;Each new Amazon Route 53 account is limited to a maximum of 50 domains. Complete our request form for a higher limit and we will respond to your request within two business days. /route53/faqs/;Does Amazon Route 53 DNS support DNSSEC?;Yes. You can enable DNSSEC signing for existing and new public hosted zones. /route53/faqs/;How do I transfer a domain registration that has DNSSEC enabled to Amazon Route 53?;See our documentation for a step-by-step guide on transferring your DNSSEC-enabled domain to Amazon Route 53. /route53/faqs/;What is Amazon Route 53 Resolver?;Route 53 Resolver is a regional DNservice that provides recursive DNlookups for names hosted in EC2 as well as public names on the internet. This functionality is available by default in every Amazon Virtual Private Cloud (VPC). For hybrid cloud scenarios you can configure conditional forwarding rules and DNendpoints to enable DNresolution across AWS Direct Connect and AWS Managed VPN. /route53/faqs/;What is recursive DNS?;Amazon Route 53 is both an Authoritative DNservice and Recursive DNservice. Authoritative DNcontains the final answer to a DNquery, generally an IP address. Clients (such as mobile devices, applications running in the cloud, or servers in your datacenter) don’t actually talk directly to authoritative DNservices, except in very rare cases. Instead, clients talk to recursive DNservices (also known as DNresolvers) which find the correct authoritative answer for any DNquery. Route 53 Resolver is a recursive DNservice. /route53/faqs/;What are conditional forwarding rules?;Conditional forwarding rules allow Resolver to forward queries for specified domains to the target IP address of your choice, typically an on-premises DNresolver. Rules are applied at the VPC level and can be managed from one account and shared across multiple accounts. /route53/faqs/;What are DNS endpoints?;A DNendpoint includes one or more elastic network interfaces (ENI) that attach to your Amazon Virtual Private Cloud (VPC). Each ENis assigned an IP address from the subnet space of the VPC where it is located. This IP address can then serve as a forwarding target for on-premises DNservers to forward queries. Endpoints are required both for DNquery traffic that you're forwarding from VPCs to your network and from your network to your VPCs over AWS Direct Connect and Managed VPN. /route53/faqs/;How do I share rules across accounts?;Route 53 Resolver is integrated with AWS Resource Access Manager (RAM) which provides customers with a simple way to share their resources across AWS accounts or within their AWS Organization. Rules can be created in one primary account and then shared across multiple accounts using RAM. Once shared, the rules still need to be applied to VPCs in those accounts before they can take effect. For more information, see the AWS RAM documentation. /route53/faqs/;What happens if I decide to stop sharing rules with other accounts?;Those rules will no longer be usable by the accounts you previously shared them with. This means that if those rules were associated to VPCs in those accounts, they will be disassociated from those VPCs. /route53/faqs/;What regions are available for Route 53 Resolver?;Visit our AWS Region Table to see which regions Route 53 Resolver has launched in. /route53/faqs/;Does regional support for Route 53 Resolver mean that all of Amazon Route 53 is now regional?;No. 
Amazon Route 53 public and private DNS, traffic flow, health checks, and domain name registration are all global services. /route53/faqs/;How do I get started with Route 53 Resolver?;Visit the Amazon Route 53 developer guide for details on getting started. You can also configure Resolver from within the Amazon Route 53 console. /route53/faqs/;What is Amazon Route 53 Resolver DNS Firewall?;Amazon Route 53 Resolver DNS Firewall is a feature that allows you to quickly deploy DNS protections across all of your Amazon Virtual Private Clouds (VPCs). The Route 53 Resolver DNS Firewall allows you to block queries made for known malicious domains (i.e. create “denylists”) and to allow queries for trusted domains (create “allowlists”) when using the Route 53 Resolver for recursive DNS resolution. You can also quickly get started with protections against common DNS threats by using AWS Managed Domain Lists. Amazon Route 53 Resolver DNS Firewall works together with AWS Firewall Manager so you can build policies based on DNS Firewall rules, and then centrally apply those policies across your VPCs and accounts. /route53/faqs/;When should I use Route 53 Resolver DNS Firewall?;If you want to be able to filter the domain names that can be queried over DNS from within your VPCs, then DNS Firewall is for you. It gives you flexibility in choosing the configuration that works best for your organization’s security posture in two ways: (1) If you have strict DNS exfiltration requirements and want to deny all outbound DNS queries for domains that aren’t on your lists of approved domains, you can create such rules for a “walled-garden” approach to DNS security. (2) If your organization prefers to allow all outbound DNS lookups within your accounts by default and only requires the ability to block DNS requests for known malicious domains, you can use DNS Firewall to create denylists, which include all the malicious domain names your organization is aware of. DNS Firewall also comes with AWS Managed Domain Lists that help you protect against suspicious domains and Command-and-Control (C&C) bots. /route53/faqs/;How does Amazon Route 53 Resolver DNS Firewall differ from other firewall offerings on AWS and the AWS Marketplace?;Route 53 Resolver DNS Firewall complements existing network and application security services on AWS by providing control and visibility to Route 53 Resolver DNS traffic (e.g. AmazonProvidedDNS) for your entire VPC. Depending on your use case, you may choose to implement DNS Firewall alongside your existing security controls, such as AWS Network Firewall, Amazon VPC Security Groups, AWS Web Application Firewall rules, or AWS Marketplace appliances. /route53/faqs/;Can Amazon Route 53 Resolver DNS Firewall manage security across multiple AWS accounts?;Yes. Route 53 Resolver DNS Firewall is a regional feature and secures Route 53 Resolver DNS network traffic at an organization and account level. For maintaining policy and governance across multiple accounts, you should use AWS Firewall Manager. /route53/faqs/;How much does Amazon Route 53 Resolver DNS Firewall cost?;Pricing is based on the number of domain names stored within your firewall and the number of DNS queries inspected. Please visit Amazon Route 53 Pricing for more information. /route53/faqs/;Which AWS tools can I use to log and monitor my Amazon Route 53 Resolver DNS Firewall activity?;You can log your DNS Firewall activity to an Amazon S3 bucket or Amazon CloudWatch log groups for further analysis and investigation. 
You can also use Amazon Kinesis Firehose to send your logs to a third-party provider. /route53/faqs/;How do Amazon Route 53 Resolver DNS Firewall and AWS Network Firewall differ in protection against malicious DNS query threats?;Amazon Route 53 Resolver DNS Firewall and AWS Network Firewall both offer protection against outbound DNS query threats but for different deployment models. Amazon Route 53 Resolver DNS Firewall is designed to deliver granular control to block DNS requests to malicious or compromised domains if you are using Amazon Route 53 Resolver for DNS resolution. AWS Network Firewall offers similar capabilities to filter/block outbound DNS queries to known malicious domains if you are using an external DNS service to resolve DNS requests. /api-gateway/faqs/;What is Amazon API Gateway?;Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as applications running on Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS) or AWS Elastic Beanstalk, code running on AWS Lambda, or any web application. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs. For HTTP APIs and REST APIs, you pay only for the API calls you receive and the amount of data transferred out. For WebSocket APIs, you pay only for messages sent and received and for the time a user/device is connected to the WebSocket API. /api-gateway/faqs/;Why use Amazon API Gateway?;Amazon API Gateway provides developers with a simple, flexible, fully managed, pay-as-you-go service that handles all aspects of creating and operating robust APIs for application back ends. With API Gateway, you can launch new services faster and with reduced investment so you can focus on building your core business services. API Gateway was built to help you with several aspects of creating and managing APIs: /api-gateway/faqs/;What API types are supported by Amazon API Gateway?;Amazon API Gateway offers two options to create RESTful APIs, HTTP APIs and REST APIs, as well as an option to create WebSocket APIs. /api-gateway/faqs/;How do I get started with HTTP APIs in API Gateway?;To get started with HTTP APIs, you can use the Amazon API Gateway console, the AWS CLI, AWS SDKs, or AWS CloudFormation. To learn more about getting started with HTTP APIs, visit our documentation. /api-gateway/faqs/;How do I get started with REST APIs in API Gateway?;To get started with REST APIs, you can use the Amazon API Gateway console, the AWS CLI, or AWS SDKs. To learn more about getting started with REST APIs, visit our documentation. /api-gateway/faqs/;When creating RESTful APIs, when should I use HTTP APIs and when should I use REST APIs?;You can build RESTful APIs using both HTTP APIs and REST APIs in Amazon API Gateway. /api-gateway/faqs/;Which features come standard with HTTP APIs from API Gateway?;HTTP APIs come standard with CORS support, OIDC and OAuth2 support for authentication and authorization, and automatic deployments on stages. 
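As an illustration of the DNS Firewall denylist workflow described in the entries above, here is a minimal boto3 (Python) sketch. It assumes the Route 53 Resolver DNS Firewall operations (CreateFirewallDomainList, UpdateFirewallDomains, CreateFirewallRuleGroup, CreateFirewallRule, AssociateFirewallRuleGroup); the domain names, VPC ID, and priorities are hypothetical placeholders.

import uuid
import boto3

resolver = boto3.client("route53resolver")

# 1. Create a domain list and add the domains to block.
domain_list = resolver.create_firewall_domain_list(
    Name="blocked-domains",
    CreatorRequestId=str(uuid.uuid4()),
)
list_id = domain_list["FirewallDomainList"]["Id"]
resolver.update_firewall_domains(
    FirewallDomainListId=list_id,
    Operation="ADD",
    Domains=["malicious.example.", "*.bad.example."],
)

# 2. Create a rule group with a BLOCK rule referencing the domain list.
group = resolver.create_firewall_rule_group(
    Name="deny-known-bad",
    CreatorRequestId=str(uuid.uuid4()),
)
group_id = group["FirewallRuleGroup"]["Id"]
resolver.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=group_id,
    FirewallDomainListId=list_id,
    Name="block-denylist",
    Priority=100,
    Action="BLOCK",
    BlockResponse="NXDOMAIN",
)

# 3. Associate the rule group with a VPC so its Resolver queries are filtered.
resolver.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=group_id,
    VpcId="vpc-0123456789abcdef0",
    Priority=200,
    Name="deny-known-bad-assoc",
)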
/api-gateway/faqs/;Can I import an OpenAPI definition to create an HTTP API?;Yes, you can import an API definition using OpenAPI 3. It will result in the creation of routes, integrations, and API models. For more information on importing OpenAPI definitions, see our documentation. /api-gateway/faqs/;How can I migrate from my current REST API to an HTTP API?;To migrate from your current REST API to an HTTP API in Amazon API Gateway, do the following: /api-gateway/faqs/;How do I know if my current REST API will work as an HTTP API?;First, go to your REST API and export its OpenAPI definition. Then, go to your HTTP API and import the OpenAPI definition from the previous step. While your API might work, you may notice some missing features. To identify any missing features, review the Info, Warning, and Error fields from the Import operation. The AWS CLI will return information about your API within your info and warning fields. For more, read our documentation. /api-gateway/faqs/;How do I get started with WebSocket APIs in Amazon API Gateway?;To get started, you can create a WebSocket API using the AWS Management Console, AWS CLI, or AWS SDKs. You can then set up WebSocket routing to indicate the backend services, such as AWS Lambda, Amazon Kinesis, or your HTTP endpoint, to be invoked based on the message content. Refer to the documentation for getting started with WebSocket APIs in API Gateway. /api-gateway/faqs/;Can I create HTTPS endpoints?;Yes, all of the APIs created with Amazon API Gateway expose HTTPS endpoints only. Amazon API Gateway does not support unencrypted (HTTP) endpoints. By default, Amazon API Gateway assigns an internal domain to the API that automatically uses the Amazon API Gateway certificate. When configuring your APIs to run under a custom domain name, you can provide your own certificate for the domain. /api-gateway/faqs/;What data types can I use with Amazon API Gateway?;APIs built on Amazon API Gateway can accept any payloads sent over HTTPS for HTTP APIs, REST APIs, and WebSocket APIs. Typical data formats include JSON, XML, query string parameters, and request headers. You can declare any content type for your APIs’ responses, and then use the transform templates to change the back-end response into your desired format. /api-gateway/faqs/;With what backends can Amazon API Gateway communicate?;Amazon API Gateway can execute AWS Lambda functions in your account, start AWS Step Functions state machines, or call HTTP endpoints hosted on AWS Elastic Beanstalk, Amazon EC2, and also non-AWS hosted HTTP-based operations that are accessible via the public Internet. API Gateway also allows you to specify a mapping template to generate static content to be returned, helping you mock your APIs before the backend is ready. You can also integrate API Gateway with other AWS services directly – for example, you could expose an API method in API Gateway that sends data directly to Amazon Kinesis. /api-gateway/faqs/;For which client platforms can Amazon API Gateway generate SDKs?;API Gateway generates custom SDKs for mobile app development with Android and iOS (Swift and Objective-C), and for web app development with JavaScript. API Gateway also supports generating SDKs for Ruby and Java. Once an API and its models are defined in API Gateway, you can use the AWS console or the API Gateway APIs to generate and download a client SDK. Client SDKs are only generated for REST APIs in Amazon API Gateway. 
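The REST-to-HTTP API migration path described above (export the OpenAPI definition, then import it) can also be scripted. Below is a minimal boto3 (Python) sketch; the REST API ID and stage name are hypothetical placeholders, and you should review any warnings the import reports for features that did not carry over.

import boto3

apigw = boto3.client("apigateway")      # REST APIs
apigwv2 = boto3.client("apigatewayv2")  # HTTP and WebSocket APIs

# Export the deployed REST API stage as an OpenAPI 3 (oas30) document.
export = apigw.get_export(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    exportType="oas30",
    accepts="application/json",
)
definition = export["body"].read().decode("utf-8")

# Import the definition to create an HTTP API; inspect the warnings returned
# (and the Info/Warning/Error fields noted above) for features that were dropped.
http_api = apigwv2.import_api(Body=definition, FailOnWarnings=False)
print(http_api["ApiId"], http_api.get("Warnings", []))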
/api-gateway/faqs/;In which AWS regions is Amazon API Gateway available?;To see where HTTP APIs, REST APIs, and WebSocket APIs are available, view the AWS region table here. /api-gateway/faqs/;What can I manage through the Amazon API Gateway console?;Through the Amazon API Gateway console, you can define the REST API and its associated resources and methods, manage the API lifecycle, generate client SDKs and view API metrics. You can also use the API Gateway console to define your APIs’ usage plans, manage developers’ API keys, and configure throttling and quota limits. All of the same actions are available through the API Gateway APIs. /api-gateway/faqs/;What is a resource?;A resource is a typed object that is part of your API’s domain. Each resource may have an associated data model, relationships to other resources, and can respond to different methods. You can also define resources as variables to intercept requests to multiple child resources. /api-gateway/faqs/;What is a method?;Each resource within a REST API can support one or more of the standard HTTP methods. You define which verbs should be supported for each resource (GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS) and their implementation. For example, a GET to the cars resource should return a list of cars. To connect all methods within a resource to a single backend endpoint, API Gateway also supports a special “ANY” method. /api-gateway/faqs/;What is the Amazon API Gateway API lifecycle?;With Amazon API Gateway, each REST API can have multiple stages. Stages are meant to help with the development lifecycle of an API -- for example, after you’ve built your APIs, you can deploy them to a development stage, and when you are ready for production, you can deploy them to a production stage. /api-gateway/faqs/;What is a stage?;In Amazon API Gateway, stages are similar to tags. They define the path through which the deployment is accessible. For example, you can define a development stage and deploy your cars API to it. The resource will be accessible at https://www.myapi.com/dev/cars. You can also set up custom domain names to point directly to a stage, so that you don’t have to use the additional path parameter. For example, if you pointed myapi.com directly to the development stage, you could access your cars resource at https://www.myapi.com/cars. Stages can be configured using variables that can be accessed from your API configuration or mapping templates. /api-gateway/faqs/;What are stage variables?;Stage variables let you define key/value pairs of configuration values associated with a stage. These values, similar to environment variables, can be used in your API configuration. For example, you could define the HTTP endpoint for your method integration as a stage variable, and use the variable in your API configuration instead of hardcoding the endpoint – this allows you to use a different endpoint for each stage (e.g. dev, beta, prod) with the same API configuration. Stage variables are also accessible in the mapping templates and can be used to pass configuration parameters to your Lambda or HTTP backend. /api-gateway/faqs/;What is a Resource Policy?;A Resource Policy is a JSON policy document that you attach to an API to control whether a specified principal (typically an IAM user or role) can invoke the API. You can use a Resource Policy to enable users from a different AWS account to securely access your API or to allow the API to be invoked only from specified source IP address ranges or CIDR blocks. 
Resource Policies can be used with REST APIs in Amazon API Gateway. /api-gateway/faqs/;What if I mistakenly deployed to a stage?;Amazon API Gateway saves the history of your deployments. At any point, using the Amazon API Gateway APIs or the console, you can roll back a stage to a previous deployment. /api-gateway/faqs/;Can I use my Swagger API definitions?;Yes. You can use our open source Swagger importer tool to import your Swagger API definitions into Amazon API Gateway. With the Swagger importer tool you can create and deploy new APIs as well as update existing ones. /api-gateway/faqs/;How do I monetize my APIs on API Gateway?;You can monetize your APIs on API Gateway by publishing them as products in AWS Marketplace. You will first need to register as a seller in AWS Marketplace, and submit your usage plans on API Gateway as products. Read here to learn more about API Monetization. /api-gateway/faqs/;How do I document my API on Amazon API Gateway?;API Gateway offers the ability to create, update, and delete documentation associated with each portion of your API, such as methods and resources. You can access documentation-related APIs through the AWS SDKs, CLI, via RESTful calls, or by editing the documentation strings directly in the API Gateway console. Documentation can also be imported as a Swagger file, either as part of the API or separately, allowing you to add or update the documentation without disturbing the API definition. API Gateway conforms to the Open API specification for documentation imported from, or exported to, Swagger files. Documentation is supported for REST APIs in API Gateway. /api-gateway/faqs/;How can I avoid creating redundant copies of error messages and other documentation that recurs frequently in my API?;In addition to offering standards-conformant API documentation support, API Gateway additionally supports documentation inheritance, making it simple to define a documentation string once and then use it in multiple places. Inheritance simplifies the process of defining API documentation, and can be converted to the standard representation when exporting the API as a Swagger file. /api-gateway/faqs/;Can I restrict access to private APIs to a specific Amazon VPC or VPC endpoint?;Yes, you can apply a Resource Policy to an API to restrict access to a specific Amazon VPC or VPC endpoint. You can also give an Amazon VPC or VPC endpoint from a different account access to the Private API using a Resource Policy. /api-gateway/faqs/;How do I authorize access to my APIs?;With Amazon API Gateway, you can optionally set your API methods to require authorization. When setting up a method to require authorization you can leverage AWS Signature Version 4 or Lambda authorizers to support your own bearer token auth strategy. /api-gateway/faqs/;How does AWS Signature Version 4 work?;You can use AWS credentials - access and secret keys - to sign requests to your service and authorize access like other AWS services. The signing of an Amazon API Gateway API request is managed by the custom API Gateway SDK generated for your service. You can retrieve temporary credentials associated with a role in your AWS account using Amazon Cognito. /api-gateway/faqs/;What is a Lambda authorizer?;Lambda authorizers are AWS Lambda functions. With custom request authorizers, you will be able to authorize access to APIs using a bearer token auth strategy such as OAuth. 
When an API is called, API Gateway checks whether a Lambda authorizer is configured; if so, API Gateway calls the Lambda function with the incoming authorization token. You can use Lambda to implement various authorization strategies (e.g. JWT verification, OAuth provider callout) that return IAM policies which are used to authorize the request. If the policy returned by the authorizer is valid, API Gateway will cache the policy associated with the incoming token for up to 1 hour. /api-gateway/faqs/;Can Amazon API Gateway generate API keys for distribution to third-party developers?;Yes. API Gateway can generate API keys and associate them with a usage plan. Calls received from each API key are monitored and included in the Amazon CloudWatch Logs you can enable for each stage. However, we do not recommend you use API keys for authorization. You should use API keys to monitor usage by third-party developers and leverage a stronger mechanism for authorization, such as signed API calls or OAuth. /api-gateway/faqs/;How can I address or prevent API threats or abuse?;API Gateway supports throttling settings for each method or route in your APIs. You can set a standard rate limit and a burst rate limit per second for each method in your REST APIs and each route in WebSocket APIs. Further, API Gateway automatically protects your backend systems from distributed denial-of-service (DDoS) attacks, whether attacked with counterfeit requests (Layer 7) or SYN floods (Layer 3). /api-gateway/faqs/;Can I verify that it is API Gateway calling my backend?;Yes. Amazon API Gateway can generate a client-side SSL certificate and make the public key of that certificate available to you. Calls to your backend can be made with the generated certificate, and you can verify calls originating from Amazon API Gateway using the public key of the certificate. /api-gateway/faqs/;Can I use AWS CloudTrail with Amazon API Gateway?;Yes. Amazon API Gateway is integrated with AWS CloudTrail to give you a full auditable history of the changes to your REST APIs. All API calls made to the Amazon API Gateway APIs to create, modify, delete, or deploy REST APIs are logged to CloudTrail in your AWS account. /api-gateway/faqs/;Can I configure my REST APIs in API Gateway to use TLS 1.1 or higher?;If you’re using REST APIs, you can set up a CloudFront distribution with a custom SSL certificate in your account and use it with Regional APIs in API Gateway. You can then configure the Security Policy for the CloudFront distribution with TLS 1.1 or higher based on your security and compliance requirements. /api-gateway/faqs/;Can I set up alarms on the Amazon API Gateway metrics?;Yes, Amazon API Gateway sends logging information and metrics to Amazon CloudWatch. You can utilize the Amazon CloudWatch console to set up custom alarms. /api-gateway/faqs/;How can I set up metrics for Amazon API Gateway?;By default, Amazon API Gateway monitors traffic at a REST API level. Optionally, you can enable detailed metrics for each method in your REST API from the deployment configuration APIs or console screen. Detailed metrics are also logged to Amazon CloudWatch and will be charged at the CloudWatch rates. 
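To make the Lambda authorizer flow described above concrete, here is a minimal sketch of a token-based authorizer function in Python. The token check is a deliberately trivial, hypothetical stand-in for real JWT or OAuth validation; the response shape (principalId plus an IAM policy scoped to the method ARN) is what API Gateway expects from a REST API token authorizer.

def lambda_handler(event, context):
    """Token-based REST API Lambda authorizer."""
    token = event.get("authorizationToken", "")
    method_arn = event["methodArn"]

    # Replace this stub with real verification (JWT validation, a call out to
    # your OAuth provider, a lookup in your own identity store, and so on).
    effect = "Allow" if token == "allow-me" else "Deny"

    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": method_arn,
            }],
        },
        # Optional key/value pairs made available to your backend integration.
        "context": {"tokenSource": "Authorization header"},
    }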
/api-gateway/faqs/;Can I determine which version of the API my customers are using?;Yes. Metric details are specified by REST API and stage. Additionally, you can enable metrics for each method in your REST API. /api-gateway/faqs/;Does Amazon API Gateway provide logging support?;Yes. Amazon API Gateway integrates with Amazon CloudWatch Logs. You can optionally enable logging for each stage in your API. For each method in your REST APIs, you can set the verbosity of the logging, and whether full request and response data should be logged. /api-gateway/faqs/;How quickly are logs available?;Logs, alarms, error rates, and other metrics are stored in Amazon CloudWatch and are available in near real time. /api-gateway/faqs/;How can I protect my backend systems and applications from traffic spikes?;Amazon API Gateway provides throttling at multiple levels including global and by service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs (except JavaScript) generated by Amazon API Gateway retry calls automatically when they receive this response. /api-gateway/faqs/;Can I throttle individual developers calling my APIs?;Yes. With usage plans you can set throttling limits for individual API keys. /api-gateway/faqs/;How does throttling help me?;Throttling ensures that API traffic is controlled to help your backend services maintain performance and availability. /api-gateway/faqs/;At which levels can Amazon API Gateway throttle inbound API traffic?;Throttling rate limits can be set at the method level. You can edit the throttling limits in your method settings through the Amazon API Gateway APIs or in the Amazon API Gateway console. /api-gateway/faqs/;How are throttling rules applied?;API Gateway throttling-related settings are applied in the following order: 1) per-client per-method throttling limits that you set for an API stage in a usage plan, 2) per-client throttling limits that you set in a usage plan, 3) default per-method limits and individual per-method limits that you set in API stage settings, 4) account-level throttling per region. /api-gateway/faqs/;Does Amazon API Gateway provide API result caching?;Yes. You can add caching to API calls by provisioning an API Gateway cache and specifying its size in gigabytes. The cache is provisioned for a specific stage of your APIs. This improves performance and reduces the traffic sent to your back end. Cache settings allow you to control the way the cache key is built and the time-to-live (TTL) of the data stored for each method. API Gateway also exposes management APIs that help you invalidate the cache for each stage. Caching is available for REST APIs in API Gateway. /api-gateway/faqs/;What happens if a large number of end users try to invoke my API simultaneously?;If caching is not enabled and throttling limits have not been applied, then all requests will pass through to your backend service until the account-level throttling limits are reached. If throttling limits are in place, then Amazon API Gateway will shed the necessary amount of requests and send only the defined limit to your back-end service. 
If a cache is configured, then Amazon API Gateway will return a cached response for duplicate requests for a customizable time, but only if under the configured throttling limits. This balance between the backend and client ensures optimal performance of the APIs for the applications that they support. Requests that are throttled will be automatically retried by the client-side SDKs generated by Amazon API Gateway. By default, Amazon API Gateway does not set any cache on your API methods. /api-gateway/faqs/;How do APIs scale?;Amazon API Gateway acts as a proxy to the backend operations that you have configured. Amazon API Gateway will automatically scale to handle the amount of traffic your API receives. Amazon API Gateway does not arbitrarily limit or throttle invocations to your backend operations and all requests that are not intercepted by throttling and caching settings in the Amazon API Gateway console are sent to your backend operations. /api-gateway/faqs/;How am I charged for using Amazon API Gateway?;Amazon API Gateway bills per million API calls, plus the cost of data transfer out, in gigabytes. If you choose to provision a cache for your API, hourly rates apply. For WebSocket APIs, API Gateway bills based on messages sent and received and the number of minutes a client is connected to the API. Please see the API Gateway pricing page for details on API calls, data transfer, and caching costs per region. /api-gateway/faqs/;Who pays for Amazon API Gateway API calls generated by third-party developers?;The API owner is charged for the calls to their APIs on API Gateway. /api-gateway/faqs/;If an API response is served by cached data, is it still considered an API call for billing purposes?;Yes. API calls are counted equally for billing purposes whether the response is handled by your backend operations or the Amazon API Gateway caching operation. /api-gateway/faqs/;What is WebSocket routing in Amazon API Gateway?;WebSocket routing in Amazon API Gateway is used to correctly route messages to a specific integration. You specify a routing key and integration backend to invoke when defining your WebSocket API. The routing key is an attribute in the message body. A default integration can also be set for non-matching routing keys. Refer to the documentation to learn more about routing. /api-gateway/faqs/;How can I send messages to connected clients from the backend service?;When a new client is connected to the WebSocket API, a unique URL, called the callback URL, is created for that client. You can use this callback URL to send messages to the client from the backend service. /api-gateway/faqs/;How can I authorize access to my WebSocket API in Amazon API Gateway?;With Amazon API Gateway, you can either use IAM roles and policies or AWS Lambda authorizers to authorize access to your WebSocket APIs. /api-gateway/faqs/;How does my backend service know when a client is connected or disconnected from the WebSocket connection in Amazon API Gateway?;When a client is connected or disconnected, a message will be sent from the Amazon API Gateway service to your backend AWS Lambda function or your HTTP endpoint using the $connect and $disconnect routes. You can take appropriate actions, like adding or removing the client from the list of connected users. /api-gateway/faqs/;How can my backend service identify if the client is still connected to the WebSocket connection?;You can use the callback URL GET method on the connection to identify if the client is connected to the WebSocket connection. 
Refer to documentation about using a callback URL. /api-gateway/faqs/;Can I disconnect a client from my backend service?;Yes, you can disconnect the connected client from your backend service using the callback URL. /api-gateway/faqs/;What is the maximum message size supported for WebSocket APIs?;The maximum supported message size is 128 KB. Refer to the documentation for other limits around WebSocket APIs. /api-gateway/faqs/;How am I charged for using WebSocket APIs on Amazon API Gateway?;You will be charged based on two metrics: connection minutes and messages. /api-gateway/faqs/;If messages on the WebSocket connection fail authentication or authorization, do they still count toward my API usage bill?;No, if messages on the WebSocket connection fail authentication or authorization, they do not count toward your API usage bill. /app-mesh/faqs/;What is AWS App Mesh?;AWS App Mesh makes it easy to monitor, control, and debug the communications between services. App Mesh uses Envoy, an open source service mesh proxy that is deployed alongside your microservice containers. App Mesh is integrated with AWS services for monitoring and tracing, and it works with many popular third-party tools. App Mesh can be used with microservice containers managed by Amazon ECS, Amazon EKS, AWS Fargate, Kubernetes running on AWS, and services running on Amazon EC2. /app-mesh/faqs/;Why should I use App Mesh?;App Mesh makes it easier to get visibility, security, and control over the communications between your services without writing new code or running additional AWS infrastructure. Using App Mesh, you can standardize how services communicate, implement rules for communications between services, and capture metrics, logs, and traces directly into AWS services and third-party tools of your choice. /app-mesh/faqs/;How does App Mesh work?;App Mesh sets up and manages a service mesh for your services. To do this, you run the open source Envoy proxy alongside each service, and App Mesh configures the proxy to handle all communications into and out of each container. App Mesh collects metrics, such as error rates and connections per second, which can be exported to Amazon CloudWatch using a statsd collector. Using App Mesh APIs, you can route traffic based on path or weights to specific service versions. /app-mesh/faqs/;What is a service mesh?;A service mesh is a new software layer that handles all of the communications between services. It provides new features to connect and manage connections between services and is independent of each service’s code, allowing it to work across network boundaries and with multiple service management systems. /app-mesh/faqs/;How does App Mesh work with Amazon Elastic Container Services (ECS) and AWS Fargate?;App Mesh provides new communication, observation, and management capabilities to applications managed by Amazon ECS and AWS Fargate. You add the Envoy proxy image to the task definition. App Mesh manages Envoy configuration to provide service mesh capabilities. App Mesh exports metrics, logs, and traces to the endpoints specified in the Envoy bootstrap configuration provided. App Mesh provides an API to configure traffic routes and other controls between microservices that are mesh-enabled. /app-mesh/faqs/;How does App Mesh work with Amazon Elastic Container Service for Kubernetes (EKS)?;Use the open source AWS App Mesh controller and mutating webhook admission controller. 
These controllers connect your Kubernetes services to App Mesh and ensure that the Envoy proxy is injected into your pods. App Mesh exports metrics, logs, and traces to the endpoints specified in the Envoy bootstrap configuration provided. App Mesh provides an API to configure traffic routes and other controls between microservices that are mesh-enabled. /app-mesh/faqs/;How does App Mesh work with services running on Amazon EC2?;Run the Envoy proxy as a container or process on your EC2 instance. Use the AWS-provided proxy init container, or run your own script, to redirect network traffic on the instance through the proxy. App Mesh manages Envoy configuration to provide service mesh capabilities. App Mesh exports metrics, logs, and traces to the endpoints specified in the Envoy bootstrap configuration provided. App Mesh provides an API to configure traffic routes and other controls between microservices that are mesh-enabled. /app-mesh/faqs/;Why should I use App Mesh instead of AWS Elastic Load Balancers?;We recommend using AWS Elastic Load Balancing to handle all internet traffic and traffic from clients that are not within your trust boundary. For internal services that connect to other services within an AWS region, App Mesh provides flexibility, consistency, and a greater degree of control and monitoring for services communications. /app-mesh/faqs/;What type of monitoring capabilities does App Mesh provide?;With App Mesh, you get consistent metrics and logs for every hop between services. These logs and metrics include metadata such as service names and request identifiers. With these, you can aggregate, filter, and see graphical dashboards of service-to-service communications using tools like Amazon CloudWatch. Common dashboards might include error rates and error codes between your service and dependent services. App Mesh automatically collects traces for each service and makes it easy to visualize a service map with details of all service API calls. These capabilities make it easier to debug and identify the root cause of communication issues between your microservices. /app-mesh/faqs/;How does App Mesh support application identity?;Mutual TLS (mTLS) provides a way to enforce application identity at the transport layer and to allow or deny client connections based on the certificate they present. AWS App Mesh has support for enforcing client application identity with X.509 certificates, called mutual transport layer security, or mTLS. In order to configure mTLS, you need to set up the client to provide a certificate to the server service during request initiation, as part of the TLS session negotiation. This certificate is used by the server to identify and authenticate the client, checking that the certificate is valid and was issued by a trusted certificate authority (CA), and identifying who the client is by using the Subject Alternative Name (SAN) on the certificate. /app-mesh/faqs/;Why should I use mTLS with AWS App Mesh?;Microservices also have particular security needs, including end-to-end traffic encryption and flexible service access control, which can be addressed with a service mesh. The AWS App Mesh mTLS implementation enables your client applications to verify the servers and provides traffic encryption, and mutual TLS offers peer authentication that is used for service-to-service authentication. It adds a layer of security over TLS that allows your services to verify the client making the connection. 
Breaking down a monolithic application into microservices and running them in a service mesh offers various benefits, including better visibility and smart traffic routing. /directconnect/faqs/;;AWS Direct Connect is a networking service that provides an alternative to using the internet to connect to AWS. Using AWS Direct Connect, data that would have previously been transported over the internet is delivered through a private network connection between your facilities and AWS. In many circumstances, private network connections can reduce costs, increase bandwidth, and provide a more consistent network experience than internet-based connections. All AWS services, including Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Simple Storage Service (S3), and Amazon DynamoDB can be used with AWS Direct Connect. /directconnect/faqs/;;A complete list of AWS Direct Connect locations is available on the AWS Direct Connect locations page. When using AWS Direct Connect, you can connect to VPCs deployed in any AWS Region and Availability Zone. You can also connect to AWS Local Zones. /directconnect/faqs/;;A dedicated connection is made through a 1 Gbps, 10 Gbps, or 100 Gbps Ethernet port dedicated to a single customer. Hosted connections are sourced from an AWS Direct Connect Partner that has a network link between themselves and AWS. /directconnect/faqs/;;Use the AWS Direct Connect tab on the AWS Management Console to create a new connection. When requesting a connection, you will be asked to select an AWS Direct Connect location, the number of ports, and the port speed. You can work with an AWS Direct Connect Partner if you need assistance extending your office or data center network to an AWS Direct Connect location. /directconnect/faqs/;;Yes. AWS Direct Connect Partners can help you extend your preexisting data center or office network to an AWS Direct Connect location. Please see AWS Direct Connect Partners for more information. With AWS Direct Connect Gateway, you can access any AWS Region from any AWS Direct Connect location (excluding China). /directconnect/faqs/;;No, you need to make connections between the local service providers used at your on-premises locations, or work with an AWS Direct Connect Delivery Partner, to connect to AWS Direct Connect locations. /directconnect/faqs/;;After you have downloaded your Letter of Authorization and Connecting Facility Assignment (LOA-CFA), you must complete your cross-network connection. If you already have equipment located in an AWS Direct Connect location, contact the appropriate provider to complete the cross connect. For specific instructions for each provider and cross connect pricing, refer to the AWS Direct Connect documentation: Requesting cross connects at AWS Direct Connect locations. /directconnect/faqs/;;An AWS Direct Connect gateway is a grouping of virtual private gateways (VGWs) and private virtual interfaces (VIFs). An AWS Direct Connect gateway is a globally available resource. You can create the AWS Direct Connect gateway in any Region and access it from all other Regions. /directconnect/faqs/;;A virtual interface (VIF) is necessary to access AWS services, and is either public or private. A public virtual interface enables access to public services, such as Amazon S3. A private virtual interface enables access to your VPC. For more information, see AWS Direct Connect virtual interfaces. 
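As a sketch of the Direct Connect gateway and private virtual interface concepts above, the following boto3 (Python) example creates a gateway and attaches a private VIF to an existing dedicated connection. The connection ID, VLAN, ASN, and peer addresses are hypothetical placeholders; your actual values come from your network design and the connection you ordered.

import boto3

dx = boto3.client("directconnect")

# A Direct Connect gateway is a global resource that private or transit VIFs attach to.
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="example-dx-gateway",
    amazonSideAsn=64512,
)
gateway_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# Create a private VIF on an existing dedicated connection and attach it to the gateway.
vif = dx.create_private_virtual_interface(
    connectionId="dxcon-fexample1",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "example-private-vif",
        "vlan": 101,
        "asn": 65000,                       # your on-premises BGP ASN
        "authKey": "example-bgp-auth-key",
        "amazonAddress": "169.254.0.1/30",
        "customerAddress": "169.254.0.2/30",
        "addressFamily": "ipv4",
        "directConnectGatewayId": gateway_id,
    },
)
print(vif["virtualInterfaceId"], vif["virtualInterfaceState"])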
/directconnect/faqs/;;A virtual private gateway (VGW) is part of a VPC that provides edge routing for AWS managed VPN connections and AWS Direct Connect connections. You associate an AWS Direct Connect gateway with the virtual private gateway for the VPC. For more details, refer to this documentation. /directconnect/faqs/;;A link aggregation group (LAG) is a logical interface that uses the link aggregation control protocol (LACP) to aggregate multiple dedicated connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. LAGs streamline configuration because the LAG configuration applies to all connections in the group. For details on creating, updating, associating/disassociating, and deleting a LAG, refer to the AWS Direct Connect documentation: Link aggregation groups - AWS Direct Connect. /directconnect/faqs/;;The AWS Direct Connect Resiliency Toolkit provides a connection wizard that helps you choose between multiple resiliency models. These models help you to determine—then place an order for—the number of dedicated connections to achieve your SLA objective. You select a resiliency model, and then the AWS Direct Connect Resiliency Toolkit guides you through the dedicated connection ordering process. The resiliency models are designed to ensure that you have the appropriate number of dedicated connections in multiple locations. /directconnect/faqs/;;The AWS Direct Connect Failover Testing feature allows you to test the resiliency of your AWS Direct Connect connection by disabling the Border Gateway Protocol session between your on-premises networks and AWS. You can use the AWS Management Console or the AWS Direct Connect application programming interface (API). Please refer to this document to learn more about this feature. It is supported in all commercial AWS Regions (except AWS GovCloud (US)). /directconnect/faqs/;;The location preference communities for private and transit virtual interfaces let you influence the return path for traffic sourced from your VPC(s). /directconnect/faqs/;;A configurable private autonomous system number (ASN) makes it possible to set the ASN on the AWS side of the Border Gateway Protocol (BGP) session for private or transit VIFs on any newly created AWS Direct Connect gateway. This is available in all commercial AWS Regions (except the AWS China Region) and AWS GovCloud (US). /directconnect/faqs/;;A transit virtual interface is a type of virtual interface you can create on any AWS Direct Connect connection. A transit virtual interface can only be attached to an AWS Direct Connect gateway. You can use an AWS Direct Connect gateway attached with one or more transit virtual interfaces to interface with up to three AWS Transit Gateways in any supported AWS Regions. Similar to the private virtual interface, you can establish one IPv4 BGP session and one IPv6 BGP session over a single transit virtual interface. /directconnect/faqs/;;Multi-account support for AWS Direct Connect gateway is a feature that allows you to associate up to 10 Amazon Virtual Private Clouds (Amazon VPCs) or up to three AWS Transit Gateways from multiple AWS accounts with an AWS Direct Connect gateway. /directconnect/faqs/;;802.1AE MAC Security (MACsec) is an IEEE standard that provides data confidentiality, data integrity, and data origin authenticity. 
You can use AWS Direct Connect connections that support MACsec to encrypt your data from your on-premises network or collocated device to your chosen AWS Direct Connect point of presence. /directconnect/faqs/;;When the AWS Direct Connect SiteLink feature is enabled at two or more AWS Direct Connect locations, you can send data between those locations, bypassing AWS Regions. AWS Direct Connect SiteLink works with both hosted and dedicated connections. /directconnect/faqs/;;No, you need to make connections between the local service providers used at your on-premises locations to connect to AWS. /directconnect/faqs/;Does having a link aggregation group (LAG) make my connection more resilient?;No, a LAG doesn't make your connectivity to AWS more resilient. If you have more than one link in your LAG, and if your minimum links are set to one, your LAG will let you protect against single link failure. However, it will not protect against a single device failure at AWS where your LAG is terminating. /directconnect/faqs/;How do I order connections to AWS Direct Connect for high availability?;We recommend following the resiliency best practices detailed in the AWS Direct Connect Resiliency Recommendations page to determine the best resiliency model for your use case. After selecting a resiliency model, the AWS Direct Connect Resiliency Toolkit can guide you through the process of ordering redundant connections. AWS also encourages you to use the Resiliency Toolkit failover test feature to test your configurations before going live. /directconnect/faqs/;Does AWS Direct Connect offer a Service Level Agreement (SLA)?;Yes, AWS Direct Connect offers an SLA. Details are here. /directconnect/faqs/;When using the failover test feature, can I configure the duration of the test or cancel the test while it's running?;Yes, you can configure the duration of the test by setting the minimum and maximum duration for the test to be 1 minute and 180 minutes, respectively. You can cancel the test while it is running. When the test is cancelled, we restore the Border Gateway Protocol session, and your test history reflects that the test was canceled. /directconnect/faqs/;Can I see my past test history when using the failover test feature? How long do you keep the test history?;Yes, you can review your test history using the AWS Management Console or through AWS CloudTrail. We preserve your test history for 365 days. If you delete the virtual interface, your test history is also deleted. /directconnect/faqs/;;After the configured test duration, we restore the Border Gateway Protocol session between your on-premises networks and AWS using the Border Gateway Protocol session parameters negotiated before starting the test. /directconnect/faqs/;Who can initiate a failover test using the AWS Direct Connect Resiliency Toolkit?;Only the owner of the AWS account that includes the virtual interface can initiate the test. /directconnect/faqs/;Can I delete the virtual interface while the failover test for the same virtual interface is in progress?;Yes, you can delete the virtual interface while a test for the same virtual interface is in progress. /directconnect/faqs/;Can I run failover tests for any type of virtual interface?;Yes, you can run tests for the Border Gateway Protocol session(s) established using any type of virtual interface. 
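A minimal boto3 (Python) sketch of the failover test workflow described above follows, assuming the StartBgpFailoverTest and ListVirtualInterfaceTestHistory operations; the virtual interface ID is a hypothetical placeholder and the duration must fall within the 1 to 180 minute window noted above.

import boto3

dx = boto3.client("directconnect")

# Bring the BGP session(s) on the virtual interface down for 10 minutes to
# verify that traffic fails over to your redundant path.
test = dx.start_bgp_failover_test(
    virtualInterfaceId="dxvif-fexample1",
    testDurationInMinutes=10,
)
print(test["virtualInterfaceTest"]["testId"])

# Test history is retained for 365 days and can be reviewed later.
history = dx.list_virtual_interface_test_history(
    virtualInterfaceId="dxvif-fexample1",
)
for item in history["virtualInterfaceTestHistory"]:
    print(item["testId"], item["status"])

# A running test can be cancelled early with:
# dx.stop_bgp_failover_test(virtualInterfaceId="dxvif-fexample1")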
/directconnect/faqs/;If I have established IPv4 and IPv6 Border Gateway Protocol sessions, can I run this test for each Border Gateway Protocol session?;Yes, you can initiate a test for one or both Border Gateway Protocol sessions. /directconnect/faqs/;Where and how do I configure AWS Direct Connect SiteLink?;You enable and disable AWS Direct Connect SiteLink when configuring virtual interfaces (VIFs). To establish a connection using AWS Direct Connect SiteLink, you must enable AWS Direct Connect SiteLink at two or more VIFs at two or more AWS Direct Connect locations. You must attach all locations to the same AWS Direct Connect gateway. You can configure your VIF to enable or disable AWS Direct Connect SiteLink using the AWS Management Console, AWS Command Line Interface, or APIs. AWS Direct Connect SiteLink is integrated with AWS CloudWatch so you can monitor traffic sent over this link. /directconnect/faqs/;Does AWS Direct Connect SiteLink require an AWS Direct Connect gateway connection?;Yes. To use AWS Direct Connect SiteLink, you must connect AWS Direct Connect SiteLink-enabled virtual interfaces (VIFs) to an AWS Direct Connect gateway. The VIF type can be private or transit. /directconnect/faqs/;How can I tell what I’m being charged for AWS Direct Connect SiteLink?;On billing statements, charges related to AWS Direct Connect SiteLink will appear on a separate line from other AWS Direct Connect-related charges. /directconnect/faqs/;What does a simple two-site network architecture look like with AWS Direct Connect SiteLink?;To build a simple network, configure a private virtual interface (VIF) and enable AWS Direct Connect SiteLink on that VIF at each site. Then create an AWS Direct Connect gateway and associate each of your AWS Direct Connect SiteLink-enabled VIFs with it in order to create a network. /directconnect/faqs/;How do I implement a hub-and-spoke architecture with AWS Direct Connect SiteLink?;To create a hub-and-spoke architecture, create an AWS Direct Connect gateway and associate it with all AWS Direct Connect SiteLink-enabled private VIFs. /directconnect/faqs/;How do I create a segmented network architecture with AWS Direct Connect SiteLink?;Bring up multiple AWS Direct Connect gateways, and associate subsets of AWS Direct Connect SiteLink-enabled private virtual interfaces (VIFs) with each. AWS Direct Connect SiteLink-enabled VIFs on an AWS Direct Connect gateway cannot communicate with AWS Direct Connect SiteLink-enabled VIFs on another AWS Direct Connect gateway, creating a segmented network. /directconnect/faqs/;What types of virtual interfaces (VIFs) are supported by AWS Direct Connect SiteLink?;AWS Direct Connect SiteLink is supported on private and transit VIFs. However, you cannot attach an AWS Direct Connect gateway (DXGW) to an AWS Transit Gateway when the AWS Direct Connect gateway was previously associated with a virtual private gateway, or is attached to a private virtual interface. /directconnect/faqs/;Does AWS Direct Connect SiteLink require BGP?;Yes. AWS Direct Connect SiteLink requires BGP. /directconnect/faqs/;Does AWS Direct Connect SiteLink support IPv6?;Yes. AWS Direct Connect SiteLink supports IPv6. /directconnect/faqs/;Does AWS Direct Connect SiteLink support MACsec?;Yes. AWS Direct Connect SiteLink supports MACsec provided the port and PoP location support MACsec encryption. 
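Here is a minimal boto3 (Python) sketch of the simple two-site SiteLink setup described above: one Direct Connect gateway plus a SiteLink-enabled private VIF at each location. The connection IDs, VLANs, and ASN are hypothetical placeholders, and the enableSiteLink flag is the virtual interface attribute this sketch assumes for turning SiteLink on at creation time.

import boto3

dx = boto3.client("directconnect")

# One Direct Connect gateway that both SiteLink-enabled VIFs will attach to.
gateway_id = dx.create_direct_connect_gateway(
    directConnectGatewayName="sitelink-gateway",
)["directConnectGateway"]["directConnectGatewayId"]

# Create a SiteLink-enabled private VIF on the connection at each location.
for connection_id, vlan in [("dxcon-siteA111", 201), ("dxcon-siteB222", 202)]:
    dx.create_private_virtual_interface(
        connectionId=connection_id,
        newPrivateVirtualInterface={
            "virtualInterfaceName": f"sitelink-vif-{vlan}",
            "vlan": vlan,
            "asn": 65000,
            "addressFamily": "ipv4",
            "directConnectGatewayId": gateway_id,
            "enableSiteLink": True,  # site-to-site traffic bypasses AWS Regions
        },
    )

# SiteLink can also be toggled on an existing VIF later, for example:
# dx.update_virtual_interface_attributes(virtualInterfaceId="dxvif-fexample1", enableSiteLink=False)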
/directconnect/faqs/;Is Quality of Service (QoS) supported on AWS Direct Connect SiteLink-enabled virtual interfaces (VIFs)?;AWS Direct Connect does not provide managed QoS functionality. When you configure QoS on devices that are connected using AWS Direct Connect SiteLink, DSCP markings will be preserved on forwarded traffic. /directconnect/faqs/;Does AWS Direct Connect SiteLink support local preference BGP communities?;Yes. You can use existing AWS Direct Connect local preference tags with AWS Direct Connect SiteLink. The following local preference BGP community tags are supported: 7224:7100 - Low preference 7224:7200 - Medium preference 7224:7300 - High preference /directconnect/faqs/;When should I use AWS Direct Connect SiteLink and when should I use AWS Cloud WAN?;Depending on your use case, you might choose one, the other, or both. Cloud WAN, currently in preview, can create and manage networks of VPCs across multiple Regions. AWS Direct Connect SiteLink, on the other hand, connects DX locations together, bypassing AWS Regions to improve performance. AWS Direct Connect is one of multiple connectivity options that you will be able to use with a Cloud WAN network in the future. /directconnect/faqs/;;Yes, when using AWS Direct Connect, you can connect to VPCs deployed in AWS Local Zones. Your data travels directly to and from AWS Local Zones over an AWS Direct Connect connection, without traversing an AWS Region. This improves performance and can reduce latency. /directconnect/faqs/;How do I configure AWS Local Zones to work with AWS Direct Connect?;An AWS Direct Connect link to AWS Local Zones works the same way as connecting to a Region. /directconnect/faqs/;Are there any differences in how AWS Direct Connect connects to an AWS Local Zone compared to a Region?;Yes, there are differences. AWS Local Zones do not support AWS Transit Gateway at this time. If you are connecting to an AWS Local Zone subnet through an AWS Transit Gateway, your traffic enters the parent Region, is processed by your AWS Transit Gateway, is sent to the AWS Local Zone, then returns (or hairpins) from the Region. Second, ingress routing destinations do not route directly to AWS Local Zones. Traffic will ingress to the parent Region first before connecting back to your AWS Local Zones. Third, unlike the Region where the maximum MTU size is 9001, the maximum MTU size for packets connecting to Local Zones is 1468. Path MTU discovery is supported and recommended. Last, the single flow limit (5-tuple) for connectivity to an AWS Local Zone is approximately 2.5 Gbps at maximum MTU (1468) compared to 5 Gbps at the Region. NOTE: The limitations on MTU size and single flow do not apply to AWS Direct Connect connectivity to the AWS Local Zone in Los Angeles. /directconnect/faqs/;Can I use AWS Site-to-Site VPN as a backup for my AWS Direct Connect link to an AWS Local Zone?;No. Unlike connectivity to a Region, you cannot use an AWS Site-to-Site VPN as a backup to your AWS Direct Connect connection to an AWS Local Zone. For redundancy, you must use two or more AWS Direct Connect connections. /directconnect/faqs/;Can I use my current AWS Direct Connect Gateway (DXGW) to associate the Virtual Gateway (VGW)?;Yes, provided the current AWS Direct Connect Gateway is not associated with an AWS Transit Gateway. Because AWS Transit Gateway is not supported in AWS Local Zones, and a DXGW that is associated with an AWS Transit Gateway cannot be associated with a VGW, you cannot use a DXGW that is already associated with an AWS Transit Gateway. 
You must create a new DXGW and associate it with the VGW. /directconnect/faqs/;Can I use the same private network connection with Amazon Virtual Private Cloud (VPC) and other AWS services simultaneously?;Yes. Each AWS Direct Connect connection can be configured with one or more virtual interfaces. Virtual interfaces may be configured to access AWS services such as Amazon EC2 and Amazon S3 using public IP space, or resources in a VPC using private IP space. /directconnect/faqs/;If I’m using Amazon CloudFront and my origin is in my own data center, can I use AWS Direct Connect to transfer the objects stored in my own data center?;Yes. Amazon CloudFront supports custom origins, including origins you run outside of AWS. Access to the CloudFront edge locations will be restricted to the geographically nearest AWS Region, with the exception of the North America Regions, which currently allow access to all North American Regions’ on-net CloudFront origins. You can access this using public virtual interfaces on an AWS Direct Connect connection. With AWS Direct Connect, you will pay AWS Direct Connect data transfer rates for origin transfer. /directconnect/faqs/;Can I order a port for AWS GovCloud (US) in the AWS Management Console?;To order a port to connect to AWS GovCloud (US) you must use the AWS GovCloud (US) Management Console. /directconnect/faqs/;Do AWS Global Accelerator (AGA) public endpoint prefixes get advertised from AWS to on-prem over Direct Connect public virtual interfaces?;Yes. The Direct Connect public virtual interface will advertise the AnyCast prefixes used by AGA public endpoints. /directconnect/faqs/;What’s the max number of links I can have in a LAG group?;The maximum number of links in a LAG is four. /directconnect/faqs/;Are link aggregation groups (LAG) in active/active or active/passive mode?;They are in active/active mode. In other words, AWS ports send Link Aggregation Control Protocol Data Units (LACPDUs) continuously. /directconnect/faqs/;Can the maximum transmission unit of a LAG change?;The maximum transmission unit of the LAG can be changed. Please refer to the Jumbo Frame documentation here to learn more. /directconnect/faqs/;Can I have my ports configured for active/passive instead of active/active?;The LAG at your endpoint can be configured with LACP active or passive modes. The AWS side is always configured as active mode LACP. /directconnect/faqs/;Can I mix interface types and have a few 1 G ports and a few 10 G ports in the same LAG?;No, a LAG must use the same type of ports (either 1 G or 10 G). /directconnect/faqs/;What port types will this be available on?;It will be available for 1 G, 10 G, and 100 G Dedicated Connection ports. /directconnect/faqs/;Can I LAG hosted connections as well?;No. It will only be available for 1 G, 10 G, and 100 G Dedicated Connections. It will not be available for hosted connections. /directconnect/faqs/;Can I create a LAG out of my existing ports?;Yes, if your ports are on the same AWS Direct Connect device. Please note this will cause your ports to go down for a moment while they are reconfigured as a LAG. They will not come back up until LAG is configured on your side. /directconnect/faqs/;Can I have a LAG that spans multiple AWS Direct Connect devices?;A LAG will only include ports on the same AWS Direct Connect device. We don’t support multi-chassis LAG. /directconnect/faqs/;How do I add links to my LAG once it’s set up?;You must request another port for your LAG. 
If no ports are available on the same device, you must order a new LAG and migrate your connections. For example, if you have 3x 1 G links and would like to add a fourth, and we do not have a port available on that device, you must order a new LAG of 4x 1 G ports. /directconnect/faqs/;You’re out of ports and I have to order a new LAG, but I have Virtual Interfaces (VIFs) configured. How do I move those?;You can have multiple VIFs attached to a VGW at once, and you can configure VIFs on a connection even when it’s down. We suggest you create the new VIFs on your new LAG, and then move the connections over to the new LAG once you’ve created all of your VIFs. Remember to delete the old connections so we stop billing you for them. /directconnect/faqs/;Can I delete a single port from my LAG?;Yes, but only if your minimum links are set to lower than the remaining ports. For example, if you have four ports and minimum links set to four, you won’t be able to delete a port from the LAG. If minimum links are set to three, you can then delete a port from the LAG. We will return a notification with the specific panel/port you’ve deleted and a reminder to disconnect the cross connect and circuit from AWS. /directconnect/faqs/;Can I delete my LAG all at once?;Yes, but just like a regular connection you won’t be able to delete it if you have VIFs configured. /directconnect/faqs/;Can I order a LAG with only one port?;Yes. Please note we can’t promise that ports will be available on the same chassis if you want to add more ports in the future. /directconnect/faqs/;Can I convert a LAG back to individual ports?;Yes. This can be done with the DisassociateConnectionFromLag API call. /directconnect/faqs/;Can you create a tool to move my virtual interfaces (VIFs) for me?;You can use the AssociateVirtualInterface API or the console to do this operation. /directconnect/faqs/;Does the LAG show as a single connection or a collection of connections?;It will show as a single dxlag and we’ll list the connection IDs under it. /directconnect/faqs/;What does minimum links mean, and why do I have a check box for it when I order my bundle?;Minimum links is a feature in LACP where you can set the minimum number of links that must be active in a bundle for that bundle to be active and pass traffic. If, for example, you have four ports and your minimum links is set to three, and you only have two active ports, your bundle will not be active. If you have three or more active ports, then the bundle is active and will pass traffic if you have a VIF configured. /directconnect/faqs/;When I associate my existing AWS Direct Connect connection with a LAG, what happens with virtual interfaces (VIFs) already created with a connection?;When an AWS Direct Connect connection with existing Virtual Interfaces (VIFs) is associated to a LAG, the Virtual Interfaces are migrated to the LAG. Please note that certain parameters associated with VIFs must be unique, such as VLAN numbers, to be moved to the LAG. /directconnect/faqs/;Can I set link priority on a specific link?;We treat all links as equal, so we won’t set “link priority” on any specific link. /directconnect/faqs/;Can I have a 40 GE interface on my side that connects to 4x 10 GE on the AWS side?;To do this, you need 4x 10 GE interfaces on your router to connect to AWS. A single 40 GE interface connecting to a 4x 10 GE LACP is not supported. 
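The LAG operations discussed above can also be driven through the API. Below is a minimal boto3 (Python) sketch; the location code, bandwidth, and connection IDs are hypothetical placeholders.

import boto3

dx = boto3.client("directconnect")

# Create a LAG of two 10 Gbps dedicated connections at a single location.
lag = dx.create_lag(
    numberOfConnections=2,
    location="EqDC2",
    connectionsBandwidth="10Gbps",
    lagName="example-lag",
)
lag_id = lag["lagId"]

# Fold an existing connection on the same device into the LAG
# (the port goes down briefly while it is reconfigured).
dx.associate_connection_with_lag(connectionId="dxcon-fexample1", lagId=lag_id)

# Require at least two active links before the LAG carries traffic.
dx.update_lag(lagId=lag_id, minimumLinks=2)

# Convert a member back to a standalone connection when needed.
dx.disassociate_connection_from_lag(connectionId="dxcon-fexample1", lagId=lag_id)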
/directconnect/faqs/;Are there any setup charges or a minimum service term commitment required to use AWS Direct Connect?;There are no setup charges, and you may cancel at any time. Services provided by AWS Direct Connect Partners may have other terms or restrictions that apply. /directconnect/faqs/;How will I be charged and billed for my use of AWS Direct Connect?;AWS Direct Connect has two separate charges: port hours and data transfer. Pricing is per port-hour consumed for each port type. Partial port hours consumed are billed as full hours. The account that owns the port will be charged the port hour charges. /directconnect/faqs/;Will Regional data transfer be billed at the AWS Direct Connect rate?;No, data transfer between Availability Zones in a Region will be billed at the regular Regional data transfer rate in the same month in which the usage occurred. /directconnect/faqs/;What defines billable port-hours for Hosted Connections?;Port hours are billed once you have accepted the Hosted Connection. Port charges will continue to be billed as long as the Hosted Connection is provisioned for your use. If you no longer want to be charged for your Hosted Connection, work with your AWS Direct Connect Partner to cancel the Hosted Connection. /directconnect/faqs/;What is the format for Hosted Connection port-hour charges?;All Hosted Connection port-hour charges at an AWS Direct Connect location are grouped by capacity. /directconnect/faqs/;Which AWS account gets charged for the Data Transfer Out performed over a public virtual interface?;For publicly addressable AWS resources (for example, Amazon S3 buckets, Classic EC2 instances, or EC2 traffic that goes through an internet gateway), if the outbound traffic is destined for public prefixes owned by the same AWS payer account and actively advertised to AWS through an AWS Direct Connect public virtual Interface, the Data Transfer Out (DTO) usage is metered toward the resource owner at the AWS Direct Connect data transfer rate. /directconnect/faqs/;What connection speeds are available?;For Dedicated Connections, 1 Gbps, 10 Gbps, and 100 Gbps ports are available. For Hosted Connections, connection speeds of 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps and 10 Gbps may be ordered from approved AWS Direct Connect Partners. See AWS Direct Connect Partners for more information. /directconnect/faqs/;Are there limits on the amount of data that I can transfer using AWS Direct Connect?;No. You may transfer any amount of data up to the limit of your selected port capacity. /directconnect/faqs/;Are there limits on the number of routes I can advertise towards AWS using AWS Direct Connect?;Yes, you can advertise up to 100 routes over each Border Gateway Protocol session using AWS Direct Connect. Learn more about AWS Direct Connect limits. /directconnect/faqs/;What happens if I advertise more than 100 routes over a Border Gateway Protocol session?;Your Border Gateway Protocol session will go down if you advertise over 100 routes over a Border Gateway Protocol session. This will prevent all network traffic flowing over that virtual interface until you reduce the number of routes to less than 100. /directconnect/faqs/;What are the technical requirements for the connection?;AWS Direct Connect supports 1000BASE-LX, 10GBASE-LR, or 100GBASE-LR4 connections over single mode fiber using Ethernet transport. Your device must support 802.1Q VLANs. See the AWS Direct Connect User Guide for more detailed requirements information. 
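Since Hosted Connection port-hour billing starts once you accept the connection, a hedged boto3 sketch of finding and accepting a pending Hosted Connection is shown below; the state check reflects the assumption that unaccepted hosted connections sit in the "ordering" state.

```python
import boto3

dx = boto3.client("directconnect")

# Accept any Hosted Connection that is still awaiting confirmation.
# Port-hour charges begin once the connection is accepted.
for conn in dx.describe_connections()["connections"]:
    if conn["connectionState"] == "ordering":  # hosted connection pending acceptance (assumed state)
        dx.confirm_connection(connectionId=conn["connectionId"])
```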
/directconnect/faqs/;Can I extend one of my VLANs to the AWS Cloud using AWS Direct Connect?;No, VLANs are used in AWS Direct Connect only to separate traffic between virtual interfaces. /directconnect/faqs/;What are the technical requirements for virtual interfaces to public AWS services, such as Amazon EC2 and Amazon S3?;This connection requires the use of the Border Gateway Protocol (BGP) with an Autonomous System Number (ASN) and IP prefixes. You will need the following information to complete the connection: A public or private ASN. If you are using a public ASN, you must own it. If you are using a private ASN, it must be in the 64512 to 65535 range. A new unused VLAN tag you select. Public IPs (/31 or /30) allocated to the BGP session. RFC 3021 (Using 31-Bit Prefixes on IPv4 Point-to-Point Links) is supported on all Direct Connect virtual interface types. By default, Amazon will advertise global public IP prefixes via BGP. You must advertise public IP prefixes (/31 or smaller) that you own or are AWS-provided via BGP. For more details, consult the AWS Direct Connect User Guide. See the information below for more details on AWS Direct Connect, Bring Your Own ASN. /directconnect/faqs/;What IP address will be assigned to each end of a virtual interface?;If you are configuring a virtual interface to the public AWS Cloud, the IP addresses for both ends of the connection must be allocated from public IP space that you own. If the virtual interface is connected to a VPC, and you choose to have AWS automatically generate the peer IP CIDR, the IP address space for both ends of the connection is allocated by AWS and is in the 169.254.0.0/16 range. /directconnect/faqs/;Can I locate my hardware next to the equipment that powers AWS Direct Connect?;You can purchase rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby. However, due to security practices, your equipment cannot be placed within AWS Direct Connect rack or cage areas. For more information, contact the operator of your facility. Once deployed, you can connect your equipment to AWS Direct Connect using a cross-connect. /directconnect/faqs/;How do I enable BFD on my AWS Direct Connect connection?;Asynchronous BFD is automatically enabled for each AWS Direct Connect virtual interface, but will not take effect until it's configured on your router. AWS has set the BFD liveness detection minimum interval to 300, and the BFD liveness detection multiplier to 3. /directconnect/faqs/;How do I set up AWS Direct Connect for the AWS GovCloud (US) Region?;See the AWS GovCloud (US) User Guide for detailed instructions on setting up AWS Direct Connect for use with the AWS GovCloud (US) Region. /directconnect/faqs/;What are the technical requirements for virtual interfaces (VIF) to VPCs?;AWS Direct Connect requires Border Gateway Protocol (BGP). To complete the connection, you will need: /directconnect/faqs/;Can I establish a Layer 2 connection between VPC and my network?;No, Layer 2 connections are not supported. /directconnect/faqs/;How does AWS Direct Connect differ from an IPsec VPN Connection?;"VPN connections use IPsec to establish encrypted network connectivity between your intranet and an Amazon VPC over the public internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability of internet-based connectivity. 
AWS Direct Connect bypasses the internet; instead, it uses dedicated, private network connections between your network and AWS." /directconnect/faqs/;Can I use AWS Direct Connect and a VPN Connection to the same VPC simultaneously?;Yes, but only for failover. The AWS Direct Connect path will always be preferred, when established, regardless of AS path prepending. Make sure your VPN connections can handle the failover traffic from AWS Direct Connect. /directconnect/faqs/;Is there any difference to the BGP configuration/setup details outlined for AWS Direct Connect?;VPN BGP will work the same as AWS Direct Connect. /directconnect/faqs/;Which AWS Regions offer AWS Direct Connect support for AWS Transit Gateway?;Support for AWS Transit Gateway is available in all commercial AWS Regions. /directconnect/faqs/;How do I create a transit virtual interface?;You can use the AWS Management Console or API operations to create a transit virtual interface. /directconnect/faqs/;Can I allocate a transit virtual interface in another AWS account?;Yes, you can allocate a transit virtual interface in any AWS account. /directconnect/faqs/;Can I attach a transit virtual interface to my Virtual Private Gateway?;No, you cannot attach a transit virtual interface to your Virtual Private Gateway. /directconnect/faqs/;Can I attach a private virtual interface to my AWS Transit Gateway?;No, you cannot attach a private virtual interface to your AWS Transit Gateway. /directconnect/faqs/;What are the quotas associated with a transit virtual interface?;Please refer to the AWS Direct Connect quotas page to learn more about the limits associated with a transit virtual interface. /directconnect/faqs/;Can I add more transit virtual interfaces to the connection?;No, you can create only one transit virtual interface for any AWS Direct Connect connection. /directconnect/faqs/;I have an existing AWS Direct Connect gateway attached to a private virtual interface, can I attach a transit virtual interface to this AWS Direct Connect gateway?;No, an AWS Direct Connect Gateway can only have one type of virtual interface attached. /directconnect/faqs/;Can I associate my AWS Transit Gateway to the AWS Direct Connect gateway attached to a private virtual interface?;No, an AWS Transit Gateway can only be associated with an AWS Direct Connect gateway attached to a transit virtual interface. /directconnect/faqs/;How long does it take to establish an association between AWS Transit Gateway and AWS Direct Connect gateway?;It can take up to 40 minutes to establish an association between AWS Transit Gateway and AWS Direct Connect gateway. /directconnect/faqs/;How many total virtual interfaces can I create per 1 Gbps, 10 Gbps, or 100 Gbps dedicated connection?;You can create up to 51 virtual interfaces per 1 Gbps, 10 Gbps, or 100 Gbps dedicated connection, inclusive of the transit virtual interface. /directconnect/faqs/;Can I create a transit virtual interface on a hosted connection of any speed?;Yes. /directconnect/faqs/;I have a 4x10 Gbps LAG, how many transit virtual interfaces can I create on this link aggregation group (LAG)?;You can create one transit virtual interface on the 4x10G LAG. /directconnect/faqs/;Does a transit virtual interface support jumbo frames?;Yes, a transit virtual interface will support jumbo frames. Maximum transmission unit (MTU) size will be limited to 8,500. 
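To make the public virtual interface requirements above (an ASN, an unused VLAN tag, public /30 or /31 addressing, and the prefixes you advertise) concrete, here is a hedged boto3 sketch; every ID, address, and prefix below is an illustrative placeholder.

```python
import boto3

dx = boto3.client("directconnect")

# Create a public virtual interface; all values below are illustrative placeholders.
dx.create_public_virtual_interface(
    connectionId="dxcon-EXAMPLE",
    newPublicVirtualInterface={
        "virtualInterfaceName": "example-public-vif",
        "vlan": 101,                        # a new, unused VLAN tag you select
        "asn": 65000,                       # private ASN (64512-65535) or a public ASN you own
        "authKey": "example-bgp-md5-key",   # optional BGP MD5 authentication key
        "amazonAddress": "203.0.113.1/30",  # public IPs allocated to the BGP session
        "customerAddress": "203.0.113.2/30",
        "addressFamily": "ipv4",
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],  # public prefixes you advertise
    },
)
```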
/directconnect/faqs/;Do you support all the Border Gateway Protocol (BGP) attributes that you support on the private virtual interface for the transit virtual interface?;Yes, you can continue to use supported BGP attributes (AS_PATH, Local Pref, NO_EXPORT) on the transit virtual interface. /directconnect/faqs/;Why is an AWS Direct Connect gateway necessary?;An AWS Direct Connect gateway performs several functions: /directconnect/faqs/;Can I associate more than one AWS Transit Gateway with an AWS Direct Connect gateway?;You can associate up to three Transit Gateways with an AWS Direct Connect gateway as long as the IP CIDR blocks announced from your Transit Gateways do not overlap. /directconnect/faqs/;Can I associate VPCs that are owned by any AWS account with an AWS Direct Connect gateway that is owned by any AWS account?;Yes, you can associate VPCs owned by any AWS account with an AWS Direct Connect gateway owned by any AWS account. /directconnect/faqs/;Can I associate AWS Transit Gateways that are owned by any AWS account with an AWS Direct Connect gateway that is owned by any AWS account?;Yes, you can associate a Transit Gateway owned by any AWS account with an AWS Direct Connect gateway owned by any AWS account. /directconnect/faqs/;If I use an AWS Direct Connect gateway, does my traffic to the desired AWS Region go by way of the associated home AWS Region?;No. When using AWS Direct Connect gateway, your traffic will take the shortest path to and from your AWS Direct Connect location to the destination AWS Region, regardless of the associated home AWS Region of the AWS Direct Connect location where you are connected. /directconnect/faqs/;Are there additional fees when using AWS Direct Connect gateway and working with remote AWS Regions?;There are no charges for using an AWS Direct Connect gateway. You will pay applicable egress data charges based on the source remote AWS Region and port hour charges. See the AWS Direct Connect pricing page for details. /directconnect/faqs/;Do I need to use the same AWS account with my private/transit virtual interface(s), AWS Direct Connect gateway, Virtual Private Gateway, or AWS Transit Gateways in order to use an AWS Direct Connect gateway?;Private virtual interfaces and AWS Direct Connect gateways must be in the same AWS account. Similarly, transit virtual interfaces and AWS Direct Connect gateways must be in the same AWS account. Virtual private gateway(s) and AWS Transit Gateway(s) can be in different AWS accounts than the account that owns the AWS Direct Connect gateway. /directconnect/faqs/;I am working with an AWS Direct Connect Partner to get a private virtual interface (VIF) provisioned for my account, can I use an AWS Direct Connect gateway?;Yes, you can associate a provisioned private virtual interface (VIF) with your AWS Direct Connect gateway once you confirm the virtual interface that is provisioned as private in your AWS account. /directconnect/faqs/;Can I connect to VPCs in my local Region?;You can continue to attach your virtual interfaces (VIFs) to virtual private gateways (VGWs). You will still have intra-Region VPC connectivity, and will be charged the egress rate for the related geographic Regions. /directconnect/faqs/;What are the quotas associated with an AWS Direct Connect gateway?;Please refer to the AWS Direct Connect quotas page for information on this topic. /directconnect/faqs/;Can virtual private gateways (VGWs, associated with a VPC) be part of more than one AWS Direct Connect gateway?;No, a VGW-VPC pair cannot be part of more than one AWS Direct Connect gateway. 
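As a hedged sketch of the gateway workflow described above (creating an AWS Direct Connect gateway and associating a virtual private gateway or transit gateway with it), the boto3 calls might look like the following; the gateway IDs, ASN, and prefixes are placeholders.

```python
import boto3

dx = boto3.client("directconnect")

# Create a Direct Connect gateway with a chosen private AWS-side ASN (placeholder values).
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="example-dx-gateway",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Associate a virtual private gateway or transit gateway with the Direct Connect gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gw["directConnectGatewayId"],
    gatewayId="tgw-EXAMPLE",  # a VGW ID (vgw-...) or Transit Gateway ID (tgw-...)
    # Allowed prefixes control what is announced toward on-premises; typically required for a TGW.
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/16"}],
)
```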
/directconnect/faqs/;Can you attach a private virtual interface (VIF) to more than one AWS Direct Connect gateway?;No, one private virtual interface can only attach to one AWS Direct Connect gateway OR one Virtual Private Gateway. We recommend that you follow AWS Direct Connect resiliency recommendations and attach more than one private virtual interface. /directconnect/faqs/;Does AWS Direct Connect gateway break existing AWS VPN CloudHub functionality?;No, AWS Direct Connect gateway does not break AWS VPN CloudHub. AWS Direct Connect gateway enables connectivity between on-premises networks and VPCs in any AWS Region. AWS VPN CloudHub enables connectivity between on-premises networks using AWS Direct Connect or a VPN within the same Region. The VIF is associated with the VGW directly. Existing AWS VPN CloudHub functionality will continue to be supported. You can attach an AWS Direct Connect virtual interface (VIF) directly to a virtual private gateway (VGW) to support intra-Region AWS VPN CloudHub. /directconnect/faqs/;What type of traffic is, and is not, supported by AWS Direct Connect gateway?;Please refer to the AWS Direct Connect User Guide to review supported and unsupported traffic patterns. /directconnect/faqs/;I currently have a VPN in us-east-1 attached to a virtual private gateway (VGW). I want to use AWS VPN CloudHub in us-east-1 between the VPN and a new VIF. Can I do this with AWS Direct Connect gateway?;No, you cannot do this with an AWS Direct Connect gateway, but the option to attach a VIF directly to a VGW is available to support the VPN and AWS Direct Connect AWS VPN CloudHub use case. /directconnect/faqs/;I have an existing private virtual interface associated with a virtual private gateway (VGW), can I associate my existing private virtual interface with an AWS Direct Connect gateway?;No, an existing private virtual interface associated with a VGW cannot be associated with an AWS Direct Connect gateway. To do this, you must create a new private virtual interface, and at the time of creation, associate it with your AWS Direct Connect gateway. /directconnect/faqs/;If I have a virtual private gateway (VGW) attached to a VPN and an AWS Direct Connect gateway, and my AWS Direct Connect circuit goes down, will my VPC traffic route out to the VPN?;Yes, as long as the VPC route table has routes to the virtual private gateway (VGW) towards the VPN. /directconnect/faqs/;Can I attach a virtual private gateway (VGW) to an AWS Direct Connect gateway if it is not attached to a VPC?;No, you cannot associate an unattached VGW to an AWS Direct Connect gateway. /directconnect/faqs/;I have created an AWS Direct Connect gateway with one AWS Direct Connect Private VIF, and three non-overlapping virtual private gateways (VGWs) -- each associated with a VPC. What happens if I detach one of the VGWs from the VPC?;Traffic from your on-premises network to the detached VPC will stop, and the VGW's association with the AWS Direct Connect gateway will be deleted. /directconnect/faqs/;I have created an AWS Direct Connect gateway with one AWS Direct Connect VIF, and three non-overlapping VGW-VPC pairs, what happens if I detach one of the virtual private gateways (VGW) from the AWS Direct Connect gateway?;Traffic from your on-premises network to the detached VGW (associated with a VPC) will stop. 
/directconnect/faqs/;Can I send traffic from a VPC that is associated with an AWS Direct Connect gateway to another VPC associated to the same AWS Direct Connect gateway?;No, AWS Direct Connect gateways only support routing traffic from AWS Direct Connect VIFs to VGWs (associated with VPCs). In order to send traffic between two VPCs, you must configure a VPC peering connection. /directconnect/faqs/;I currently have a VPN in us-east-1 that is attached to a virtual private gateway (VGW). If I associate this VGW to an AWS Direct Connect gateway, can I send traffic from my VPN to a VIF attached to the AWS Direct Connect gateway in a different AWS Region?;No, an AWS Direct Connect gateway will not route traffic between a VPN and an AWS Direct Connect VIF. To enable this use case, you must create a VPN in the AWS Region of the VIF and attach the VIF and the VPN to the same VGW. /directconnect/faqs/;Can I resize a VPC that is associated with an AWS Direct Connect gateway?;Yes, you can resize the VPC. If you resize your VPC, you must resend the proposal with the resized VPC CIDR to the AWS Direct Connect gateway owner. Once the AWS Direct Connect gateway owner approves the new proposal, the resized VPC CIDR will be advertised towards your on-premises network. /directconnect/faqs/;Is there a way to configure an AWS Direct Connect gateway to selectively propagate prefixes to/from VPCs?;Yes, AWS Direct Connect gateway offers a way for you to selectively announce prefixes towards your on-premises networks. For prefixes that are advertised from your on-premises networks, each VPC associated with an AWS Direct Connect gateway receives all prefixes announced from your on-premises networks. If you want to limit traffic to and from any specific VPC, you should consider using Access Control Lists (ACLs) for each VPC. /directconnect/faqs/;Can I use this feature for my existing EBGP sessions?;Yes, all existing BGP sessions on private virtual interfaces support the use of local preference communities. /directconnect/faqs/;Will this feature be available on both Public and Private Virtual Interfaces?;No, this feature is currently available for private and transit virtual interfaces only. /directconnect/faqs/;Will this feature work with an AWS Direct Connect gateway?;Yes, this feature will work with private virtual interfaces attached to an AWS Direct Connect gateway. /directconnect/faqs/;Can I verify that communities are being received by AWS?;No, at this time we do not provide such monitoring features. /directconnect/faqs/;What are the supported local preference communities for an AWS Direct Connect private virtual interface?;The following communities are supported for private virtual interfaces and are evaluated in order of lowest to highest preference. Communities are mutually exclusive. Prefixes marked with the same communities, and bearing identical MED and AS_PATH attributes, are candidates for multi-pathing. /directconnect/faqs/;What is the default behavior, in case I do not use the supported communities?;If you do not specify Local Preference communities for your private VIF, the default local preference is based on the distance to the AWS Direct Connect location from the local Region. In such a situation, egress behavior across multiple VIFs from multiple AWS Direct Connect locations may be arbitrary. 
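One way to selectively announce prefixes toward on-premises networks, as mentioned above, is by editing the allowed prefixes on an existing gateway association. A hedged boto3 sketch follows; the association ID and CIDRs are placeholders.

```python
import boto3

dx = boto3.client("directconnect")

# Adjust which VPC prefixes the Direct Connect gateway announces toward on-premises.
dx.update_direct_connect_gateway_association(
    associationId="00000000-1111-2222-3333-444444444444",  # placeholder association ID
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.1.0/24"}],
    removeAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.2.0/24"}],
)
```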
/directconnect/faqs/;"I have two private VIFs on a physical connection at an AWS Direct Connect location; can I use supported communities to influence egress behavior across these two private VIFs?";Yes, you can use this feature to influence egress traffic behavior between two VIFs on the same physical connection. /directconnect/faqs/;Does the local preference communities feature support failover?;"Yes. This can be accomplished by advertising prefixes over the primary/active virtual interface with a community for higher local preference than prefixes advertised over the backup/passive virtual interface. This feature is backward compatible with pre-existing methods for achieving failover; if your connection is currently configured for failover, no additional changes are necessary." /directconnect/faqs/;I have already configured my routers with AS_PATH, do I need to change the configuration to use community tags and disrupt my network?;No, we will continue to respect the AS_PATH attribute. This feature is an additional knob you can use to get better control over the incoming traffic from AWS. AWS Direct Connect follows the standard approach for path selection. Bear in mind that local preference is evaluated before the AS_PATH attribute. /directconnect/faqs/;I have two AWS Direct Connect connections, one is 1 Gbps and another is 10 Gbps, and both are advertising the same prefix. I would like to receive all traffic for this destination across the 10 Gbps AWS Direct Connect connection, but still be able to failover to the 1 Gbps connection. Can local preference communities be used to balance traffic in this scenario?;Yes. By marking the prefix advertised over the 10 Gbps AWS Direct Connect connection with a community of a higher local preference, it will be the preferred path. If the 10 Gbps connection fails or the prefix is withdrawn, the 1 Gbps interface becomes the return path. /directconnect/faqs/;How wide will AWS multipath traffic to my network?;We will multipath per prefix at up to 16 next-hops wide, where each next-hop is a unique AWS endpoint. /directconnect/faqs/;Can I have v4 and v6 BGP sessions running over a single VPN tunnel?;At this time, we only allow an IPv4 BGP session running over a single VPN tunnel with an IPv4 address. /directconnect/faqs/;Is there any difference to the BGP configuration/setup details outlined for AWS Direct Connect?;VPN BGP will work the same as AWS Direct Connect. /directconnect/faqs/;Can I terminate my tunnel to an endpoint with an IPv6 address?;At this time, we only support an IPv4 endpoint address for VPN. /directconnect/faqs/;Can I terminate my tunnel to an IPv4 address and run IPv6 BGP sessions over the tunnel?;At this time, we only allow an IPv4 BGP session running over a single VPN tunnel with an IPv4 address. /directconnect/faqs/;;Configurable Private Autonomous System Number (ASN). This allows customers to set the ASN on the AWS side of the BGP session for private VIFs on any newly created AWS Direct Connect Gateway. /directconnect/faqs/;Where are these features available?;All commercial AWS Regions (except AWS China Region) and AWS GovCloud (US). /directconnect/faqs/;How can I configure/assign my ASN to be advertised as the AWS side ASN?;You can configure/assign an ASN to be advertised as the AWS side ASN during creation of the new AWS Direct Connect gateway. You can create an AWS Direct Connect gateway using the AWS Management Console or a CreateDirectConnectGateway API operation. /directconnect/faqs/;Can I use any ASN - public and private?;You can assign any private ASN to the AWS side. 
You cannot assign any other public ASN. /directconnect/faqs/;Why can't I assign a public ASN for the AWS half of the BGP session?;AWS is not validating ownership of the ASNs; therefore, we're limiting the AWS side ASN to private ASNs. We want to protect customers from BGP spoofing. /directconnect/faqs/;What ASN can I choose?;You can choose any private ASN. Ranges for 16-bit private ASNs include 64512 to 65534. You can also provide 32-bit ASNs between 4200000000 and 4294967294. /directconnect/faqs/;;We will ask you to re-enter a private ASN once you attempt to create the AWS Direct Connect gateway. /directconnect/faqs/;If I don't provide an ASN for the AWS half of the BGP session, what ASN can I expect from AWS?;AWS will provide an ASN of 64512 for the AWS Direct Connect gateway if you don't choose one. /directconnect/faqs/;Where can I view the AWS side ASN?;You can view the AWS side ASN in the AWS Direct Connect console and in the response of the DescribeDirectConnectGateways or DescribeVirtualInterfaces API operations. /directconnect/faqs/;If I have a public ASN, will it work with a private ASN on the AWS side?;Yes, you can configure the AWS side of the BGP session with a private ASN and your side with a public ASN. /directconnect/faqs/;I have private VIFs already configured and want to set a different AWS side ASN for the BGP session on an existing VIF. How can I make this change?;You must create a new AWS Direct Connect gateway with the desired ASN and create a new VIF with the newly created AWS Direct Connect gateway. Your device configuration also must change appropriately. /directconnect/faqs/;I'm attaching multiple private VIFs to a single AWS Direct Connect gateway. Can each VIF have a separate AWS side ASN?;No, you can assign/configure a separate AWS side ASN for each AWS Direct Connect gateway, not each VIF. The AWS side ASN for a VIF is inherited from the AWS side ASN of the attached AWS Direct Connect gateway. /directconnect/faqs/;Can I use different private ASNs for my AWS Direct Connect Gateway and Virtual Private Gateway?;Yes, you can use different private ASNs for your AWS Direct Connect Gateway and Virtual Private Gateway. The AWS side ASN you receive depends on your private virtual interface association. /directconnect/faqs/;Can I use the same private ASN for my AWS Direct Connect Gateway and Virtual Private Gateway?;Yes, you can use the same private ASN for your AWS Direct Connect Gateway and Virtual Private Gateway. The AWS side ASN you receive depends on your private virtual interface association. /directconnect/faqs/;;The AWS Direct Connect Gateway private ASN will be used as the AWS side ASN for the Border Gateway Protocol (BGP) session between your network and AWS. /directconnect/faqs/;;You can select your own private ASN in the AWS Direct Connect gateway console. Once the AWS Direct Connect gateway is configured with an AWS side ASN, the private virtual interfaces associated with the AWS Direct Connect gateway use your configured ASN as the AWS side ASN. /directconnect/faqs/;;You will not have to make any changes. /directconnect/faqs/;;We support 32-bit ASNs from 4200000000 to 4294967294. /directconnect/faqs/;;No, you cannot modify the AWS side ASN after creation. You can delete the AWS Direct Connect gateway and recreate a new AWS Direct Connect gateway with the desired private ASN. /directconnect/faqs/;;MACsec is not intended as a replacement for any specific encryption technology. For simplicity, and for defense in depth, you should continue to use any encryption technologies that you already use. 
We offer MACsec as an encryption option you can integrate into your network in addition to other encryption technologies you currently use. /directconnect/faqs/;;MACsec is supported on 10 Gbps and 100 Gbps dedicated AWS Direct Connect connections at selected points of presence. For MACsec to work, your dedicated connection must be transparent to Layer 2 traffic and the device terminating the Layer 2 adjacency must support MACsec. If you are using a last-mile connectivity partner, check that your last-mile connection can support MACsec. MACsec is not supported on 1 Gbps dedicated connections or any hosted connections. /directconnect/faqs/;;Yes. You will need a MACsec-capable device on your end of the Ethernet connection to an AWS Direct Connect location. Refer to the MAC Security section of our user guide to verify supported operation modes and required MACsec features. /directconnect/faqs/;;MACsec requires that your connection is terminated on a MACsec-capable device on the AWS Direct Connect side of the connection. You can check if your existing connection is MACsec-capable through the AWS Management Console or by using the DescribeConnections AWS Direct Connect API. If your existing connection is not terminated on a MACsec-capable device, you can request a new MACsec-capable connection using the AWS Management Console or the CreateConnection API. /directconnect/faqs/;;For 100 Gbps connections, we support the GCM-AES-XPN-256 cipher suite. For 10 Gbps connections, we support GCM-AES-256 and GCM-AES-XPN-256. /directconnect/faqs/;;We support only 256-bit MACsec keys to provide the latest advanced data protection. /directconnect/faqs/;;We require the use of XPN for 100 Gbps connections. For 10 Gbps connections, we support both GCM-AES-256 and GCM-AES-XPN-256. High-speed connections, such as 100 Gbps dedicated connections, can quickly exhaust MACsec’s original 32-bit packet numbering space, which would require you to rotate your encryption keys every few minutes to establish a new Connectivity Association. To avoid this situation, the IEEE Std 802.1AEbw-2013 amendment introduced extended packet numbering, increasing the numbering space to 64 bits and easing the timeliness requirement for key rotation. /directconnect/faqs/;;Yes. We require SCI to be on. This setting cannot be changed. /directconnect/faqs/;Do you support IEEE 802.1Q (Dot1q/VLAN) tag offset/dot1q-in-clear?;No, we do not support moving the VLAN tag outside of the encrypted payload. /directconnect/faqs/;;No, there is no additional charge for MACsec. /elasticloadbalancing/faqs/;How do I decide which load balancer to select for my application?;Elastic Load Balancing (ELB) supports four types of load balancers. You can select the appropriate load balancer based on your application needs. If you need to load balance HTTP requests, we recommend you use the Application Load Balancer (ALB). For network/transport protocol (Layer 4 – TCP, UDP) load balancing, and for extreme performance/low-latency applications, we recommend using Network Load Balancer. If your application is built within the Amazon Elastic Compute Cloud (Amazon EC2) Classic network, you should use Classic Load Balancer. If you need to deploy and run third-party virtual appliances, you can use Gateway Load Balancer. 
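Relating to the MACsec answers above (which mention the DescribeConnections and CreateConnection APIs): a hedged boto3 sketch of requesting a MACsec-capable dedicated connection and associating a CKN/CAK pair follows; the location code, connection name, and key material are placeholders only.

```python
import boto3

dx = boto3.client("directconnect")

# Request a new MACsec-capable dedicated connection (location and name are placeholders).
conn = dx.create_connection(
    location="EqDC2",
    bandwidth="10Gbps",
    connectionName="example-macsec-connection",
    requestMACSec=True,
)

# Associate a MACsec key (CKN/CAK pair); the hex strings below are illustrative only.
dx.associate_mac_sec_key(
    connectionId=conn["connectionId"],
    ckn="0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
    cak="fedcba9876543210fedcba9876543210fedcba9876543210fedcba9876543210",
)
```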
/elasticloadbalancing/faqs/;Can I privately access Elastic Load Balancing APIs from my Amazon Virtual Private Cloud (VPC) without using public IPs?;Yes, you can privately access Elastic Load Balancing APIs from your Amazon Virtual Private Cloud (VPC) by creating VPC endpoints. With VPC endpoints, the routing between the VPC and Elastic Load Balancing APIs is handled by the AWS network without the need for an Internet gateway, network address translation (NAT) gateway, or virtual private network (VPN) connection. The latest generation of VPC Endpoints used by Elastic Load Balancing are powered by AWS PrivateLink, an AWS technology enabling the private connectivity between AWS services using Elastic Network Interfaces (ENI) with private IPs in your VPCs. To learn more about AWS PrivateLink, visit the AWS PrivateLink documentation. /elasticloadbalancing/faqs/;Is there an SLA for load balancers?;Yes, Elastic Load Balancing guarantees a monthly availability of at least 99.99% for your load balancers (Classic, Application, or Network). To learn more about the SLA and know if you are qualified for a credit, visit here. /elasticloadbalancing/faqs/;Which operating systems does an Application Load Balancer support?;An Application Load Balancer supports targets with any operating system currently supported by the Amazon EC2 service. /elasticloadbalancing/faqs/;Which protocols does an Application Load Balancer support?;An Application Load Balancer supports load balancing of applications using HTTP and HTTPS (Secure HTTP) protocols. /elasticloadbalancing/faqs/;Is HTTP/2 Supported on an Application Load Balancer?;Yes. HTTP/2 support is enabled natively on an Application Load Balancer. Clients supporting HTTP/2 can connect to an Application Load Balancer over TLS. /elasticloadbalancing/faqs/;How can I use static IP or PrivateLink on my Application Load Balancer?;You can forward traffic from your Network Load Balancer, which provides support for PrivateLink and a static IP address per Availability Zone, to your Application Load Balancer. Create an Application Load Balancer-type target group, register your Application Load Balancer to it, and configure your Network Load Balancer to forward traffic to the Application Load Balancer-type target group. /elasticloadbalancing/faqs/;What TCP ports can I use to load balance?;You can perform load balancing for the following TCP ports: 1-65535 /elasticloadbalancing/faqs/;Is WebSockets supported on an Application Load Balancer?;Yes. WebSockets and Secure WebSockets support is available natively and ready for use on an Application Load Balancer. /elasticloadbalancing/faqs/;Is Request tracing supported on an Application Load Balancer?;Yes. Request tracing is enabled by default on your Application Load Balancer. /elasticloadbalancing/faqs/;Does a Classic Load Balancer have the same features and benefits as an Application Load Balancer?;While there is some overlap, there is no feature parity between the two types of load balancers. Application Load Balancers are the foundation of our application layer load-balancing platform for the future. /elasticloadbalancing/faqs/;Can I configure my Amazon EC2 instances to accept traffic only from my Application Load Balancers?;Yes. /elasticloadbalancing/faqs/;Can I configure a security group for the front end of an Application Load Balancer?;Yes. /elasticloadbalancing/faqs/;Can I use the existing APIs that I use with my Classic Load Balancer with an Application Load Balancer?;No. 
Application Load Balancers require a new set of application programming interfaces (APIs). /elasticloadbalancing/faqs/;How do I manage both Application and Classic Load Balancers simultaneously?;The ELB Console will allow you to manage Application and Classic Load Balancers from the same interface. If you are using the command-line interface (CLI) or a software development kit (SDK), you will use a different ‘service’ for Application Load Balancers. For example, in the CLI you will describe your Classic Load Balancers using `aws elb describe-load-balancers` and your Application Load Balancers using `aws elbv2 describe-load-balancers`. /elasticloadbalancing/faqs/;Can I convert my Classic Load Balancer to an Application Load Balancer (and vice-versa)?;No, you cannot convert one load balancer type into another. /elasticloadbalancing/faqs/;Can I migrate to Application Load Balancer from Classic Load Balancer?;Yes. You can migrate to Application Load Balancer from Classic Load Balancer using one of the options listed in this document. /elasticloadbalancing/faqs/;Can I use an Application Load Balancer as a Layer-4 load balancer?;No. If you need Layer-4 features, you should use Network Load Balancer. /elasticloadbalancing/faqs/;Can I use a single Application Load Balancer for handling HTTP and HTTPS requests?;Yes, you can add listeners for HTTP port 80 and HTTPS port 443 to a single Application Load Balancer. /elasticloadbalancing/faqs/;Can I get a history of Application Load Balancing API calls made on my account for security analysis and operational troubleshooting purposes?;Yes. To receive a history of Application Load Balancing API calls made on your account, use AWS CloudTrail. /elasticloadbalancing/faqs/;Does an Application Load Balancer support HTTPS termination?;Yes, you can terminate HTTPS connections on the Application Load Balancer. You must install a Secure Sockets Layer (SSL) certificate on your load balancer. The load balancer uses this certificate to terminate the connection and then decrypt requests from clients before sending them to targets. /elasticloadbalancing/faqs/;What are the steps to get an SSL certificate?;You can either use AWS Certificate Manager to provision an SSL/TLS certificate or you can obtain the certificate from other sources by creating the certificate request, getting the certificate request signed by a CA, and then uploading the certificate using either AWS Certificate Manager or the AWS Identity and Access Management (IAM) service. /elasticloadbalancing/faqs/;How does an Application Load Balancer integrate with AWS Certificate Manager (ACM)?;An Application Load Balancer is integrated with AWS Certificate Manager (ACM). Integration with ACM simplifies binding a certificate to the load balancer, thereby streamlining the entire SSL offload process. Purchasing, uploading, and renewing SSL/TLS certificates is a complex, manual, and time-consuming process. With ACM integration with Application Load Balancer, this whole process has been shortened to simply requesting a trusted SSL/TLS certificate and selecting the ACM certificate to provision it with the load balancer. /elasticloadbalancing/faqs/;Is back-end server authentication supported with an Application Load Balancer?;No, only encryption is supported to the back-ends with an Application Load Balancer. 
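As a hedged illustration of the HTTPS termination and ACM integration described above, creating an HTTPS listener on an Application Load Balancer with an ACM certificate might look like the boto3 sketch below; the ARNs and security policy are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Terminate HTTPS on an Application Load Balancer using an ACM certificate.
# The load balancer, certificate, and target group ARNs are placeholders.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example/1234567890abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/1234567890abcdef",
    }],
)
```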
/elasticloadbalancing/faqs/;How can I enable Server Name Indication (SNI) for my Application Load Balancer?;SNI is automatically enabled when you associate more than one TLS certificate with the same secure listener on a load balancer. Similarly, SNI mode for a secure listener is automatically disabled when you have only one certificate associated to a secure listener. /elasticloadbalancing/faqs/;Can I associate multiple certificates for the same domain to a secure listener?;Yes, you can associate multiple certificates for the same domain to a secure listener. For example, you can associate: /elasticloadbalancing/faqs/;Is IPv6 supported with an Application Load Balancer?;Yes, IPv6 is supported with an Application Load Balancer. /elasticloadbalancing/faqs/;How do you set up rules on an Application Load Balancer?;You can configure rules for each of the listeners on the load balancer. The rules include conditions and corresponding actions if the conditions are satisfied. The supported conditions are Host header, path, HTTP headers, methods, query parameters, and source IP classless inter-domain routing (CIDR). The supported actions are redirect, fixed response, authenticate, and forward. Once you have set this up, the load balancer will use the rules to determine how a particular HTTP request should be routed. You can use multiple conditions and actions in a rule, and in each condition you can specify a match on multiple values. /elasticloadbalancing/faqs/;Are there limits on the resources for an Application Load Balancer?;Your AWS account has these limits for an Application Load Balancer. /elasticloadbalancing/faqs/;How can I protect my web applications behind a load balancer from web attacks?;You can integrate your Application Load Balancer with AWS Web Application Firewall (WAF), a web application firewall that helps protect web applications from attacks by allowing you to configure rules based on IP addresses, HTTP headers, and custom uniform resource identifier (URI) strings. Using these rules, AWS WAF can block, allow, or monitor (count) web requests for your web application. Please see the AWS WAF developer guide for more information. /elasticloadbalancing/faqs/;Can I load balance to any arbitrary IP address?;You can use any IP address from the load balancer’s VPC CIDR for targets within the load balancer’s VPC, and any IP address from RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) or the RFC 6598 range (100.64.0.0/10) for targets located outside the load balancer’s VPC (for example, targets in a peered VPC, Amazon EC2-Classic, and on-premises locations reachable over AWS Direct Connect or a VPN connection). /elasticloadbalancing/faqs/;How can I load balance applications distributed across a VPC and on-premises location?;There are various ways to achieve hybrid load balancing. If an application runs on targets distributed between a VPC and an on-premises location, you can add them to the same target group using their IP addresses. To migrate to AWS without impacting your application, gradually add VPC targets to the target group and remove on-premises targets from the target group. /elasticloadbalancing/faqs/;How can I load balance to EC2-Classic instances?;You cannot load balance to EC2-Classic instances when registering their Instance IDs as targets. However, if you link these EC2-Classic instances to the load balancer's VPC using ClassicLink and use the private IPs of these EC2-Classic instances as targets, then you can load balance to the EC2-Classic instances. 
If you are using EC2 Classic instances today with a Classic Load Balancer, you can easily migrate to an Application Load Balancer. /elasticloadbalancing/faqs/;How do I enable cross-zone load balancing in Application Load Balancer?;Cross-zone load balancing is already enabled by default in Application Load Balancer. /elasticloadbalancing/faqs/;When should I authenticate users using the Application Load Balancer’s integration with Amazon Cognito vs. the Application Load Balancer’s native support for OpenID Connect (OIDC) identity providers (IdPs)?;You should use authentication through Amazon Cognito if: /elasticloadbalancing/faqs/;What type of redirects does Application Load Balancer support?;The following three types of redirects are supported. /elasticloadbalancing/faqs/;What content types does ALB support for the message body of fixed-response action?;The following content types are supported: text/plain, text/css, text/html, application/javascript, application/json. /elasticloadbalancing/faqs/;How does AWS Lambda invocation via Application Load Balancer work?;HTTP(S) requests received by a load balancer are processed by the content-based routing rules. If the request content matches the rule—with an action to forward it to a target group through a Lambda function as a target—then the corresponding Lambda function is invoked. The content of the request (including headers and body) is passed on to the Lambda function in JavaScript object notation (JSON) format. The response from the Lambda function should be in JSON format. The response from the Lambda function is transformed into an HTTP response and sent to the client. The load balancer invokes your Lambda function using the AWS Lambda Invoke API, and requires that you provide invoke permissions for your Lambda function to the Elastic Load Balancing service. /elasticloadbalancing/faqs/;Does Lambda invocation via Application Load Balancer support requests over both HTTP and HTTPS protocol?;Yes. Application Load Balancer supports Lambda invocation for requests over both HTTP and HTTPS protocol. /elasticloadbalancing/faqs/;In which AWS Regions can I use Lambda functions as targets with the Application Load Balancer?;You can use Lambda as a target with the Application Load Balancer in US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and GovCloud (US-West) AWS Regions. /elasticloadbalancing/faqs/;Is the Application Load Balancer available in AWS Local Zones?;Yes, Application Load Balancer is available in the Local Zone in Los Angeles. Within the Los Angeles Local Zone, Application Load Balancer will operate in a single subnet and scale automatically to meet varying levels of application load without manual intervention. /elasticloadbalancing/faqs/;How does Application Load Balancer pricing work?;You are charged for each hour or partial hour that an Application Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used per hour. /elasticloadbalancing/faqs/;What is a Load Balancer Capacity Unit (LCU)?;An LCU is a new metric for determining how you pay for an Application Load Balancer. 
An LCU defines the maximum resource consumed in any one of the dimensions (new connections, active connections, bandwidth, and rule evaluations) on which the Application Load Balancer processes your traffic. /elasticloadbalancing/faqs/;Will I be billed on Classic Load Balancers by LCU?;No, Classic Load Balancers will continue to be billed for bandwidth and hourly usage. /elasticloadbalancing/faqs/;How do I know the number of LCUs an Application Load Balancer is using?;We expose the usage of all four dimensions that constitute an LCU via Amazon CloudWatch. /elasticloadbalancing/faqs/;Will I be billed on all the dimensions in an LCU?;No. The number of LCUs per hour will be determined based on the maximum resource consumed among the four dimensions that constitute an LCU. /elasticloadbalancing/faqs/;Will I be billed on partial LCUs?;Yes. /elasticloadbalancing/faqs/;Is a free tier offered on an Application Load Balancer for new AWS accounts?;Yes. For new AWS accounts, a free tier for an Application Load Balancer offers 750 hours and 15 LCUs. This free tier offer is only available to new AWS customers, and is available for 12 months following your AWS sign-up date. /elasticloadbalancing/faqs/;Can I use a combination of Application Load Balancer and Classic Load Balancer as part of my free tier?;Yes. You can use both Classic and Application Load Balancers for 15 GB and 15 LCUs respectively. The 750 load balancer hours are shared between both Classic and Application Load Balancers. /elasticloadbalancing/faqs/;What are rule evaluations?;Rule evaluations are defined as the product of the number of rules processed and the request rate, averaged over an hour. /elasticloadbalancing/faqs/;How does the LCU billing work with different certificate types and key sizes?;Certificate key size affects only the number of new connections per second in the LCU computation for billing. The following table lists the value of this dimension for different key sizes for RSA and ECDSA certificates. /elasticloadbalancing/faqs/;Am I charged for regional AWS data transfer when enabling cross-zone load balancing in Application Load Balancer?;No. Since cross-zone load balancing is always on with Application Load Balancer, you are not charged for this type of regional data transfer. /elasticloadbalancing/faqs/;Is user authentication in Application Load Balancer charged separately?;No. There is no separate charge for enabling the authentication functionality in Application Load Balancer. When using Amazon Cognito with Application Load Balancer, Amazon Cognito pricing will apply. /elasticloadbalancing/faqs/;How do you charge for Application Load Balancer usage with AWS Lambda targets?;You are charged as usual for each hour or partial hour that an Application Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used per hour. For Lambda targets, each LCU offers 0.4 GB processed bytes per hour, 25 new connections per second, 3,000 active connections per minute, and 1,000 rule evaluations per second. For the processed bytes dimension, each LCU provides 0.4 GB per hour for Lambda targets versus 1 GB per hour for all other target types like Amazon EC2 instances, containers, and IP addresses. Please note that usual AWS Lambda charges apply to Lambda invocations by Application Load Balancer. 
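The Lambda invocation flow described above involves three pieces: a lambda-type target group, invoke permission for the Elastic Load Balancing service, and registering the function as a target. A hedged boto3 sketch follows, with the function ARN and names as placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
lam = boto3.client("lambda")

# Placeholder Lambda function ARN.
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:example-handler"

# Target groups with TargetType 'lambda' take no protocol or port.
tg = elbv2.create_target_group(Name="example-lambda-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Grant the Elastic Load Balancing service permission to invoke the function.
lam.add_permission(
    FunctionName=function_arn,
    StatementId="AllowInvokeFromALB",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)

# Register the function as the target; the ALB invokes it for matching requests.
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": function_arn}])
```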
/elasticloadbalancing/faqs/;How can I differentiate the bytes processed by Lambda targets versus bytes processed by other targets (Amazon EC2, containers, and on-premises servers)?;Application Load Balancers emit two new CloudWatch metrics. The LambdaTargetProcessedBytes metric indicates the bytes processed by Lambda targets, and the StandardProcessedBytes metric indicates bytes processed by all other target types. /elasticloadbalancing/faqs/;Can I create a TCP or UDP (Layer 4) listener for my Network Load Balancer?;Yes. Network Load Balancers support TCP, UDP, and TCP+UDP (Layer 4) listeners, as well as TLS listeners. /elasticloadbalancing/faqs/;What are the key features available with the Network Load Balancer?;Network Load Balancer provides both TCP and UDP (Layer 4) load balancing. It is architected to handle millions of requests per second and sudden volatile traffic patterns, and provides extremely low latencies. In addition, Network Load Balancer also supports TLS termination, preserves the source IP of the clients, and provides stable IP support and zonal isolation. It also supports long-running connections that are useful for WebSocket type applications. /elasticloadbalancing/faqs/;Can Network Load Balancer process both TCP and UDP protocol traffic on the same port?;Yes. To achieve this, you can use a TCP+UDP listener. For example, for a DNS service using both TCP and UDP, you can create a TCP+UDP listener on port 53, and the load balancer will process traffic for both UDP and TCP requests on that port. You must associate a TCP+UDP listener with a TCP+UDP target group. /elasticloadbalancing/faqs/;How does Network Load Balancer compare to what I get with the TCP listener on a Classic Load Balancer?;Network Load Balancer preserves the source IP of the client, which is not preserved in the Classic Load Balancer. Customers can use proxy protocol with Classic Load Balancer to get the source IP. Network Load Balancer automatically provides a static IP per Availability Zone (AZ) to the load balancer and also enables assigning an Elastic IP to the load balancer per AZ. This is not supported with Classic Load Balancer. /elasticloadbalancing/faqs/;Can I migrate to Network Load Balancer from Classic Load Balancer?;Yes. You can migrate to Network Load Balancer from Classic Load Balancer using one of the options listed in this document. /elasticloadbalancing/faqs/;Are there limits on the resources for my Network Load Balancer?;Yes, please refer to the Network Load Balancer limits documentation for more information. /elasticloadbalancing/faqs/;Can I use the AWS Management Console to set up my Network Load Balancer?;Yes, you can use the AWS Management Console, AWS CLI, or the API to set up a Network Load Balancer. /elasticloadbalancing/faqs/;Can I use the existing API for Classic Load Balancers for my Network Load Balancers?;No. To create a Classic Load Balancer, use the 2012-06-01 API. To create a Network Load Balancer or an Application Load Balancer, use the 2015-12-01 API. /elasticloadbalancing/faqs/;Can I create my Network Load Balancer in a single Availability Zone?;Yes, you can create your Network Load Balancer in a single AZ by providing a single subnet when you create the load balancer. /elasticloadbalancing/faqs/;Does Network Load Balancer support DNS regional and zonal fail-over?;Yes, you can use Amazon Route 53 health checking and DNS failover features to enhance the availability of the applications running behind Network Load Balancers. 
Using Route 53 DNS failover, you can run applications in multiple AWS Availability Zones and designate alternate load balancers for failover across Regions. /elasticloadbalancing/faqs/;Can I have a Network Load Balancer with a mix of ELB-provided IPs and Elastic IPs or assigned private IPs?;No. A Network Load Balancer’s addresses must be completely controlled by you, or completely controlled by ELB. This is to ensure that when using Elastic IPs with a Network Load Balancer, all addresses known to your clients do not change. /elasticloadbalancing/faqs/;Can I assign more than one EIP to my Network Load Balancer in each subnet?;No. For each associated subnet a Network Load Balancer is in, the Network Load Balancer can only support a single public/internet facing IP address. /elasticloadbalancing/faqs/;If I remove/delete a Network Load Balancer what will happen to the Elastic IP addresses that were associated with it?;The Elastic IP Addresses that were associated with your load balancer will return to your allocated pool and be available for future use. /elasticloadbalancing/faqs/;Does Network Load Balancer support internal load balancers?;Network Load Balancer can be set up as an internet-facing load balancer or an internal load balancer, similar to what is possible with Application Load Balancer and Classic Load Balancer. /elasticloadbalancing/faqs/;Can the internal Network Load balancer support more than one private IP in each subnet?;No. For each associated subnet that a load balancer is in, the Network Load Balancer can only support a single private IP. /elasticloadbalancing/faqs/;Can I set up Websockets with my Network Load Balancer?;Yes, configure TCP listeners that route the traffic to the targets that implement the WebSockets protocol (https://tools.ietf.org/html/rfc6455). Because WebSockets is a layer 7 protocol and Network Load Balancer is operating at layer 4, no special handling exists in Network Load Balancer for WebSockets or other higher level protocols. /elasticloadbalancing/faqs/;Can I load balance to any arbitrary IP address?;Yes. You can use any IP address from the load balancer’s VPC CIDR for targets within the load balancer’s VPC and any IP address from RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) or the RFC 6598 range (100.64.0.0/10) for targets located outside the load balancer’s VPC (EC2-Classic and on-premises locations reachable over AWS Direct Connect). Load balancing to IP address target type is supported for TCP listeners only, and is currently not supported for UDP listeners. /elasticloadbalancing/faqs/;Can I use Network Load Balancer to set up AWS PrivateLink?;Yes, Network Load Balancers with TCP and TLS listeners can be used to set up AWS PrivateLink. You cannot set up PrivateLink with UDP listeners on Network Load Balancers. /elasticloadbalancing/faqs/;What is a UDP flow?;While user datagram protocol (UDP) is connectionless, the load balancer maintains UDP flow state based on a 5-tuple hash, ensuring that packets sent in the same context are consistently forwarded to the same target. The flow is considered active as long as traffic is flowing and until the idle timeout is reached. Once the timeout threshold is reached, the load balancer will forget the affinity, and the incoming UDP packet will be considered a new flow and load-balanced to a new target. /elasticloadbalancing/faqs/;What is the idle timeout supported by Network Load Balancer?;Network Load Balancer idle timeout for TCP connections is 350 seconds. The idle timeout for UDP flows is 120 seconds. 
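As a hedged sketch of the TCP+UDP listener example above (a DNS-style service on port 53), the boto3 calls might look like the following; the subnet, VPC, and name values are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing Network Load Balancer (subnet ID is a placeholder).
nlb = elbv2.create_load_balancer(
    Name="example-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# A TCP_UDP target group and listener handle both protocols on the same port (for example, DNS on 53).
tg = elbv2.create_target_group(
    Name="example-dns-targets",
    Protocol="TCP_UDP",
    Port=53,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP_UDP",
    Port=53,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```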
/elasticloadbalancing/faqs/;What is the benefit of targeting containers behind a load balancer with IP addresses instead of instance IDs?;Each container on an instance can now have its own security group, and does not need to share security rules with other containers. You can attach security groups to an ENI, and each ENI on an instance can have a different security group. You can map a container to the IP address of a particular ENI to associate security group(s) per container. Load balancing using IP addresses also allows multiple containers running on an instance to use the same port (say port 80). The ability to use the same port across containers allows containers on an instance to communicate with each other through well-known ports instead of random ports. /elasticloadbalancing/faqs/;How can I load balance applications distributed across a VPC and on-premises location?;There are various ways to achieve hybrid load balancing. If an application runs on targets distributed between a VPC and an on-premises location, you can add them to the same target group using their IP addresses. To migrate to AWS without impacting your application, gradually add VPC targets to the target group and remove on-premises targets from the target group. You can also use separate load balancers for VPC and on-premises targets and use DNS weighting to achieve weighted load balancing between VPC and on-premises targets. /elasticloadbalancing/faqs/;How can I load balance to EC2-Classic instances?;You cannot load balance to EC2-Classic instances when registering their Instance IDs as targets. However, if you link these EC2-Classic instances to the load balancer's VPC using ClassicLink and use the private IPs of these EC2-Classic instances as targets, then you can load balance to the EC2-Classic instances. If you are using EC2 Classic instances today with a Classic Load Balancer, you can easily migrate to a Network Load Balancer. /elasticloadbalancing/faqs/;How do I enable cross-zone load balancing in Network Load Balancer?;You can enable cross-zone load balancing only after creating your Network Load Balancer. You achieve this by editing the load balancing attributes section and then selecting the cross-zone load balancing support checkbox. /elasticloadbalancing/faqs/;Am I charged for regional AWS data-transfer when I enable cross-zone load balancing in Network Load Balancer?;Yes, you will be charged for regional data transfer between Availability Zones with Network Load Balancer when cross-zone load balancing is enabled. Check the charges in the data transfer section of the Amazon EC2 On-Demand Pricing page. /elasticloadbalancing/faqs/;Is there any impact of cross-zone load balancing on Network Load Balancer limits?;Yes. Network Load Balancer currently supports 200 targets per Availability Zone. For example, if you are in two AZs, you can have up to 400 targets registered with Network Load Balancer. If cross-zone load balancing is on, then the maximum targets reduce from 200 per AZ to 200 per load balancer. So, in the example above: When cross-zone load balancing is on, even though your load balancer is in two AZs, you are limited to 200 targets that can be registered to the load balancer. /elasticloadbalancing/faqs/;Does Network Load Balancer support TLS termination?;Yes, you can terminate TLS connections on the Network Load Balancer. You must install an SSL certificate on your load balancer. 
The load balancer uses this certificate to terminate the connection and then decrypt requests from clients before sending them to targets. /elasticloadbalancing/faqs/;Is source IP preserved when terminating TLS on Network Load Balancer?;Source IP continues to be preserved even if you terminate TLS on the Network Load Balancer. /elasticloadbalancing/faqs/;What are the steps to get an SSL certificate?;You can either use AWS Certificate Manager to provision an SSL/TLS certificate, or you can obtain the certificate from other sources by creating the certificate request, getting the certificate request signed by a certificate authority (CA), and then uploading the certificate either using AWS Certificate Manager (ACM) or the AWS Identity and Access Management (IAM) service. /elasticloadbalancing/faqs/;How can I enable Server Name Indication (SNI) for my Network Load Balancer?;SNI is automatically enabled when you associate more than one TLS certificate with the same secure listener on a load balancer. Similarly, SNI mode for a secure listener is automatically disabled when you have only one certificate associated with a secure listener. /elasticloadbalancing/faqs/;How does the Network Load Balancer integrate with AWS Certificate Manager (ACM) or Identity Access Manager (IAM)?;Network Load Balancer is integrated with AWS Certificate Manager (ACM). Integration with ACM makes it very simple to bind a certificate to the load balancer, thereby making the entire SSL offload process very easy. Purchasing, uploading, and renewing SSL/TLS certificates is a time-consuming, manual, and complex process. With ACM integration with Network Load Balancer, this whole process has been shortened to simply requesting a trusted SSL/TLS certificate and selecting the ACM certificate to provision it with the load balancer. Once you create a Network Load Balancer, you can now configure a TLS listener followed by an option to select a certificate from either ACM or Identity Access Manager (IAM). This experience is similar to what you have in Application Load Balancer or Classic Load Balancer. /elasticloadbalancing/faqs/;Is back-end server authentication supported with Network Load Balancer?;No, only encryption is supported to the back-ends with Network Load Balancer. /elasticloadbalancing/faqs/;What are the certificate types supported by Network Load Balancer?;Network Load Balancer only supports RSA certificates with 2K key size. We currently do not support RSA certificate key sizes greater than 2K or ECDSA certificates on the Network Load Balancer. /elasticloadbalancing/faqs/;In which AWS Regions is TLS Termination on Network Load Balancer supported?;You can use TLS Termination on Network Load Balancer in US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and GovCloud (US-West) AWS Regions. /elasticloadbalancing/faqs/;How does Network Load Balancer pricing work?;You are charged for each hour or partial hour that a Network Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used by Network Load Balancer per hour. /elasticloadbalancing/faqs/;What is a Load Balancer Capacity Unit (LCU)?;An LCU is a new metric for determining how you pay for a Network Load Balancer.
An LCU defines the maximum resource consumed in any one of the dimensions (new connections/flows, active connections/flows, and bandwidth) in which the Network Load Balancer processes your traffic. /elasticloadbalancing/faqs/;What are the LCU metrics for TCP traffic on Network Load Balancer?;The LCU metrics for the TCP traffic are as follows: /elasticloadbalancing/faqs/;What are the LCU metrics for UDP traffic on Network Load Balancer?;The LCU metrics for the UDP traffic are as follows: /elasticloadbalancing/faqs/;What are the LCU metrics for TLS traffic on Network Load Balancer?;The LCU metrics for the TLS traffic are as follows: /elasticloadbalancing/faqs/;Is new connections/flows per second the same as requests/second?;No. Multiple requests can be sent in a single connection. /elasticloadbalancing/faqs/;Will I be billed on Classic Load Balancers by LCU?;No. Classic Load Balancers will continue to be billed for bandwidth and hourly charge. /elasticloadbalancing/faqs/;How do I know the number of LCUs a Network Load Balancer is using?;We will expose the usage of all three dimensions that constitute an LCU via Amazon CloudWatch. /elasticloadbalancing/faqs/;Will I be billed on all the dimensions in an LCU?;No. The number of LCUs per hour will be determined based on the maximum resource consumed amongst the three dimensions that constitute an LCU. /elasticloadbalancing/faqs/;Will I be billed on partial LCUs?;Yes. /elasticloadbalancing/faqs/;Is a free tier offered on a Network Load Balancer for new AWS accounts?;Yes. For new AWS accounts, a free tier for a Network Load Balancer offers 750 hours and 15 LCUs. This free tier offer is only available to new AWS customers, and is available for 12 months following your AWS sign-up date. /elasticloadbalancing/faqs/;Can I use a combination of Network Load Balancer, Application Load Balancer and Classic Load Balancer as part of my free tier?;Yes. You can use Application and Network each for 15 LCUs and Classic for 15 GB respectively. The 750 load balancer hours are shared between Application, Network, and Classic Load Balancers. /elasticloadbalancing/faqs/;When should I use Gateway Load Balancer, as opposed to Network Load Balancer or Application Load Balancer?;You should use Gateway Load Balancer when deploying inline virtual appliances where network traffic is not destined for the Gateway Load Balancer itself. Gateway Load Balancer transparently passes all Layer 3 traffic through third-party virtual appliances, and is invisible to the source and destination of the traffic. For more details on how these load balancers compare, see the features comparison page. /elasticloadbalancing/faqs/;Where is Gateway Load Balancer available?;Gateway Load Balancer is available in the following regions: AWS GovCloud (US-East), AWS GovCloud (US-West), US East (N. Virginia) - except in zone use1-az3, US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (Sao Paulo), EU (Ireland), EU (Frankfurt), EU (Stockholm), EU (London), EU (Paris), EU (Milan), Africa (Cape Town), Middle East (Bahrain), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Osaka), China (Beijing), China (Ningxia). /elasticloadbalancing/faqs/;Is Gateway Load Balancer deployed per Region or per Availability Zone (AZ)?;Gateway Load Balancer runs within one AZ.
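As a complement to the cross-zone load balancing question above, the following is a minimal boto3 sketch of flipping the cross-zone attribute on an existing Network Load Balancer; the load balancer ARN is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN of an existing Network Load Balancer.
nlb_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/net/my-nlb/0123456789abcdef"
)

# Enable cross-zone load balancing; note the inter-AZ data-transfer charges mentioned above.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=nlb_arn,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)

# Verify the attribute took effect.
attrs = elbv2.describe_load_balancer_attributes(LoadBalancerArn=nlb_arn)
print(attrs["Attributes"])
```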
/elasticloadbalancing/faqs/;What are the key features available with the Gateway Load Balancer?;Gateway Load Balancer provides both Layer 3 gateway and Layer 4 load balancing capabilities. It is a transparent bump-in-the-wire device that does not change any part of the packet. It is architected to handle millions of requests/second and volatile traffic patterns, while introducing extremely low latency. See Gateway Load Balancer features in this table. /elasticloadbalancing/faqs/;Does Gateway Load Balancer perform TLS termination?;Gateway Load Balancer does not perform TLS termination and does not maintain any application state. These functions are performed by the third-party virtual appliances it directs traffic to, and receives traffic from. /elasticloadbalancing/faqs/;Does Gateway Load Balancer maintain application state?;Gateway Load Balancer does not maintain application state, but it maintains flow stickiness to a specific appliance using 5-tuple (for TCP/UDP flows) or 3-tuple (for non-TCP/UDP flows). /elasticloadbalancing/faqs/;How does Gateway Load Balancer define a flow?;By default, Gateway Load Balancer defines a flow as a combination of a 5-tuple that comprises Source IP, Destination IP, Protocol, Source Port, and Destination Port. Using the default 5-tuple hash, Gateway Load Balancer makes sure that both directions of a flow (i.e., source to destination, and destination to source) are consistently forwarded to the same target. The flow is considered active as long as traffic is flowing and until the idle timeout is reached. Once the timeout threshold is reached, the load balancer will forget the affinity, and incoming traffic will be considered a new flow and may be load-balanced to a new target. /elasticloadbalancing/faqs/;How does Gateway Load Balancer handle the failure of one virtual appliance instance in a single Availability Zone?;When a single virtual appliance instance fails, Gateway Load Balancer removes it from the routing list and reroutes traffic to a healthy appliance instance. /elasticloadbalancing/faqs/;How does Gateway Load Balancer handle the failure of all virtual appliances within a single AZ?;If all virtual appliances within an Availability Zone fail, Gateway Load Balancer will drop the network traffic. We recommend deploying Gateway Load Balancers in multiple AZs for greater availability. If all appliances fail in one AZ, scripts can be used to either add new appliances or direct traffic to a Gateway Load Balancer in a different AZ. /elasticloadbalancing/faqs/;Can I configure an appliance to be a target for more than one Gateway Load Balancer?;Yes, multiple Gateway Load Balancers can point to the same set of virtual appliances. /elasticloadbalancing/faqs/;What type of listener can I create for my Gateway Load Balancer?;Gateway Load Balancer is a transparent bump-in-the-wire device and listens to all types of IP traffic (including TCP, UDP, ICMP, GRE, ESP and others). Hence, only an IP listener is created on a Gateway Load Balancer. /elasticloadbalancing/faqs/;Are there limits on the resources for my Gateway Load Balancer?;Yes, please refer to the Gateway Load Balancer limits documentation for more information. /elasticloadbalancing/faqs/;Can I use the AWS Management Console to set up my Gateway Load Balancer?;Yes, you can use the AWS Management Console, AWS CLI, or the API to set up a Gateway Load Balancer (see the sketch below).
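A minimal boto3 sketch of setting up a Gateway Load Balancer through the API, assuming placeholder subnet and VPC IDs; the target group uses the GENEVE protocol on port 6081, and the listener forwards all IP traffic to it.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create the Gateway Load Balancer in one or more subnets (placeholder IDs).
gwlb = elbv2.create_load_balancer(
    Name="my-gwlb",
    Type="gateway",
    Subnets=["subnet-0123456789abcdef0"],
)
gwlb_arn = gwlb["LoadBalancers"][0]["LoadBalancerArn"]

# Gateway Load Balancer target groups use the GENEVE protocol on port 6081.
tg = elbv2.create_target_group(
    Name="appliance-targets",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The listener has no protocol or port of its own; it forwards all IP traffic to the appliances.
elbv2.create_listener(
    LoadBalancerArn=gwlb_arn,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```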
/elasticloadbalancing/faqs/;Can I create my Gateway Load Balancer in a single Availability Zone?;Yes, you can create your Gateway Load Balancer in a single Availability Zone by providing a single subnet when you create the load balancer. However, we recommend using multiple Availability Zones for improved availability. You cannot add or remove Availability Zones for a Gateway Load Balancer after you create it. /elasticloadbalancing/faqs/;How do I enable cross-zone load balancing in Gateway Load Balancer?;By default, cross-zone load balancing is disabled. You can enable cross-zone load balancing only after creating your Gateway Load Balancer. You achieve this by editing the load balancing attributes section and then by selecting the cross-zone load balancing support checkbox. /elasticloadbalancing/faqs/;Am I charged for AWS data-transfer when I enable cross-zone load balancing in Gateway Load Balancer?;Yes, you will be charged for data transfer between Availability Zones with Gateway Load Balancer when cross-zone load balancing is enabled. Check the charges in the data-transfer section of the Amazon EC2 On-Demand Pricing page. /elasticloadbalancing/faqs/;Is there any impact of cross-zone load balancing on Gateway Load Balancer limits?;Yes. Gateway Load Balancer currently supports 300 targets per Availability Zone. For example, if you create a Gateway Load Balancer in three Availability Zones, you can have up to 900 targets registered. If cross-zone load balancing is on, then the maximum number of targets reduces from 300 per Availability Zone to 300 per Gateway Load Balancer. /elasticloadbalancing/faqs/;How does Gateway Load Balancer pricing work?;You are charged for each hour or partial hour that a Gateway Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used by Gateway Load Balancer per hour. /elasticloadbalancing/faqs/;What is a Load Balancer Capacity Unit (LCU)?;An LCU is an Elastic Load Balancing metric for determining how you pay for a Gateway Load Balancer. An LCU defines the maximum resource consumed in any one of the dimensions (new connections/flows, active connections/flows, and bandwidth) in which the Gateway Load Balancer processes your traffic. /elasticloadbalancing/faqs/;What are the LCU metrics for the Gateway Load Balancer?;The LCU metrics for TCP traffic are as follows: /elasticloadbalancing/faqs/;Why do I need a Gateway Load Balancer Endpoint?;In order to be valuable, virtual appliances need to introduce as little additional latency as possible, and traffic flowing to and from the virtual appliance must follow a secure connection. Gateway Load Balancer Endpoints create the secured, low-latency connections necessary to meet these requirements. /elasticloadbalancing/faqs/;How do Gateway Load Balancer Endpoints help with centralization?;Using a Gateway Load Balancer Endpoint, appliances can reside in different AWS accounts and VPCs. This allows appliances to be centralized in one location for easier management and reduced operational overhead. /elasticloadbalancing/faqs/;How do Gateway Load Balancer Endpoints work?;Gateway Load Balancer Endpoints are a new type of VPC endpoint that uses PrivateLink technology. As network traffic flows from a source (an Internet Gateway, a VPC, etc.) to the Gateway Load Balancer, and back, a Gateway Load Balancer Endpoint ensures private connectivity between the two. All traffic flows over the AWS network and data is never exposed to the internet, increasing both security and performance.
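As an illustration of the Gateway Load Balancer Endpoint flow described above, the following boto3 sketch creates an endpoint service backed by a Gateway Load Balancer and then a Gateway Load Balancer Endpoint in a consumer VPC; all IDs and ARNs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# 1) In the appliance (provider) account: expose the Gateway Load Balancer as an endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/gwy/my-gwlb/0123456789abcdef"  # placeholder GWLB ARN
    ],
    AcceptanceRequired=False,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# 2) In the consumer VPC: create the Gateway Load Balancer Endpoint that traffic is routed through.
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName=service_name,
    VpcId="vpc-0fedcba9876543210",           # placeholder consumer VPC
    SubnetIds=["subnet-0fedcba9876543210"],  # placeholder subnet
)
```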
/elasticloadbalancing/faqs/;How are PrivateLink Interface endpoints different than Gateway Load Balancer Endpoints?;A PrivateLink Interface endpoint is paired with a Network Load Balancer (NLB) in order to distribute TCP and UDP traffic that is destined for the web applications. In contrast, Gateway Load Balancer Endpoints are used with Gateway Load Balancers to connect the source and destination of traffic. Traffic flows from the Gateway Load Balancer Endpoint to the Gateway Load Balancer, through the virtual appliances, and back to the destination over secured PrivateLink connections. /elasticloadbalancing/faqs/;How many Gateway Load Balancer Endpoints can I connect to one Gateway Load Balancer?;A Gateway Load Balancer Endpoint is a VPC endpoint, and there is no limit on how many VPC endpoints can connect to a service that uses Gateway Load Balancer. However, we recommend connecting no more than 50 Gateway Load Balancer Endpoints per one Gateway Load Balancer to reduce the risk of broader impact in case of service failure. /elasticloadbalancing/faqs/;Which operating systems does the Classic Load Balancer support?;The Classic Load Balancer supports Amazon EC2 instances with any operating system currently supported by the Amazon EC2 service. /elasticloadbalancing/faqs/;Which protocols does the Classic Load Balancer support?;The Classic Load Balancer supports load balancing of applications using HTTP, HTTPS (Secure HTTP), SSL (Secure TCP) and TCP protocols. /elasticloadbalancing/faqs/;What TCP ports can I load balance?;You can perform load balancing for the following TCP ports: /elasticloadbalancing/faqs/;Does the Classic Load Balancer support IPv6 traffic?;Yes. Each Classic Load Balancer has an associated IPv4, IPv6, and dualstack (both IPv4 and IPv6) DNS name. IPv6 is not supported in VPC. You can use an Application Load Balancer for native IPv6 support in VPC. /elasticloadbalancing/faqs/;Can I configure my Amazon EC2 instances to only accept traffic from Classic Load Balancers?;Yes. /elasticloadbalancing/faqs/;Can I configure a security group for the front-end of Classic Load Balancers?;If you are using Amazon Virtual Private Cloud, you can configure security groups for the front end of your Classic Load Balancers. /elasticloadbalancing/faqs/;Can I use a single Classic Load Balancer for handling HTTP and HTTPS requests?;Yes, you can map HTTP port 80 and HTTPS port 443 to a single Classic Load Balancer. /elasticloadbalancing/faqs/;How many connections will my load balanced Amazon EC2 instances need to accept from each Classic Load Balancer?;Classic Load Balancers do not cap the number of connections that they can attempt to establish with your load balanced Amazon EC2 instances. You can expect this number to scale with the number of concurrent HTTP, HTTPS, or SSL requests or the number of concurrent TCP connections that the Classic Load Balancers receive. /elasticloadbalancing/faqs/;Can I load balance Amazon EC2 instances launched using a paid AMI?;You can load balance Amazon EC2 instances launched using a paid AMI from AWS Marketplace. However, Classic Load Balancers do not support instances launched using a paid AMI from the Amazon DevPay site. /elasticloadbalancing/faqs/;Can I use Classic Load Balancers in Amazon Virtual Private Cloud?;Yes. See the Elastic Load Balancing web page. /elasticloadbalancing/faqs/;Can I get a history of Classic Load Balancer API calls made on my account for security analysis and operational troubleshooting purposes?;Yes.
To receive a history of Classic Load Balancer API calls made on your account, simply turn on CloudTrail in the AWS Management Console. /elasticloadbalancing/faqs/;Do Classic Load Balancers support SSL termination?;Yes, you can terminate SSL on Classic Load Balancers. You must install an SSL certificate on each load balancer. The load balancers use this certificate to terminate the connection and then decrypt requests from clients before sending them to the back-end instances. /elasticloadbalancing/faqs/;What are the steps to get an SSL certificate?;You can either use AWS Certificate Manager to provision an SSL/TLS certificate or you can obtain the certificate from other sources by creating the certificate request, getting the certificate request signed by a CA, and then uploading the certificate using the AWS Identity and Access Management (IAM) service. /elasticloadbalancing/faqs/;How do Classic Load Balancers integrate with AWS Certificate Manager (ACM)?;Classic Load Balancers are now integrated with AWS Certificate Manager (ACM). Integration with ACM makes it very simple to bind a certificate to each load balancer, thereby making the entire SSL offload process very easy. Typically, purchasing, uploading, and renewing SSL/TLS certificates is a time-consuming, manual, and complex process. With ACM integrated with Classic Load Balancers, this whole process has been shortened to simply requesting a trusted SSL/TLS certificate and selecting the ACM certificate to provision it with each load balancer. /elasticloadbalancing/faqs/;How do I enable cross-zone load balancing in Classic Load Balancer?;You can enable cross-zone load balancing using the console, the AWS CLI, or an AWS SDK (see the sketch after this block of questions). See the Cross-Zone Load Balancing documentation for more details. /elasticloadbalancing/faqs/;Am I charged for regional AWS data-transfer when I enable cross-zone load balancing in Classic Load Balancer?;No, you are not charged for regional data transfer between Availability Zones when you enable cross-zone load balancing for your Classic Load Balancer. /codestar/faqs/;What is AWS CodeStar?;AWS CodeStar is a cloud-based development service that provides the tools you need to quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, with built-in role-based policies that allow you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a unified project dashboard and integration with Atlassian JIRA software, a third-party issue tracking and project management tool. With the AWS CodeStar project dashboard, you can easily track your entire software development process, from a backlog work item to production code deployment. /codestar/faqs/;Why should I use AWS CodeStar?;You should use CodeStar whenever you want to quickly set up a software development project on AWS, whether you’re starting with a full set of tools for a team-based project or only setting up a trial project with a source repository. AWS CodeStar can also be used by anyone interested in learning more about continuous delivery by starting with a full toolchain for a sample project. AWS CodeStar guides you through the setup experience with project templates that set up real applications and can be modified at any point in the future to suit your needs.
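For the Classic Load Balancer cross-zone setting referenced above ("How do I enable cross-zone load balancing in Classic Load Balancer?"), the following is a minimal boto3 sketch using the classic ELB API; the load balancer name is a placeholder.

```python
import boto3

elb = boto3.client("elb")  # classic Elastic Load Balancing API

# Enable cross-zone load balancing on an existing Classic Load Balancer (placeholder name).
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-lb",
    LoadBalancerAttributes={"CrossZoneLoadBalancing": {"Enabled": True}},
)

# Confirm the current attribute value.
attrs = elb.describe_load_balancer_attributes(LoadBalancerName="my-classic-lb")
print(attrs["LoadBalancerAttributes"]["CrossZoneLoadBalancing"])
```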
/codestar/faqs/;What can I do with AWS CodeStar?;Start developing on AWS in minutes. AWS CodeStar makes it easy for you to set up your entire development and continuous delivery toolchain for coding, building, testing, and deploying your application code. To start a project, you can choose from a variety of AWS CodeStar templates for Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. When you choose a project template, the underlying AWS services are provisioned in minutes, allowing you to quickly start coding and deploying your applications. /codestar/faqs/;How much does AWS CodeStar cost?;"There is no additional charge for AWS CodeStar. You pay for the AWS resources (e.g., EC2 instances, Lambda executions, or S3 buckets) used in your CodeStar projects. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments." /codestar/faqs/;How do I get started with AWS CodeStar?;Getting started with AWS CodeStar can be done in a few minutes through the AWS CodeStar console. First, select one of the available CodeStar project templates, which will automatically provision all of the resources needed for your project. Once your project has been provisioned, you can see your running application from the “Application endpoints” tile. Use the steps in the CodeStar console to connect to the AWS CodeCommit source repository for your project and begin coding. You can use the project dashboard to track and manage changes in the release process and see the most recent project activity. /codestar/faqs/;What types of applications can I build with AWS CodeStar?;CodeStar can be used for building web applications, web services, and more. The applications run on Amazon EC2, AWS Elastic Beanstalk, or AWS Lambda. Project templates are available in several different programming languages including Java, Node.js (JavaScript), PHP, Python, and Ruby. /codestar/faqs/;How do I add, remove, or change users for my AWS CodeStar projects?;You can add, change, or remove users for your CodeStar project through the “Team” section of the CodeStar console. You can choose to grant the users Owner, Contributor, or Viewer permissions. You can also remove users or change their roles at any time. /codestar/faqs/;How do AWS CodeStar users relate to IAM users?;"CodeStar users are IAM users that are managed by CodeStar to provide pre-built, role-based access policies across your development environment. Because CodeStar users are built on IAM, you still get the administrative benefits of IAM. For example, if you add an existing IAM user to a CodeStar project, the existing global account policies in IAM are still enforced." /codestar/faqs/;Can I work on my AWS CodeStar projects directly from an IDE?;"Yes. By installing the AWS Toolkit for Eclipse or Visual Studio, you gain the ability to easily configure your local development environment to work with CodeStar projects. Once installed, developers can then select from a list of available CodeStar projects and have their development tooling automatically configured to clone and check out their project’s source code, all from within their IDE." /codestar/faqs/;How do I configure my project dashboard?;"Project dashboards can be configured to show the tiles you want, where you want them. To add or remove tiles, click on the “Tiles” drop-down on your project dashboard. To change the layout of your project dashboard, drag the tile to your desired position."
/codestar/faqs/;Are there third-party integrations that I can use with AWS CodeStar?;"AWS CodeStar works with Atlassian JIRA to integrate issue management with your projects. In addition, you can add partner actions to your project’s AWS CodePipeline. To see a list of the available CodePipeline actions, see the AWS CodePipeline integrations page." /codestar/faqs/;I am a third-party tools vendor. Can I integrate with AWS CodeStar?;We are starting to build out an integration program for AWS Partner Network (APN) members. If you are already an APN member and interested in learning more, please contact aws-codestar-request@amazon.com. /codestar/faqs/;Can I use AWS CodeStar to help manage my existing AWS applications?;No. AWS CodeStar helps customers quickly start new software projects on AWS. Each CodeStar project includes development tools, including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild and AWS CodeDeploy, that can be used on their own and with existing AWS applications. Customers who are interested in how these tools can help them with their existing AWS applications can visit the respective service pages to learn more. /codestar/faqs/;In what regions is AWS CodeStar available?;"See Regional Products and Services for details. The CodeStar console displays all of your development projects across all regions in a single, centralized view. Your CodeStar project will be saved to the region your console is set to." /codestar/faqs/;Can I use AWS CodeStar to launch applications in other regions?;No. CodeStar configures and manages Code services resources, like a CodeCommit repository, in the regions that you specify in your CodeStar project configuration. /codecommit/faqs/;What is AWS CodeCommit?;AWS CodeCommit is a secure, highly scalable, managed source control service that makes it easier for teams to collaborate on code. AWS CodeCommit eliminates the need for you to operate your own source control system or worry about scaling its infrastructure. You can use AWS CodeCommit to store anything from code to binaries, and it works seamlessly with your existing Git tools. /codecommit/faqs/;What is Git?;Git is an open-source distributed version control system. To work with AWS CodeCommit repositories, you use the Git command line interface (CLI) or any of the available Git clients. To learn more about Git, see the Git documentation. To learn more about using AWS CodeCommit with Git, see Getting Started with AWS CodeCommit. /codecommit/faqs/;Who should use AWS CodeCommit?;AWS CodeCommit is designed for software developers who need a secure, reliable, and scalable source control system to store and version their code. In addition, AWS CodeCommit can be used by anyone looking for an easy-to-use, fully managed data store that is version controlled. For example, IT administrators can use AWS CodeCommit to store their scripts and configurations. Web designers can use AWS CodeCommit to store HTML pages and images. /codecommit/faqs/;How does AWS CodeCommit compare to a versioned S3 bucket?;AWS CodeCommit is designed for collaborative software development. It manages batches of changes across multiple files, offers parallel branching, and includes version differencing (“diffing”). In comparison, Amazon S3 versioning supports recovering past versions of individual files but doesn’t support tracking batched changes that span multiple files or other features needed for collaborative software development.
/codecommit/faqs/;How do I create a repository?;You can create a repository from the AWS Management Console or by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the AWS CodeCommit APIs (see the sketch after this group of questions). /codecommit/faqs/;How do I update files in my repository?;You can edit your files directly from the CodeCommit console or you can use Git to work with the repository. For example, using Git commands, you can use the git clone command to make a local copy of the AWS CodeCommit repository. Make changes to the local files and use the git commit command when you’re ready to save the changes. Finally, use the git push command to upload the changes to the AWS CodeCommit repository. For step-by-step instructions, see Getting Started with AWS CodeCommit. /codecommit/faqs/;How do I import my existing repository to AWS CodeCommit?;You can use Git to import any existing Git repository to AWS CodeCommit. For other repositories, such as Subversion and Perforce, you can use a Git importer to first migrate it to a Git repository. For step-by-step instructions on importing Git repositories, see Migrate an Existing Repository to AWS CodeCommit. For step-by-step instructions on importing local or unversioned content, see the Git migration documentation. /codecommit/faqs/;Does AWS CodeCommit support Git submodules?;Yes. AWS CodeCommit can be used with Git repositories that include submodules. /codecommit/faqs/;What are the service limits when using AWS CodeCommit?;For information on the service limits, see Limits. /codecommit/faqs/;What is the maximum size for a single file that I can store in CodeCommit?;A single file in a repository cannot be more than 2 GB in size. /codecommit/faqs/;How do I back up my repository?;If you have a local copy of the repository from doing a full git clone, you can use that to restore data. If you want additional backups, there are multiple ways to do so. One way is to install Git on your backup server and run a scheduled job that uses the git clone command to take regular snapshots of your repository. You can use git pull instead of git clone if you want to copy only the incremental changes. Note that these operations may incur additional user and/or request charges based on how you set up the backup server and the polling frequency. /codecommit/faqs/;How do I restore a deleted AWS CodeCommit repository?;Deleting an AWS CodeCommit repository is a destructive one-way operation that cannot be undone. To restore a deleted repository, you will need to create the repository again and use either a backup or a local copy from a full clone to upload the data. We recommend using IAM policies along with MFA protection to restrict users who can delete repositories. For more details, see the Can I use AWS Identity and Access Management (IAM) to manage access to AWS CodeCommit? question in the Security section of the FAQ. /codecommit/faqs/;How do I manage code reviews with AWS CodeCommit?;CodeCommit supports code reviews and enables you to set permissions on branches of your code. Please see our documentation for help with code reviews or branch-level permissions. /codecommit/faqs/;How do I integrate my continuous integration system with AWS CodeCommit?;Continuous Integration (CI) systems can be configured to use Git to pull code from AWS CodeCommit. For examples on using CI systems with AWS CodeCommit, see our blog post on integrating AWS CodeCommit with Jenkins.
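The repository-creation question above mentions the AWS SDKs; the following boto3 sketch creates a repository and prints its HTTPS clone URL. The repository name and description are hypothetical.

```python
import boto3

codecommit = boto3.client("codecommit")

# Create a repository (name and description are placeholders).
resp = codecommit.create_repository(
    repositoryName="my-demo-repo",
    repositoryDescription="Example repository created through the CodeCommit API",
)

metadata = resp["repositoryMetadata"]
print("Repository ARN:", metadata["Arn"])
print("Clone over HTTPS:", metadata["cloneUrlHttp"])  # use this URL with git clone
```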
/codecommit/faqs/;How do I create webhooks using AWS CodeCommit?;In the Amazon SNS console, you can create an SNS topic with an HTTP endpoint and the desired URL for the webhook. From the AWS CodeCommit console, you can then configure that SNS topic to a repository event using triggers (see the sketch after this group of questions). Additionally, customers using AWS Chatbot can configure notifications to be sent to their Slack channels or Amazon Chime chat rooms. For more details, please visit here. /codecommit/faqs/;Can I get a history of AWS CodeCommit Git operations and API calls made in my account for security analysis and operational troubleshooting purposes?;Yes. You can review recent CodeCommit events, including Git operations and API calls, in the AWS CloudTrail console. For an ongoing record of events, you can create a trail and log events in an Amazon S3 bucket. For more information, see Logging AWS CodeCommit API Calls with AWS CloudTrail. /codecommit/faqs/;Can I use AWS Identity and Access Management (IAM) to manage access to AWS CodeCommit?;Yes. AWS CodeCommit supports resource-level permissions. For each AWS CodeCommit repository, you can specify which users can perform which actions. You can also specify AWS multi-factor authentication (MFA) for a CodeCommit action. This allows you to add an extra level of protection for destructive actions such as deleting repositories. In addition to the AWS CodeCommit APIs, you can also specify git pull and git push as actions to control access from Git clients. For example, you can create a read-only user for a repository by allowing that user access to git pull but not git push on the repository. For more information on using IAM with AWS CodeCommit, see Authentication and Access Control for AWS CodeCommit. For more information on authenticating API access using MFA, see Configuring MFA-Protected API Access. /codecommit/faqs/;What communication protocols are supported by AWS CodeCommit?;You can use either the HTTPS or SSH protocols or both to communicate with AWS CodeCommit. To use HTTPS, first install the AWS CLI. The AWS CLI installs a Git credential helper that can be configured with AWS credentials. It automatically signs all HTTPS requests to AWS CodeCommit using the Signature Version 4 signing specification. To use SSH, users create their own public-private key pairs and add their public keys to their IAM users. The private key encrypts the communication with AWS CodeCommit. For step-by-step instructions on setting up HTTPS and SSH access, see the Setting up AWS CodeCommit page. /codecommit/faqs/;What ports should I open in my firewall for access to AWS CodeCommit?;You will have to open outbound access to an AWS CodeCommit service endpoint on port 22 (SSH) or port 443 (HTTPS). /codecommit/faqs/;How do I encrypt my repository in AWS CodeCommit?;Repositories are automatically encrypted at rest. No customer action is required. AWS CodeCommit uses AWS Key Management Service (KMS) to encrypt repositories. When you create your first repository, an AWS-managed CodeCommit key is created under your AWS account. For details, see Encryption for AWS CodeCommit Repositories. /codecommit/faqs/;Can I enable cross-account access to my repository?;Yes. You can create an IAM role in your AWS account to delegate access to a repository to IAM users in other AWS accounts. The IAM users can then configure their AWS CLI to use AWS Security Token Service (STS) and assume the role when running commands. For details, see Assuming a Role in the AWS CLI documentation.
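The webhook answer above (an SNS topic plus a repository trigger) can also be wired up through the APIs; a minimal boto3 sketch follows, with a hypothetical topic name, endpoint URL, and repository name.

```python
import boto3

sns = boto3.client("sns")
codecommit = boto3.client("codecommit")

# Create an SNS topic and subscribe the webhook's HTTPS endpoint (placeholder URL).
topic_arn = sns.create_topic(Name="codecommit-webhook")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://example.com/webhook",  # hypothetical receiver
)

# Attach a trigger so pushes to any branch of the repository publish to the topic.
codecommit.put_repository_triggers(
    repositoryName="my-demo-repo",  # placeholder repository
    triggers=[
        {
            "name": "notify-webhook",
            "destinationArn": topic_arn,
            "branches": [],            # an empty list means all branches
            "events": ["updateReference"],
        }
    ],
)
```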
/codecommit/faqs/;Which regions does AWS CodeCommit support?;Please refer to Regional Products and Services for details of CodeCommit availability by region. /codecommit/faqs/;How much does AWS CodeCommit cost?;AWS CodeCommit costs $1 per active user per month. For every active user, your account receives an additional allowance of 10 GB-month of storage and 2,000 Git requests for that month. Unused allowance for storage and Git requests does not carry over to later months. If you need more storage or Git requests for your users, additional usage will be charged at $0.06 per GB-month and $0.001 per Git request. Users may store as many Git repositories as they would like. Your usage is calculated each month across all regions and automatically applied to your bill. Please see the pricing page for more details. /codecommit/faqs/;What is the definition of an active user in AWS CodeCommit?;An active user is any unique AWS identity (IAM user/role, federated user, or root account) that accesses AWS CodeCommit repositories during the month, either through Git requests or by using the AWS Management Console. A server accessing CodeCommit using a unique AWS identity counts as an active user. /codecommit/faqs/;Which Git requests are considered towards the monthly allowance?;A Git request includes any push or pull that transmits repository objects. The request does not count towards your Git request allowance if there is no object transfer due to local and remote branches being up-to-date. /codebuild/faqs/;What is AWS CodeBuild?;AWS CodeBuild is a fully managed continuous integration service in the cloud. CodeBuild compiles source code, runs tests, and produces packages that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. CodeBuild automatically scales up and down and processes multiple builds concurrently, so your builds don’t have to wait in a queue. You can get started quickly by using CodeBuild prepackaged build environments, or you can use custom build environments to use your own build tools. With CodeBuild, you only pay by the minute. /codebuild/faqs/;Why should I use CodeBuild?;Instead of having to set up, patch, and maintain the build server software yourself, you can use CodeBuild’s fully managed experience. You submit your build jobs to CodeBuild, and it runs them in temporary compute containers that are created fresh on every build and then discarded when finished. You don’t need to manage build server hardware or software. CodeBuild also automatically scales to meet your build volume. It immediately processes each build you submit and can run separate builds concurrently, meaning your builds are never left waiting in a queue. /codebuild/faqs/;What is the pricing for CodeBuild?;See the AWS CodeBuild pricing page for details. /codebuild/faqs/;Can I use CodeBuild to automate my release process?;Yes. CodeBuild is integrated with AWS CodePipeline. You can add a build action and set up a continuous integration and continuous delivery process that runs in the cloud. You can learn how to set up and monitor your builds from the CodePipeline console here. /codebuild/faqs/;What is a build project?;A build project is used to define how CodeBuild will run a build. It includes information such as where to get the source code, which build environment to use, the build commands to run, and where to store the build output. 
A build environment is the combination of operating system, programming language runtime, and tools used by CodeBuild to run a build. /codebuild/faqs/;How do I configure a build project?;A build project can be configured through the console or the AWS CLI. You specify the source repository location, the runtime environment, the build commands, the IAM role assumed by the container, and the compute class required to run the build (see the sketch after this group of questions). Optionally, you can specify build commands in a buildspec.yml file. /codebuild/faqs/;Which source repositories does CodeBuild support?;CodeBuild can connect to AWS CodeCommit, S3, GitHub, GitHub Enterprise, and Bitbucket to pull source code for builds. /codebuild/faqs/;Which programming frameworks does CodeBuild support?;CodeBuild provides preconfigured environments for supported versions of Java, Ruby, Python, Go, Node.js, Android, .NET Core, PHP, and Docker. You can also customize your own environment by creating a Docker image and uploading it to the Amazon EC2 Container Registry or the Docker Hub registry. You can then reference this custom image in your build project. /codebuild/faqs/;Which preconfigured Windows build runtimes does CodeBuild provide?;CodeBuild provides a preconfigured Windows build environment for .NET Core 2.0. We would like to provide a preconfigured build environment for Microsoft .NET Framework customers, many of whom already have a license to use the Microsoft proprietary libraries. However, Microsoft has been unwilling to work with us in addressing these customer requests at this time. You can customize your environment yourself to support other build targets, such as .NET Framework, by creating a Docker image and uploading it to the Amazon EC2 Container Registry or the Docker Hub registry. You can then reference this custom image in your build project. /codebuild/faqs/;What happens when a build is run?;CodeBuild will create a temporary compute container of the class defined in the build project, load it with the specified runtime environment, download the source code, execute the commands configured in the project, upload the generated artifact to an S3 bucket, and then destroy the compute container. During the build, CodeBuild will stream the build output to the service console and Amazon CloudWatch. /codebuild/faqs/;How do I set up my first build?;Sign in to the AWS Management Console, create a build project, and then run a build. For an introduction to CodeBuild, see Getting Started, which includes a step-by-step tutorial. You can also use CodeBuild Local to test and debug your build locally. /codebuild/faqs/;Can I use CodeBuild with Jenkins?;Yes. The CodeBuild Plugin for Jenkins can be used to integrate CodeBuild into Jenkins jobs. The build jobs are sent to CodeBuild, eliminating the need for provisioning and managing the Jenkins worker nodes. /codebuild/faqs/;How can I view past build results?;You can access your past build results through the console, CloudWatch, or the API. The results include outcome (success or failure), build duration, output artifact location, and log location. With the CodeBuild dashboard, you can view metrics to understand build behavior over time. The dashboard displays the number of builds attempted, succeeded, and failed, as well as build duration. You can also visit the CloudWatch console to view more detailed build metrics. To learn more about monitoring CodeBuild with CloudWatch, visit our documentation.
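For the build-project configuration described above, a minimal boto3 sketch is shown below; the project name, repository URL, S3 bucket, image, and IAM role are all hypothetical placeholders.

```python
import boto3

codebuild = boto3.client("codebuild")

# Define where the source lives, what environment to build in, and where artifacts go.
codebuild.create_project(
    name="my-sample-build",  # placeholder project name
    source={
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-demo-repo",
        "buildspec": "buildspec.yml",  # build commands kept in the repository
    },
    artifacts={"type": "S3", "location": "my-build-artifacts-bucket"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:5.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::123456789012:role/CodeBuildServiceRole",  # placeholder role
)

# Kick off a build and print its ID.
build = codebuild.start_build(projectName="my-sample-build")
print(build["build"]["id"])
```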
/codebuild/faqs/;How can I debug a past build failure?;You can debug a build by inspecting the detailed logs generated during the build run or you can use CodeBuild Local to locally test and debug your builds. /codebuild/faqs/;Why is build.general1.small not supported for .NET Core for Windows build environments?;The .NET Core for Windows build environment requires more memory and processing power than is available in the build.general1.small compute instance type due to the size of the Windows Docker base container and additional libraries. Due to this limitation, there is no free tier for the .NET Core for Windows build environment. /codebuild/faqs/;How do I receive notifications or alerts for any events in AWS CodeBuild?;"You can create notifications for events impacting your build projects. Notifications will come in the form of Amazon SNS notifications. Each notification will include a status message as well as a link to the resources whose event generated that notification. Notifications come at no additional cost, but you may be charged for other AWS services utilized by notifications, such as Amazon SNS. To learn how to get started with notifications, see the notifications user guide. Additionally, customers using AWS Chatbot can configure notifications to be sent to their Slack channels or Amazon Chime chat rooms. For more details, please check here." /codebuild/faqs/;Can I encrypt the build artifacts stored by CodeBuild?;Yes. You can specify a key stored in the AWS Key Management Service (AWS KMS) to encrypt your artifacts. /codebuild/faqs/;How does CodeBuild isolate builds that belong to other customers?;CodeBuild runs your build in fresh environments isolated from other users and discards each build environment upon completion. CodeBuild provides security and separation at the infrastructure and execution levels. /codebuild/faqs/;Can I use AWS Identity and Access Management (IAM) to manage access to CodeBuild?;Yes. You can control access to your build projects through resource-level permissions in IAM policies. /codebuild/faqs/;Which regions does CodeBuild support?;See Regional Products and Services for details. /codedeploy/faqs/;What is AWS CodeDeploy?; AWS CodeDeploy is designed for developers and administrators who need to deploy applications to any instance, including Amazon EC2 instances and instances running on-premises. It is flexible and can also be used by anyone wanting to update software or run scripts on their instances. /codedeploy/faqs/;How is AWS CodeDeploy different from other AWS deployment and management services such as AWS Elastic Beanstalk and AWS OpsWorks?; Yes. AWS CodeDeploy supports any instance that can install the CodeDeploy agent and connect to AWS public endpoints. /codedeploy/faqs/;What are the parameters that I need to specify for a deployment?;Revision - Specifies what to deploy. Deployment group - Specifies where to deploy. Deployment configuration - An optional parameter that specifies how to deploy. /codedeploy/faqs/;What are the service limits when using AWS CodeDeploy?;For information on the service limits, see Limits. To increase your service limits, submit a request through the AWS Support Center. /codedeploy/faqs/;Can I use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC)?; Yes. AWS CodeDeploy supports resource-level permissions. For each AWS CodeDeploy resource, you can specify which user has access and to which actions.
For example, you can set an IAM policy to let a user deploy a particular application but only list revisions for other applications. You can therefore prevent users from inadvertently making changes to the wrong application. For more information on using IAM with AWS CodeDeploy, see Access Permissions Reference. /codedeploy/faqs/;How do I deploy an AWS CodeDeploy application to multiple regions?;AWS CodeDeploy performs deployments with AWS resources located in the same region. To deploy an application to multiple regions, define the application in your target regions, copy the application bundle to an Amazon S3 bucket in each region, and then start the deployments using either a serial or parallel rollout across the regions. /codepipeline/faqs/;What is AWS CodePipeline?;AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software. With AWS CodePipeline, you model the full release process for building your code, deploying to pre-production environments, testing your application and releasing it to production. AWS CodePipeline then builds, tests, and deploys your application according to the defined workflow every time there is a code change. You can integrate partner tools and your own custom tools into any stage of the release process to form an end-to-end continuous delivery solution. /codepipeline/faqs/;Why should I use AWS CodePipeline?;By automating your build, test, and release processes, AWS CodePipeline enables you to increase the speed and quality of your software updates by running all new changes through a consistent set of quality checks. /codepipeline/faqs/;What is continuous delivery?;Continuous delivery is a software development practice where code changes are automatically built, tested, and prepared for a release to production. AWS CodePipeline is a service that helps you practice continuous delivery. Learn more about continuous delivery here. /codepipeline/faqs/;What is a pipeline?;A pipeline is a workflow construct that describes how software changes go through a release process. You define the workflow with a sequence of stages and actions. /codepipeline/faqs/;What is a revision?;A revision is a change made to the source location defined for your pipeline. It can include source code, build output, configuration, or data. A pipeline can have multiple revisions flowing through it at the same time. /codepipeline/faqs/;What is a stage?;A stage is a group of one or more actions. A pipeline can have two or more stages. /codepipeline/faqs/;What is an action?;An action is a task performed on a revision. Pipeline actions occur in a specified order, in serial or in parallel, as determined in the configuration of the stage. For more information, see Edit a Pipeline and Action Structure Requirements in AWS CodePipeline. /codepipeline/faqs/;What is an artifact?;When an action runs, it acts upon a file or set of files. These files are called artifacts. These artifacts can be worked upon by later actions in the pipeline. For example, a source action will output the latest version of the code as a source artifact, which the build action will read in. Following the compilation, the build action will upload the build output as another artifact, which will be read by the later deployment actions. /codepipeline/faqs/;What is a transition?;The stages in a pipeline are connected by transitions, and are represented by arrows in the AWS CodePipeline console. 
Revisions that successfully complete the actions in a stage will be automatically sent on to the next stage as indicated by the transition arrow. Transitions can be disabled or enabled between stages. /codepipeline/faqs/;How do I get started with AWS CodePipeline?;You can sign in to the AWS Management Console, create a pipeline, and start using the service. If you want an introduction to AWS CodePipeline, see Getting Started, which includes step-by-step tutorials. /codepipeline/faqs/;Can I use AWS Identity and Access Management (IAM) to manage access to AWS CodePipeline?; Yes. You can create an IAM role in the AWS account that owns the pipeline to delegate access to the pipeline and any related resources to an IAM user in another account. For a walkthrough on enabling such a cross account access, see Walkthrough: Delegating Access Across AWS Accounts For Accounts You Own Using IAM Roles and Configure Cross-Account Access to a Pipeline. /cloud9/faqs/;Who should use AWS Cloud9?;Anybody who writes code can use AWS Cloud9. Those developing applications using Node.js (JavaScript), Python, PHP, Ruby, Go, and C++ can use Cloud9 and have immediate access to a fully configured development environment in their browsers with preinstalled runtimes, package managers, and debugging tools. With Cloud9, you are no longer tied to a single development machine and can access your development environment from any internet-connected computer. /cloud9/faqs/;Which programming languages are supported?;AWS Cloud9 supports over 40 programming languages, including Node.js (JavaScript), Python, PHP, Ruby, Go, and C++. It includes features such as syntax highlighting, outline view, code hinting, code completion, application runners, and step-through debugging for many popular programming languages. To learn more about the language features supported in Cloud9, please visit the Language Support topic of our user guide. /cloud9/faqs/;What web browsers can I use to access AWS Cloud9?;AWS Cloud9 is fully supported on the recent versions of Google Chrome, Safari, Firefox, and Microsoft Edge. /cloud9/faqs/;What is the pricing for AWS Cloud9?;There is no additional charge for AWS Cloud9. If you use an Amazon EC2 instance for your AWS Cloud9 development environment, you pay only for the compute and storage resources (i.e., an EC2 instance, an EBS volume) that are used to run and store your code. You can also connect your Cloud9 development environment to an existing Linux server (e.g., on-premises server) via SSH for no additional charge. See the AWS Cloud9 pricing page for more details. /cloud9/faqs/;What are the other IDEs supported by AWS?;AWS offers a broad selection of IDE support to facilitate development of applications for AWS. To learn more about the IDE toolkits supported by AWS, visit the IDE Toolkits section on the AWS Tools page. /cloud9/faqs/;What if I see an error when working with AWS Cloud9?;You can find some of the errors you might encounter and their possible solutions in the Troubleshooting topic of our user guide. /cloud9/faqs/;How do I get started with AWS Cloud9?;You can sign in to the AWS Management Console, and select AWS Cloud9. The console will guide you through the options to select the Linux server that you want to connect with Cloud9. You can either launch a new Amazon EC2 instance (AWS Cloud9 EC2 environment) or connect your existing Linux server (AWS Cloud9 SSH environment) in a few simple steps. 
Once you’ve created a Cloud9 environment, you can access your IDE and write code in a fully configured development environment. For more information, see our documentation about setting up AWS Cloud9 and then complete a basic tutorial. /cloud9/faqs/;What is an AWS Cloud9 development environment?;An AWS Cloud9 development environment is where the project code files are stored and the tools used to develop the application are run. Each environment has unique IDE settings stored with it. This enables you to easily create and switch between many different development environments, each one customized with the tools, runtimes, files, and IDE settings required for a specific project. /cloud9/faqs/;What are the types of AWS Cloud9 development environments?;There are two types of AWS Cloud9 environments that you can use: EC2 environments, where Cloud9 launches and manages an Amazon EC2 instance for you, and SSH environments, where Cloud9 connects over SSH to a Linux server that you manage (see the sketch after this group of questions). /cloud9/faqs/;Can I use my existing Amazon EC2 or Amazon Lightsail instance with AWS Cloud9?;Yes. You can use SSH environments to connect an existing Linux-based EC2 or Lightsail instance with AWS Cloud9. /cloud9/faqs/;How do I edit my code?;The AWS Cloud9 IDE has an advanced code editor with features such as auto-completion, code folding, hinting, syntax highlighting, and line manipulation. The code editor enables you to choose from over 30 color schemes that control syntax highlighting and the UI. You can also fully customize the Cloud9 UI by editing your stylesheet. /cloud9/faqs/;What tools and packages are preinstalled on AWS Cloud9 EC2 environments?;AWS Cloud9 EC2 environments come preinstalled with commonly used development tools such as Git and Docker. They also include language runtimes and package managers for many popular programming languages such as Node.js and Python. To view the full list of tools and packages preinstalled on Cloud9 EC2 environments, please visit our documentation. /cloud9/faqs/;How do I run my code?;The AWS Cloud9 IDE has a run button in the toolbar and built-in runners for over 10 different languages that will automatically start your application with the latest code changes. For full control over how you run your software, you can also customize existing runners, create your own runners, or run your code from the terminal. /cloud9/faqs/;How do I run CLI commands?;The AWS Cloud9 IDE has a built-in terminal window that can interactively run CLI commands. You also have full administrative privileges on the instance (sudo rights), allowing you to install any additional tools required for development or to host your application. /cloud9/faqs/;How do I connect to source control management systems?;You can open the terminal window within the IDE and access your source control system using the same command line tools that you would use on your local machine. AWS Cloud9 EC2 environments come preinstalled with Git to enable easy access to your source code. /cloud9/faqs/;Which AWS Regions does AWS Cloud9 support?;See Regional Products and Services for details. /cloud9/faqs/;Where does AWS Cloud9 store my code?;Any data that you store in your AWS Cloud9 environment such as code files, packages, or dependencies is always stored in your resources. If you use an EC2 environment, your data is stored in the associated Amazon Elastic Block Store (EBS) volume that exists in your AWS account. If you use an SSH environment, your data is stored in local storage on your Linux server.
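A minimal boto3 sketch of creating an EC2 environment like the ones described above; the environment name, instance type, and subnet are placeholders, and the 30-minute auto-hibernation default mentioned in the next answer is set explicitly.

```python
import boto3

cloud9 = boto3.client("cloud9")

# Create an EC2 development environment (all values below are placeholders).
env = cloud9.create_environment_ec2(
    name="my-dev-environment",
    description="Example Cloud9 EC2 environment",
    instanceType="t3.small",
    subnetId="subnet-0123456789abcdef0",
    automaticStopTimeMinutes=30,  # stop the instance 30 minutes after the IDE is closed
)
print("Environment ID:", env["environmentId"])
```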
/cloud9/faqs/;What are the resources created by AWS Cloud9 for Amazon EC2 environments?;When you create an Amazon EC2 environment, AWS Cloud9 creates the required compute and storage resources in your AWS account. These resources include an Amazon EC2 instance, an 8-GB Amazon Elastic Block Store (EBS) volume, an Amazon EC2 security group, and an AWS CloudFormation stack. You have access to these resources through the individual AWS service consoles. When you delete your environment, Cloud9 automatically deletes these resources for you. /cloud9/faqs/;Does AWS Cloud9 manage resources created in AWS Cloud9 for Amazon EC2 environments?;In addition to creating and deleting your AWS Cloud9 EC2 environment resources on your behalf, Cloud9 can also automatically start and stop the EC2 instances to reduce your costs. You are responsible for all other administrative tasks on these resources, such as installing software patches on your EC2 instances and performing backup of your EBS volumes. /cloud9/faqs/;Are my Amazon EC2 instances in AWS Cloud9 environments always running?;No. AWS Cloud9 provides a default auto-hibernation setting of 30 minutes for your Amazon EC2 instances created through Cloud9. With this setting, your EC2 instances automatically stop 30 minutes after you close the IDE and restart only when you reopen the IDE. As a result, you typically only incur EC2 instance charges for when you are actively working. When your instance requires a restart, you lose any active terminal sessions in the IDE and can experience some wait time while opening your IDE. Depending on your use case, you can configure the auto-hibernation setting and even elect to keep your EC2 instance “always on”. /cloud9/faqs/;Can I change my Amazon EC2 instance type for an existing EC2 environment?;Yes. You can change the Amazon EC2 instance type that you initially selected with your AWS Cloud9 environment. To do this, navigate to the EC2 console, locate your instance, and follow the instructions in the Amazon EC2 documentation. /cloud9/faqs/;How do I share my AWS Cloud9 environment with other people?;You can share your AWS Cloud9 environment by clicking the Share button in the top right of your IDE. You are prompted for the AWS Identity and Access Management (IAM) user name and the desired access levels for the person you want to collaborate with. Once you enter these details, the environment is available to both participants for real-time collaboration on IDE features and command line sessions. /cloud9/faqs/;Can I share an AWS Cloud9 environment with IAM users in a different AWS account?;No. AWS Cloud9 environments can currently be shared only with the IAM users within the same AWS account. If you want to invite a new user who doesn’t have IAM user access, you can follow the link to create a new IAM user in the Share dialog box. /cloud9/faqs/;How can I develop serverless applications for AWS Lambda using AWS Cloud9?;You can access the built-in tools for AWS Lambda from the AWS Resources panel in the IDE. You can use these tools to import existing or create new Lambda functions in Node.js and Python. You can easily run, preview, debug, and deploy these functions directly from the IDE. AWS Cloud9 also provides support for the AWS Serverless Application Model (AWS SAM) framework. This enables you to easily manage multiple Lambda functions and serverless resources in your application.
If you provisioned your project using AWS CodeStar, any changes committed to the application will be built and deployed directly to Lambda on git push. /cloud9/faqs/;Can I locally test my AWS Lambda functions using AWS Cloud9?;Yes. AWS Cloud9 can simulate the AWS Lambda execution environment for Node.js and Python to run your functions locally in the IDE. This enables you to test your serverless applications with step-through debugging without uploading your application changes to Lambda. Once tested, you can also deploy your application changes to Lambda directly from the IDE. /cloud9/faqs/;How do I use AWS Cloud9 with AWS CodeStar?;You can launch AWS Cloud9 environments directly from AWS CodeStar and immediately start editing and committing your CodeStar project code in the Cloud9 IDE. Any code changes that you commit to your project source repository from Cloud9 are automatically built and deployed using the tools provisioned by CodeStar. To learn more about using this integration, please visit the AWS CodeStar documentation. /xray/faqs/;What is AWS X-Ray?; Currently, if you build and run distributed applications, you have to rely on a per-service or per-resource process to track requests for your application as they travel across the various components that make up your application. This problem is further complicated by the varying log formats and storage mediums across frameworks, services, and resources your application runs on or uses. This makes it difficult to correlate the various pieces of data and create an end-to-end picture of a request from the time it originates at the end-user or service to when a response is returned by your application. X-Ray provides a user-centric model, instead of a service-centric or resource-centric model, for collecting data related to requests made to your application. This model enables you to create a user-centric picture of requests as they travel across services and resources. By correlating and aggregating data on your behalf, X-Ray enables you to focus on improving the experience for end-users of your application. /xray/faqs/;Why should I use X-Ray?; X-Ray makes it easy for you to: /xray/faqs/;What can I do with X-Ray?;Create a service map – By tracking requests made to your applications, X-Ray can create a map of services used by your application. This provides you with a view of connections among services in your application, and enables you to create a dependency tree, detect latency or errors when working across AWS Availability Zones or Regions, zero in on services not operating as expected, and so on. Identify errors and bugs – X-Ray can automatically highlight bugs or errors in your application code by analyzing the response code for each request made to your application. This enables easy debugging of application code without requiring you to reproduce the bug or error. Build your own analysis and visualization apps – X-Ray provides a set of query APIs you can use to build your own analysis and visualization apps that use the data that X-Ray records. /xray/faqs/;What is a trace?; An X-Ray segment encapsulates all the data points for a single component (for example, authorization service) of the distributed application. Segments include system-defined and user-defined data in the form of annotations and are composed of one or more sub-segments that represent remote calls made from the service. 
For example, when your application makes a call to a database in response to a request, it creates a segment for that request with a sub-segment representing the database call and its result. The sub-segment can contain data such as the query, table used, timestamp, and error status. /xray/faqs/;What is a segment?; An X-Ray annotation is system-defined or user-defined data associated with a segment. A segment can contain multiple annotations. System-defined annotations include data added to the segment by AWS services, whereas user-defined annotations are metadata added to a segment by a developer. For example, a segment created by your application can automatically be injected with region data for AWS service calls, whereas you might choose to add region data yourself for calls made to non-AWS services. /xray/faqs/;What is an annotation?; X-Ray errors are system annotations associated with a segment for a call that results in an error response. The error includes the error message, stack trace, and any additional information (for example, version or commit ID) to associate the error with a source file. /xray/faqs/;What are errors?; To provide a performant and cost-effective experience, X-Ray does not collect data for every request that is sent to an application. Instead, it collects data for a statistically significant number of requests. X-Ray should not be used as an audit or compliance tool because it does not guarantee data completeness. /xray/faqs/;What is sampling?; The X-Ray agent collects data from log files and sends them to the X-Ray service for aggregation, analysis, and storage. The agent makes it easier for you to send data to the X-Ray service, instead of using the APIs directly, and is available for Amazon Linux AMI, Red Hat Enterprise Linux (RHEL), and Windows Server 2012 R2 or later operating systems. /xray/faqs/;How do I get started with X-Ray?; X-Ray can be used with distributed applications of any size to trace and debug both synchronous requests and asynchronous events. For example, X-Ray can be used to trace web requests made to a web application or asynchronous events that utilize Amazon SQS queues. /xray/faqs/;What types of applications can I use with X-Ray?; You can use X-Ray with applications running on EC2, ECS, Lambda, Amazon SQS, Amazon SNS, and Elastic Beanstalk. In addition, the X-Ray SDK automatically captures metadata for API calls made to AWS services using the AWS SDK. The X-Ray SDK also provides add-ons for MySQL and PostgreSQL drivers. /xray/faqs/;Which AWS services can I use with X-Ray?; If you’re using Elastic Beanstalk, you will need to include the language-specific X-Ray libraries in your application code. For applications running on other AWS services, such as EC2 or ECS, you will need to install the X-Ray agent and instrument your application code. /xray/faqs/;What code changes do I need to make to my application to use X-Ray?; Yes, X-Ray provides a set of APIs for ingesting request data, querying traces, and configuring the service. You can use the X-Ray API to build analysis and visualization applications in addition to those provided by X-Ray. /xray/faqs/;In which regions is X-Ray available?; Yes, you can use X-Ray to track requests flowing through applications or services across multiple regions. X-Ray data is stored locally in the Region where it is processed, but with enough information to enable client applications to combine the data and provide a global view of traces. 
Region annotation for AWS services will be added automatically; however, customers will need to instrument custom services to add the regional annotation to make use of the cross-region support. /xray/faqs/;How long does it take for trace data to be available in X-Ray?; X-Ray stores trace data for the last 30 days. This enables you to query trace data going back 30 days. /xray/faqs/;How far back can I query the trace data? How long does X-Ray store trace data for?; X-Ray makes the best effort to present complete trace information. However, in some situations (connectivity issues, delay in receiving segments, and so on) it is possible that trace information provided by the X-Ray APIs will be partial. In those situations, X-Ray tags traces as incomplete or partial. /xray/faqs/;Why do I sometimes see partial traces?; Yes, the X-Ray agent can assume a role to publish data into an account different from the one in which it is running. This enables you to publish data from various components of your application into a central account. /cloudwatch/faqs/;What is Amazon CloudWatch?;Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly. /cloudwatch/faqs/;What can I use to access CloudWatch?;Amazon CloudWatch can be accessed via API, command-line interface, AWS SDKs, and the AWS Management Console. /cloudwatch/faqs/;Which operating systems does Amazon CloudWatch support?;Amazon CloudWatch receives and provides metrics for all Amazon EC2 instances and should work with any operating system currently supported by the Amazon EC2 service. /cloudwatch/faqs/;What access management policies can I implement for CloudWatch?;Amazon CloudWatch integrates with AWS Identity and Access Management (IAM) so that you can specify which CloudWatch actions a user in your AWS Account can perform. For example, you could create an IAM policy that gives only certain users in your organization permission to use GetMetricStatistics. They could then use the action to retrieve data about your cloud resources. /cloudwatch/faqs/;What is Amazon CloudWatch Logs?;Amazon CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing system, application and custom log files. /cloudwatch/faqs/;What kinds of things can I do with CloudWatch Logs?;CloudWatch Logs is capable of monitoring and storing your logs to help you better understand and operate your systems and applications. You can use CloudWatch Logs in a number of ways. /cloudwatch/faqs/;What platforms does the CloudWatch Logs Agent support?;The CloudWatch Logs Agent is supported on Amazon Linux, Ubuntu, CentOS, Red Hat Enterprise Linux, and Windows. This agent will support the ability to monitor individual log files on the host. /cloudwatch/faqs/;Does the CloudWatch Logs Agent support IAM roles?;Yes. The CloudWatch Logs Agent is integrated with Identity and Access Management (IAM) and includes support for both access keys and IAM roles. 
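Relating to the X-Ray instrumentation questions above, this is a hedged sketch of instrumenting a Python (Flask) application with the X-Ray SDK for Python; the service name and route are illustrative, and it assumes the aws-xray-sdk package plus a running X-Ray agent/daemon to receive segments.

    from flask import Flask
    from aws_xray_sdk.core import xray_recorder, patch_all
    from aws_xray_sdk.ext.flask.middleware import XRayMiddleware

    app = Flask(__name__)

    xray_recorder.configure(service="my-web-app")   # segment name shown in the service map (assumed name)
    XRayMiddleware(app, xray_recorder)              # create a segment for each incoming HTTP request
    patch_all()                                     # record AWS SDK, requests, and database calls as sub-segments

    @app.route("/")
    def index():
        return "hello"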
/cloudwatch/faqs/;What is Amazon CloudWatch Logs Insights?;Amazon CloudWatch Logs Insights is an interactive, pay-as-you-go, and integrated log analytics capability for CloudWatch Logs. It helps developers, operators, and systems engineers understand, improve, and debug their applications by allowing them to search and visualize their logs. Logs Insights is fully integrated with CloudWatch, enabling you to manage, explore, and analyze your logs. You can also leverage CloudWatch Metrics, Alarms and Dashboards with Logs to get full operational visibility into your applications. This empowers you to understand your applications, make improvements, and find and fix problems quickly, so that you can continue to innovate rapidly. You can write queries with aggregations, filters, and regular expressions to derive actionable insights from your logs. You can also visualize timeseries data, drill down into individual log events, and export your query results to CloudWatch Dashboards. /cloudwatch/faqs/;How can I get started with CloudWatch Logs Insights?;You can immediately start using Logs Insights to run queries on all your logs being sent to CloudWatch Logs. There is no setup required and no infrastructure to manage. You can access Logs Insights from the AWS Management Console or programmatically through your applications by using the AWS SDK. /cloudwatch/faqs/;What is Amazon CloudWatch Anomaly Detection?;Amazon CloudWatch Anomaly Detection applies machine-learning algorithms to continuously analyze individual time series from your systems and applications, determine a normal baseline, and surface anomalies with minimal user intervention. It allows you to create alarms that auto-adjust thresholds based on natural metric patterns, such as time of day, day of week, seasonality, or changing trends. You can also visualize metrics with anomaly detection bands on dashboards to monitor, isolate, and troubleshoot unexpected changes in your metrics. /cloudwatch/faqs/;How can I get started with Amazon CloudWatch Anomaly Detection?;It is easy to get started with Anomaly Detection. In the CloudWatch console, go to Alarms in the navigation pane to create an alarm, or start with Metrics to overlay the metric’s expected values onto the graph as a band. You can also enable Anomaly Detection using the AWS CLI, AWS SDKs, or AWS CloudFormation templates. To learn more, please visit the CloudWatch Anomaly Detection documentation and pricing pages. /cloudwatch/faqs/;What is Amazon CloudWatch Contributor Insights?;Amazon CloudWatch now includes Contributor Insights, which analyzes time-series data to provide a view of the top contributors influencing system performance. Once set up, Contributor Insights runs continuously without needing additional user intervention. This helps developers and operators more quickly isolate, diagnose, and remediate issues during an operational event. /cloudwatch/faqs/;How can I get started with CloudWatch Contributor Insights?;In the CloudWatch console, go to Contributor Insights in the navigation pane to create a Contributor Insights rule. You can also enable Contributor Insights using the AWS CLI, AWS SDKs, or AWS CloudFormation templates. Contributor Insights is available in all commercial AWS Regions. To learn more, please visit the documentation on CloudWatch Contributor Insights. 
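A minimal sketch, assuming boto3 and an illustrative EC2 metric, of enabling Anomaly Detection on a single metric as described above; parameter shapes can vary by SDK version, and the instance ID is hypothetical.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Train an anomaly detection model on the Average statistic of one metric.
    cloudwatch.put_anomaly_detector(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
        Stat="Average",
    )

The resulting model can then back an anomaly-detection alarm or be shown as an expected-value band on a dashboard graph, as the FAQ above describes.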
/cloudwatch/faqs/;What is Amazon CloudWatch ServiceLens?;Amazon CloudWatch ServiceLens is a feature that enables you to visualize and analyze the health, performance, and availability of your applications in a single place. CloudWatch ServiceLens ties together CloudWatch metrics and logs as well as traces from AWS X-Ray to give you a complete view of your applications and their dependencies. This enables you to quickly pinpoint performance bottlenecks, isolate root causes of application issues, and determine the users impacted. CloudWatch ServiceLens enables you to gain visibility into your applications in three main areas: Infrastructure monitoring (using metrics and logs to understand the resources supporting your applications), transaction monitoring (using traces to understand dependencies between your resources), and end user monitoring (using canaries to monitor your endpoints and notify you when your end user experience has degraded). /cloudwatch/faqs/;How can I get started with CloudWatch ServiceLens?;If you already use AWS X-Ray, you can access CloudWatch ServiceLens on the CloudWatch console by default. If you do not yet use AWS X-Ray, you can get started by enabling AWS X-Ray on your applications using the X-Ray SDK. Amazon CloudWatch ServiceLens is available in all public AWS Regions where AWS X-Ray is available. To learn more, visit the documentation on Amazon CloudWatch ServiceLens. /cloudwatch/faqs/;What is Amazon CloudWatch Synthetics?;Amazon CloudWatch Synthetics allows you to monitor application endpoints more easily. It runs tests on your endpoints every minute, 24x7, and alerts you as soon as your application endpoints don’t behave as expected. These tests can be customized to check for availability, latency, transactions, broken or dead links, step-by-step task completions, page load errors, load latencies for UI assets, complex wizard flows, or checkout flows in your applications. You can also use CloudWatch Synthetics to isolate alarming application endpoints and map them back to underlying infrastructure issues to reduce mean time to resolution. /cloudwatch/faqs/;How can I get started with CloudWatch Synthetics?;It's easy to get started with CloudWatch Synthetics. You can write your first passing canary in minutes. To learn more, visit the documentation on Amazon CloudWatch Synthetics. /cloudwatch/faqs/;How much does Amazon CloudWatch cost?;Please see our pricing page for the latest information. /cloudwatch/faqs/;Does the Amazon CloudWatch monitoring charge change depending on which type of Amazon EC2 instance I monitor?;All Amazon EC2 instance types automatically send key health and performance metrics to CloudWatch at no cost. If you enable EC2 Detailed Monitoring, you will be charged for custom metrics based on the number of metrics sent to CloudWatch for the instance. The number of metrics sent for an instance is dependent on the instance type - see available CloudWatch Metrics for Your Instances for details. /cloudwatch/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. Learn more. /cloudwatch/faqs/;Why does my AWS monthly bill for CloudWatch appear different between July 2017 and previous months?;Prior to July 2017, charges for CloudWatch were split under two different sections in your AWS bill and Cost and Usage Reports. 
For historical reasons, charges for CloudWatch Alarms, CloudWatch Metrics, and CloudWatch API usage were reported under the “Elastic Compute Cloud” (EC2) detail section of your bill, while charges for CloudWatch Logs and CloudWatch Dashboards were reported under the “CloudWatch” detail section. To help consolidate and simplify your monthly AWS CloudWatch usage and billing, we moved the charges for your CloudWatch Metrics, Alarms, and API usage from the “EC2” section of your bill to the “CloudWatch” section, effectively bringing together all of your CloudWatch monitoring charges under the “CloudWatch” section. Note that this has no impact on your total AWS bill amount. Your bill and Cost and Usage Reports will now simply display charges for CloudWatch under a single section. /cloudwatch/faqs/;How is CloudWatch Logs Insights priced?;Logs Insights is priced per query and charges based on the amount of ingested log data scanned by the query. For additional details about pricing, see CloudWatch pricing. /cloudwatch/faqs/;Does CloudWatch Logs Insights charge me for cancelled queries?;Yes, if you cancel a query manually, you are charged for the amount of ingested log data scanned up to the point at which you cancelled the query. /cloudwatch/faqs/;Does CloudWatch Logs Insights charge me for failed queries?;No, you are not charged for failed queries. /cloudwatch/faqs/;What is cross-account observability in CloudWatch?;Cross-account observability in CloudWatch lets you monitor and troubleshoot applications that span across multiple accounts within a Region. Using cross-account observability, you can seamlessly search, visualize, and analyze your metrics, logs, and traces, without having to worry about account boundaries. You can start with an aggregated cross-account view of your application to visually identify the resources exhibiting errors and dive deep into correlated traces, metrics, and logs to root cause the issue. The seamless cross-account data access and navigation enabled by cross-account monitoring helps you reduce the manual effort required to troubleshoot issues and save valuable time in resolution. Cross-account observability is an addition to CloudWatch’s unified observability capability. /cloudwatch/faqs/;How do I get started with cross-account observability?;Cross-account observability introduces two new account concepts. A “monitoring account” is a central AWS account that can view and interact with observability data generated across other accounts. A “source account” is an individual AWS account that generates observability data for the resources that reside in it. Once you identify your monitoring and source accounts, you complete your cross-account monitoring configuration by selecting which telemetry data to share with your monitoring account. Within minutes, you can easily set up central monitoring accounts from which you have a complete view of the health and performance of your applications deployed across many related accounts or an entire AWS organization. With cross-account observability in CloudWatch, you can get a bird's-eye view of your cross-application dependencies that can impact service availability, and you can pinpoint issues proactively and troubleshoot with reduced mean time to resolution. 
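A hedged sketch of the monitoring-account and source-account setup described above, using the Observability Access Manager (OAM) API via boto3; the account ID, sink name, label template, and policy contents are illustrative assumptions, not values from the FAQ.

    import json
    import boto3

    # In the monitoring account: create a sink and allow a source account to link to it.
    oam_monitoring = boto3.client("oam")
    sink = oam_monitoring.create_sink(Name="central-monitoring-sink")
    oam_monitoring.put_sink_policy(
        SinkIdentifier=sink["Arn"],
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": "111122223333"},   # hypothetical source account
                "Action": ["oam:CreateLink", "oam:UpdateLink"],
                "Resource": "*",
                "Condition": {"ForAllValues:StringEquals": {
                    "oam:ResourceTypes": ["AWS::CloudWatch::Metric", "AWS::Logs::LogGroup", "AWS::XRay::Trace"]
                }},
            }],
        }),
    )

    # In the source account (separate credentials): link the selected telemetry types to the sink.
    oam_source = boto3.client("oam")
    oam_source.create_link(
        LabelTemplate="$AccountName",
        ResourceTypes=["AWS::CloudWatch::Metric", "AWS::Logs::LogGroup", "AWS::XRay::Trace"],
        SinkIdentifier=sink["Arn"],
    )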
/cloudwatch/faqs/;What CloudWatch monitoring features can I use across multiple AWS accounts?;Using cross-account observability, you can search for log groups stored across multiple accounts from a central view, run cross-account Logs Insights queries, and create Contributor Insights rules across accounts to identify top-N contributors generating log entries. You can use metrics search to visualize metrics from many accounts in a consolidated view, create alarms that evaluate metrics from other accounts to be notified of anomalies and trending issues, and visualize them on centralized dashboards. You can also use this capability to set up a single, cross-account metric stream to include metrics that span multiple AWS accounts in an AWS Region. With cross-account observability, you can also view an interactive map of your cross-account applications using ServiceLens with one-step drill downs to relevant metrics, logs, and traces. /cloudwatch/faqs/;Can I still use CloudWatch cross-account, cross-Region features on my console?;Both cross-account monitoring in CloudWatch and the cross-account, cross-Region features will be available on the CloudWatch console. The cross-account, cross-Region drop-down menus will be removed from the console when you set up cross-account observability in CloudWatch. Note that the cross-account observability experience in CloudWatch is available in one Region at a time. The cross-account, cross-Region feature allows access to organization-wide telemetry through IAM roles. Cross-account observability in CloudWatch uses the Observability Access Manager API to define access policies. Learn more in our documentation. /cloudwatch/faqs/;What can I measure with Amazon CloudWatch Metrics?;Amazon CloudWatch allows you to monitor AWS cloud resources and the applications you run on AWS. Metrics are provided automatically for a number of AWS products and services, including Amazon EC2 instances, EBS volumes, Elastic Load Balancers, Auto Scaling groups, EMR job flows, RDS DB instances, DynamoDB tables, ElastiCache clusters, Redshift clusters, OpsWorks stacks, Route 53 health checks, SNS topics, SQS queues, SWF workflows, and Storage Gateways. You can also monitor custom metrics generated by your own applications and services. /cloudwatch/faqs/;What is the retention period of all metrics?;You can publish and store custom metrics down to one-second resolution. Extended retention of metrics was launched on November 1, 2016, and enabled storage of all metrics for customers from the previous 14 days to 15 months. CloudWatch retains metric data as follows: /cloudwatch/faqs/;What is the minimum resolution for the data that Amazon CloudWatch receives and aggregates?;The minimum resolution supported by CloudWatch is one-second data points, which is a high-resolution metric, or you can store metrics at one-minute granularity. Sometimes metrics are received by CloudWatch at varying intervals, such as three-minute or five-minute intervals. If you do not specify that a metric is high resolution, by setting the StorageResolution field in the PutMetricData API request, then by default CloudWatch will aggregate and store the metrics at one-minute resolution. /cloudwatch/faqs/;Can I delete any metrics?;CloudWatch does not support metric deletion. Metrics expire based on the retention schedules described above. /cloudwatch/faqs/;Will I lose the metrics data if I disable monitoring for an Amazon EC2 instance?;No. 
You can always retrieve metrics data for any Amazon EC2 instance based on the retention schedules described above. However, the CloudWatch console limits the search of metrics to two weeks after a metric is last ingested to ensure that the most up-to-date instances are shown in your namespace. /cloudwatch/faqs/;Can I access the metrics data for a terminated Amazon EC2 instance or a deleted Elastic Load Balancer?;Yes. Amazon CloudWatch stores metrics for terminated Amazon EC2 instances or deleted Elastic Load Balancers for 15 months. /cloudwatch/faqs/;Why does the graphing of the same time window look different when I view the metrics in five-minute and one-minute periods?;If you view the same time window in a 5 minute period versus a 1 minute period, you may see that data points are displayed in different places on the graph. For the period you specify in your graph, Amazon CloudWatch finds all the available data points and calculates a single, aggregate point to represent the entire period. In the case of a 5 minute period, the single data point is placed at the beginning of the 5 minute time window. In the case of a 1 minute period, the single data point is placed at the 1 minute mark. We recommend using a one minute period for troubleshooting and other activities that require the most precise graphing of time periods. /cloudwatch/faqs/;What is a Custom Metric?;You can use Amazon CloudWatch to monitor data produced by your own applications, scripts, and services. A custom metric is any metric you provide to Amazon CloudWatch. For example, you can use custom metrics as a way to monitor the time to load a web page, request error rates, number of processes or threads on your instance, or amount of work performed by your application. You can get started with custom metrics by using the PutMetricData API, our sample monitoring scripts for Windows and Linux, the CloudWatch collectd plugin, as well as a number of applications and tools offered by AWS partners. /cloudwatch/faqs/;What resolution can I get from a Custom Metric?;A custom metric can be one of the following: /cloudwatch/faqs/;What metrics are available at high resolution?;Currently, only custom metrics that you publish to CloudWatch are available at high resolution. High-resolution custom metrics are stored in CloudWatch at one-second resolution. High resolution is defined by the StorageResolution parameter in the PutMetricData API request, with a value of one, and is not a required field. If you do not specify a value for the optional StorageResolution field, then CloudWatch will store the custom metric at one-minute resolution by default. /cloudwatch/faqs/;Are high-resolution custom metrics priced differently than regular custom metrics?;No, high-resolution custom metrics are priced in the same manner as standard one-minute custom metrics. /cloudwatch/faqs/;When would I use a Custom Metric over having my program emit a log to CloudWatch Logs?;You can monitor your own data using custom metrics, CloudWatch Logs, or both. You may want to use custom metrics if your data is not already produced in log format, for example operating system processes or performance measurements. Or, you may want to write your own application or script, or one provided by an AWS partner. If you want to store and save individual measurements along with additional detail, you may want to use CloudWatch Logs. 
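A minimal sketch, assuming boto3, of publishing a custom metric as described above; the namespace, metric name, and dimension are illustrative. Setting StorageResolution to 1 stores it as a high-resolution (one-second) metric; omitting the field defaults to one-minute resolution.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_data(
        Namespace="MyApp",                       # hypothetical custom namespace
        MetricData=[{
            "MetricName": "PageLoadTime",
            "Dimensions": [{"Name": "Page", "Value": "home"}],
            "Value": 0.42,
            "Unit": "Seconds",
            "StorageResolution": 1,              # high-resolution custom metric
        }],
    )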
/cloudwatch/faqs/;What statistics can I view and graph in CloudWatch?;You can retrieve, graph, and set alarms on the following statistical values for Amazon CloudWatch metrics: Average, Sum, Minimum, Maximum, and Sample Count. Statistics can be computed for any time periods between 60 seconds and one day. For high-resolution custom metrics, statistics can be computed for time periods between one second and three hours. /cloudwatch/faqs/;What is CloudWatch Application Insights for .NET and SQL Server?;Amazon CloudWatch Application Insights for .NET and SQL Server is a capability that you can use to easily monitor your .NET and SQL Server applications. It helps identify and set up key metrics and logs across your application resources and technology stack, i.e. database, web (IIS) and application servers, OS, load balancers, queues, etc. It constantly monitors this telemetry data to detect and correlate anomalies and errors, and notifies you of any problems in your application. To aid in troubleshooting, it creates automatic dashboards to visualize the problems it detects, which include correlated metric anomalies and log errors, along with additional insights to point you to their potential root cause. /cloudwatch/faqs/;What are the benefits of using CloudWatch Application Insights for .NET and SQL Server?;Automatically recognize application metrics and logs: It scans your application resources, provides a list of recommended metrics and logs to monitor, and sets them up automatically, making it easier to set up monitoring for your applications. Intelligent problem detection: It uses built-in rules and machine learning algorithms to dynamically monitor and analyze symptoms of a problem across your application stack and detect application problems. It helps you reduce the overhead of dealing with individual metric spikes, events, or log exceptions, and instead get notified of real problems, along with contextual information about these problems. Faster troubleshooting: It assesses the detected problems to give you insights on them, such as the possible root cause of the detected problem and the list of metrics and logs impacted by the problem. You can provide feedback on generated insights to make the problem detection engine specific to your use case. /cloudwatch/faqs/;How do I get started with monitoring using CloudWatch Application Insights for .NET and SQL Server?;On-board application: Specify the application you want to monitor by choosing the AWS Resource Group associated with it. /cloudwatch/faqs/;What is CloudWatch Metric Streams?;CloudWatch Metric Streams is a feature that enables you to continuously stream CloudWatch metrics to a destination of your choice with minimal setup and configuration. It is a fully managed solution, and doesn’t require you to write any code or maintain any infrastructure. With a few clicks, you can configure a metric stream to destinations like Amazon Simple Storage Service (S3). You can also send your metrics to a selection of third-party service providers to keep your operational dashboards up to date. /cloudwatch/faqs/;Why should I use CloudWatch Metric Streams?;Metric Streams provides an alternative way of obtaining metrics data from CloudWatch without the need to poll APIs. You can create a metric stream with just a few clicks, and your metrics data will start to flow to your destination. You can easily direct your metrics to your data lake on AWS such as on Amazon S3, and start analyzing usage or performance with tools such as Amazon Athena. 
Metric Streams also makes it easier to send CloudWatch metrics to popular third-party service providers using an Amazon Kinesis Data Firehose HTTP endpoint. You can create a continuous, scalable stream including the most up-to-date CloudWatch metrics data to power dashboards, alarms, and other tools that rely on accurate and timely metric data. /cloudwatch/faqs/;How can I create and manage CloudWatch Metric Streams?;You can create and manage Metric Streams through the CloudWatch Console or programmatically through the CloudWatch API, AWS SDK, AWS CLI, or AWS CloudFormation to provision and configure Metric Streams. You can also use AWS CloudFormation templates provided by third-party service providers to set up Metric Streams delivery to destinations outside AWS. For more information, see the documentation on CloudWatch Metric Streams. /cloudwatch/faqs/;Can I manage metrics to be included in my CloudWatch Metric Stream?;Yes. You can choose to send all metrics by default, or create filter rules to include or exclude groups of metrics defined by namespace (e.g., AWS/EC2). Metric Streams automatically detects new metrics matching filter rules and includes metric updates in the stream. When resources are terminated, Metric Streams will automatically stop sending updates for the inactive metrics. /cloudwatch/faqs/;What formats does CloudWatch Metric Streams support?;Metric Streams can output in either OpenTelemetry or JSON format. You can select the output format when creating or managing metric streams. /cloudwatch/faqs/;Can I monitor the cost and volume of data delivered by CloudWatch Metric Streams?;Yes. You can visit the monitoring section of the Metric Streams console page. You will see automatic dashboards for the volume of metric updates over time. These metrics are also available under the AWS/CloudWatch namespace and can be used to create alarms to send notifications in the case of an unusual spike in volume. /cloudwatch/faqs/;What log monitoring does Amazon CloudWatch provide?;CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing system, application and custom log files. /cloudwatch/faqs/;What are Amazon CloudWatch Vended Logs?;Amazon CloudWatch Vended logs are logs that are natively published by AWS services on behalf of the customer. VPC Flow Logs is the first Vended log type that will benefit from this tiered model. However, more AWS service log types will be added as Vended Logs in the future. /cloudwatch/faqs/;Is CloudWatch Logs available in all regions?;Please refer to Regional Products and Services for details of CloudWatch Logs service availability by region. /cloudwatch/faqs/;How much does CloudWatch Logs cost?;Please see our pricing page for the latest information. /cloudwatch/faqs/;What kinds of things can I do with my logs and Amazon CloudWatch?;CloudWatch Logs is capable of monitoring and storing your logs to help you better understand and operate your systems and applications. When you use CloudWatch Logs with your logs, your existing log data is used for monitoring, so no code changes are required. 
Here are two examples of what you can do with Amazon CloudWatch and your logs: /cloudwatch/faqs/;What types of data can I send to Amazon CloudWatch Logs from my EC2 instances running Microsoft SQL Server and Microsoft Windows Server?;You can configure the EC2Config service to send a variety of data and log files to CloudWatch including: custom text logs, Event (Application, Custom, Security, System) logs, Event Tracing (ETW) logs, and Performance Counter (PCW) data. Learn more about the EC2Config service here. /cloudwatch/faqs/;How frequently does the CloudWatch Logs Agent send data?;The CloudWatch Logs Agent will send log data every five seconds by default and is configurable by the user. /cloudwatch/faqs/;What log formats does CloudWatch Logs support?;CloudWatch Logs can ingest, aggregate and monitor any text-based common log data or JSON-formatted logs. /cloudwatch/faqs/;What if I configure the CloudWatch Logs Agent to send non-text log data?;The CloudWatch Logs Agent will record an error in the event it has been configured to report non-text log data. This error is recorded in /var/logs/awslogs.log. /cloudwatch/faqs/;How do I start monitoring my logs with CloudWatch Logs?;You can monitor log events as they are sent to CloudWatch Logs by creating Metric Filters. Metric Filters turn log data into Amazon CloudWatch Metrics for graphing or alarming. Metric Filters can be created in the Console or the CLI. Metric Filters search for and match terms, phrases or values in your log events. When a Metric Filter finds one of the terms, phrases or values in your log events, it counts it in an Amazon CloudWatch Metric that you choose. For example, you can create a Metric Filter to search for and count the occurrence of the word “Error” in your log events. Metric Filters can also extract values from space-delimited log events, such as the latency of web requests. You can also use conditional operators and wildcards to create exact matches. The Amazon CloudWatch Console can help you test your patterns before creating Metric Filters. /cloudwatch/faqs/;What is the syntax of Metric Filter patterns?;A Metric Filter pattern can contain search terms or a specification of your common log or JSON event format. /cloudwatch/faqs/;How do I know that a Metric Filter pattern I specified will match my log events?;CloudWatch Logs lets you test the Metric Filter patterns you want before you create a Metric Filter. You can test your patterns against your own log data that is already in CloudWatch Logs or you can supply your own log events to test. Testing your pattern will show you which log events matched the Metric Filter pattern and, if extracting values, what the extracted value is in the test data. Metric Filter testing is available for use in the console and the CLI. /cloudwatch/faqs/;Can I use regular expressions with my log data?;Amazon CloudWatch Metric Filters do not support regular expressions. To process your log data with regular expressions, consider using Amazon Kinesis and connect the stream with a regular expression processing engine. /cloudwatch/faqs/;How do I retrieve my log data?;You can retrieve any of your log data using the CloudWatch Logs console or through the CloudWatch Logs CLI. Log events are retrieved based on the Log Group, Log Stream and time with which they are associated. The CloudWatch Logs API for retrieving log events is GetLogEvents. 
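A minimal sketch, assuming boto3 and a hypothetical log group, of the Metric Filter described above: it counts occurrences of the word "Error" in log events and publishes the count as a CloudWatch metric.

    import boto3

    logs = boto3.client("logs")

    logs.put_metric_filter(
        logGroupName="/my-app/application",      # hypothetical log group
        filterName="ErrorCount",
        filterPattern="Error",                   # match log events containing "Error"
        metricTransformations=[{
            "metricName": "ErrorCount",
            "metricNamespace": "MyApp",          # hypothetical custom namespace
            "metricValue": "1",                  # increment by 1 per matching event
        }],
    )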
/cloudwatch/faqs/;How do I search my logs?;You can use the CLI to retrieve your log events and search through them using command line grep or similar search functions. /cloudwatch/faqs/;How long does CloudWatch Logs store my log data?;You can store your log data in CloudWatch Logs for as long as you want. By default, CloudWatch Logs will store your log data indefinitely. You can change the retention for each Log Group at any time. /cloudwatch/faqs/;What permissions do I need to access Logs Insights?;To access Logs Insights, your IAM policy must include permissions for logs:DescribeLogGroups and logs:FilterLogEvents. /cloudwatch/faqs/;What logs can I query with CloudWatch Logs Insights?;"You can use Logs Insights to query all logs being sent to CloudWatch. Logs Insights automatically discovers the log fields from logs from AWS services such as Lambda, CloudTrail, Route 53, and VPC Flow Logs, and from any application log that generates log events in JSON format. Additionally, for all log types, it generates three system fields, @message, @logStream, and @timestamp, for all logs sent to CloudWatch. @message contains the raw unparsed log event, @logStream contains the name of the source that generated the log event, and @timestamp contains the time at which the log event was added to CloudWatch." /cloudwatch/faqs/;Which query language does CloudWatch Logs Insights support?;Logs Insights introduces a new purpose-built query language for log processing. The query language supports a few simple but powerful query commands. You can write commands to retrieve one or more log fields, find log events that match one or more search criteria, aggregate your log data, and extract ephemeral fields from your text-based logs. The query language is easy to learn, and Logs Insights offers in-product help in the form of sample queries, command descriptions, and query auto-completion to help you get started. You can find additional details about the query language here. /cloudwatch/faqs/;What are the service limits for CloudWatch Logs Insights?;The service limits are documented here. /cloudwatch/faqs/;What regions is CloudWatch Logs Insights available in?;Logs Insights is available in US West (Oregon), US West (N. California), US East (Ohio), US East (N. Virginia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo). /cloudwatch/faqs/;What type of queries does CloudWatch Logs Insights support?;You can write queries containing aggregations, filters, regular expressions, and text searches. You can also extract data from log events to create ephemeral fields, which can be further processed by the query language to help you access the information you are looking for. The query language supports string, numeric, and mathematical functions, such as concat, strlen, trim, log, and sqrt, among others. You can also use boolean and logical expressions, and aggregate functions such as min, max, sum, average, and percentile, among others. You can find additional details about the query language and supported functions here. /cloudwatch/faqs/;What query commands and functions can I use with CloudWatch Logs Insights?;You can find a list of query commands here. You can find a list of supported functions here. 
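A hedged sketch, assuming boto3 and a hypothetical log group, of running a Logs Insights query programmatically as mentioned above; the query counts log events per five-minute bin over the last hour.

    import time
    import boto3

    logs = boto3.client("logs")

    start = logs.start_query(
        logGroupName="/my-app/application",      # hypothetical log group
        startTime=int(time.time()) - 3600,
        endTime=int(time.time()),
        queryString="fields @timestamp, @message | stats count() by bin(5m)",
    )

    # Poll until the query finishes, then read the results.
    while True:
        result = logs.get_query_results(queryId=start["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            break
        time.sleep(1)
    print(result["results"])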
/cloudwatch/faqs/;What data visualizations can I use with CloudWatch Logs Insights?;You can use visualizations to identify trends and patterns that occur over time within your logs. Logs Insights supports visualizing data using line charts and stacked area charts. It generates visualizations for all queries containing one or more aggregate functions, where data is grouped over a time interval specified using the bin() function. You can find additional details about visualizing timeseries data here. /cloudwatch/faqs/;Can I use regular expressions with CloudWatch Logs Insights?;You can use Java-style regular expressions with Logs Insights. Regular expressions can be used in the filter command. You can find examples of queries with regular expressions using the in-product help or here. /cloudwatch/faqs/;How do I escape special characters with CloudWatch Logs Insights queries?;You can use backticks to escape special characters. Log field names that contain characters other than alphanumeric characters, @, and . require escaping with backticks. /cloudwatch/faqs/;Why do certain log fields have a “@” sign and others don’t?;System fields generated by Logs Insights begin with @. Logs Insights currently generates 3 system fields @message which contains the raw, unparsed log event as sent to CloudWatch, @logStream which contains the name of the source that generated the log event, and @timestamp which contains the time when the log event was added to CloudWatch. /cloudwatch/faqs/;Can I query historical logs with CloudWatch Logs Insights?;Logs Insights enables you to query log data that was added to CloudWatch Logs on or after November 5, 2018. /cloudwatch/faqs/;Can I search for log events from a specific log stream?;"You can search for log events from a specific log stream by adding the query command filter @logStream = ""log_stream_name"" to your log query." /cloudwatch/faqs/;Today I use an AWS Partner ISV solution to analyze my logs from CloudWatch. What does CloudWatch Logs Insights change for me?;CloudWatch Logs already supports integration options with other AWS Services such as Amazon Kinesis, Amazon Kinesis Data Firehose, Amazon Elasticsearch and AWS Partner ISV solutions such as Splunk, Sumo Logic, and DataDog, among others, to provide you with choice and flexibility across all environments, for your custom log processing, enrichment, analytics, and visualization needs. In addition, the query capabilities of CloudWatch Logs Insights are available for programmatic access through the AWS SDK, to facilitate AWS ISV Partners to build deeper integrations, advanced analytics, and additional value on top of CloudWatch Logs Insights. /cloudwatch/faqs/;How will I benefit from having access to query capabilities of CloudWatch Logs Insights through an AWS ISV Partner solution?;ISV Partner integrations with CloudWatch Logs Insights enable you to bring in your log data into one place and have the ability to analyze using the tools and frameworks of your choice in a high performance, cost-effective way, without having to move large amounts of data. It also provides you with faster access to your logs by removing the associated data transfer latencies and eliminates the operational complexities of configuring and maintaining certain data transfers. /cloudwatch/faqs/;What kind of sensitive data can I protect in CloudWatch Logs?;When you create the data protection policy in CloudWatch Logs, you can specify the data you want to protect. 
There are many data identifiers for you to choose from, such as email addresses, driver’s licenses from many countries, credit card numbers, addresses, and more. The variety of targeted data identifiers provides the flexibility to choose what sensitive data is used by your applications and mask the sensitive data that does not need to be easily accessible. It is important that you decide what information is sensitive to your application and select the relevant identifiers for your use cases. /cloudwatch/faqs/;What types of CloudWatch Alarms can be created?;You can create an alarm to monitor any Amazon CloudWatch metric in your account. For example, you can create alarms on an Amazon EC2 instance CPU utilization, Amazon ELB request latency, Amazon DynamoDB table throughput, Amazon SQS queue length, or even the charges on your AWS bill. /cloudwatch/faqs/;What actions can I take from a CloudWatch Alarm?;When you create an alarm, you can configure it to perform one or more automated actions when the metric you chose to monitor exceeds a threshold you define. For example, you can set an alarm that sends you an email, publishes to an SQS queue, stops or terminates an Amazon EC2 instance, or executes an Auto Scaling policy. Since Amazon CloudWatch alarms are integrated with Amazon Simple Notification Service, you can also use any notification type supported by SNS. You can use the AWS Systems Manager OpsCenter action to automatically create an OpsItem when an alarm enters the ALARM state. This helps you to quickly diagnose and remediate issues with AWS resources from a single console. /cloudwatch/faqs/;What thresholds can I set to trigger a CloudWatch Alarm?;When you create an alarm, you first choose the Amazon CloudWatch metric you want it to monitor. Next, you choose the evaluation period (e.g., five minutes or one hour) and a statistical value to measure (e.g., Average or Maximum). To set a threshold, set a target value and choose whether the alarm will trigger when the value is greater than (>), greater than or equal to (>=), less than (<), or less than or equal to (<=) that value. /cloudwatch/faqs/;My CloudWatch Alarm is constantly in the Alarm state, what did I do wrong?;Alarms continue to evaluate metrics against your chosen threshold, even after they have already triggered. This allows you to view its current up-to-date state at any time. You may notice that one of your alarms stays in the ALARM state for a long time. If your metric value is still in breach of your threshold, the alarm will remain in the ALARM state until it no longer breaches the threshold. This is normal behavior. If you want your alarm to treat this new level as OK, you can adjust the alarm threshold accordingly. /cloudwatch/faqs/;How long can I view my Alarm history?;Alarm history is available for 14 days. To view your alarm history, log in to CloudWatch in the AWS Management Console, choose Alarms from the menu at left, select your alarm, and click the History tab in the lower panel. There you will find a history of any state changes to the alarm as well as any modifications to the alarm configuration. /cloudwatch/faqs/;What is CloudWatch Dashboards?;Amazon CloudWatch Dashboards allow you to create, customize, interact with, and save graphs of AWS resources and custom metrics. /cloudwatch/faqs/;How do I get started with CloudWatch Dashboards?;To get started, visit the Amazon CloudWatch Console and select “Dashboards”. Click the “Create Dashboard” button. 
You can also copy the desired view from Automatic Dashboards by clicking on Options -> “Add to Dashboard”. /cloudwatch/faqs/;What are the advantages of Automatic Dashboards?;Automatic Dashboards are pre-built with AWS service recommended best practices, remain resource aware, and dynamically update to reflect the latest state of important performance metrics. You can now filter and troubleshoot to a specific view without adding additional code to reflect the latest state of your AWS resources. Once you have identified the root cause of a performance issue, you can quickly act by going directly to the AWS resource. /cloudwatch/faqs/;Do the dashboards support auto refresh?;Yes. Dashboards will auto refresh while you have them open. /cloudwatch/faqs/;Can I share my dashboard?;Yes, a dashboard is available to anyone with the correct permissions for the account with the dashboard. /cloudwatch/faqs/;What is CloudWatch Events?;Amazon CloudWatch Events (CWE) is a stream of system events describing changes in your AWS resources. The events stream augments the existing CloudWatch Metrics and Logs streams to provide a more complete picture of the health and state of your applications. You write declarative rules to associate events of interest with automated actions to be taken. /cloudwatch/faqs/;What services emit CloudWatch Events?;Currently, Amazon EC2, Auto Scaling, and AWS CloudTrail are supported. Via AWS CloudTrail, mutating API calls (i.e., all calls except Describe*, List*, and Get*) across all services are visible in CloudWatch Events. /cloudwatch/faqs/;What can I do once an event is received?;When an event matches a rule you've created in the system, you can automatically invoke an AWS Lambda function, relay the event to an Amazon Kinesis stream, notify an Amazon SNS topic, or invoke a built-in workflow. /cloudwatch/faqs/;Can I generate my own events?;Yes. Your applications can emit custom events by using the PutEvents API, with a payload uniquely suited to your needs. /cloudwatch/faqs/;Can I do things on a fixed schedule?;CloudWatch Events is able to generate events on a schedule you set by using the popular Unix cron syntax. By monitoring for these events, you can implement a scheduled application. /cloudwatch/faqs/;What is the difference between CloudWatch Events and AWS CloudTrail?;CloudWatch Events is a near real time stream of system events that describe changes to your AWS resources. With CloudWatch Events, you can define rules to monitor for specific events and perform actions in an automated manner. AWS CloudTrail is a service that records API calls for your AWS account and delivers log files containing API calls to your Amazon S3 bucket or a CloudWatch Logs log group. With AWS CloudTrail, you can look up API activity history related to creation, deletion and modification of AWS resources and troubleshoot operational or security issues. /cloudwatch/faqs/;What is the difference between CloudWatch Events and AWS Config?;AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. Config rules help you determine whether configuration changes are compliant. CloudWatch Events is for reacting in near real time to resource state changes. It doesn’t render a verdict on whether the changes comply with policy or give detailed history like Config/Config Rules do. It is a general purpose event stream. 
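A minimal sketch, assuming boto3 and illustrative values, of the kind of alarm described in the CloudWatch Alarms questions earlier in this section: it watches average CPU utilization over five-minute periods and notifies an SNS topic when the threshold is exceeded.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="HighCPUUtilization",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
        Statistic="Average",
        Period=300,                               # five-minute evaluation period
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],       # hypothetical SNS topic
    )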
/cloudwatch/faqs/;What is CloudWatch Container Insights?;CloudWatch Container Insights is a feature for monitoring, troubleshooting, and alarming on your containerized applications and microservices. Container Insights simplifies the isolation and analysis of performance issues impacting your container environment. DevOps and systems engineers have access to automatic dashboards in the CloudWatch console, giving them end-to-end operational visibility of metrics, logs, and distributed traces summarizing the performance and health of their Amazon Elastic Container Service for Kubernetes (EKS), Amazon Elastic Container Service (ECS), AWS Fargate, and Kubernetes clusters by pods/tasks, containers, and services. /cloudwatch/faqs/;How can I get started with CloudWatch Container Insights?;You can get started collecting detailed performance metrics, logs, and metadata from your containers and clusters in just a few clicks by following these steps in the CloudWatch Container Insights documentation. /cloudwatch/faqs/;How is CloudWatch Container Insights priced?;CloudWatch Container Insights automatically collects custom metrics from performance events ingested as CloudWatch Logs from your container environment. More details on pricing is available on the CloudWatch pricing page. /cloudwatch/faqs/;What is Prometheus and why do I want to collect Prometheus metrics in CloudWatch?;Prometheus is a popular open source monitoring project, part of the Cloud Native Compute Foundation (CNCF). The open source community has built over 150 plugins and defined a framework that DevOps teams can use to expose custom metrics to be collected using a pull-based approach from their applications. With this new feature, DevOps teams can automatically discover services for containerized workloads such as AWS App Mesh, NGINX, and Java/JMX. They can then expose custom metrics on those services, and ingest the metrics in CloudWatch. By curating the collection and aggregation of Prometheus metrics, CloudWatch users can monitor, troubleshoot, and alarm on application performance degradation and failures faster while reducing the number of monitoring tools required. /cloudwatch/faqs/;How does pricing work when ingesting Prometheus metrics from my container environments?;You will be charged for what you use for the following: (1) CloudWatch Logs ingested by the Gigabyte (GB), (2) CloudWatch Logs stored, and (3) CloudWatch custom metrics. Please refer to the CloudWatch pricing page for pricing details in your AWS Region. /cloudwatch/faqs/;Is the storage retention configurable for Prometheus metrics high cardinality events ingested as CloudWatch Logs?;Yes. Each Kubernetes (k8s) cluster has its own log group for the events (e.g., /aws/containerinsights//prometheus) with their own configurable retention period. For more details, please refer to the documentation on log group retention. /cloudwatch/faqs/;How does metric storage retention work for Prometheus metrics?;Prometheus metrics are automatically ingested as CloudWatch custom metrics. The retention period is 15 months per metric data point with automatic roll up (<60secs available for 3 hours, one min available for 15 days, 5 min available for 63 days, one hour available for 15 months). To learn more, see the documentation on CloudWatch metrics retention. /cloudwatch/faqs/;Are all Prometheus metrics types supported for the Public Beta?;No. Current metric types supported are Gauge and Counters. Histogram and Summary metrics are planned for an upcoming release. 
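A hedged sketch, assuming boto3 and a hypothetical cluster name, of one way to turn on Container Insights for an existing Amazon ECS cluster; the Container Insights documentation referenced above covers EKS/Kubernetes setups, which use a different (agent-based) installation.

    import boto3

    ecs = boto3.client("ecs")

    # Enable Container Insights performance collection for the cluster.
    ecs.update_cluster_settings(
        cluster="my-ecs-cluster",                                     # hypothetical cluster
        settings=[{"name": "containerInsights", "value": "enabled"}],
    )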
/cloudwatch/faqs/;Do you support PromQL as a query language?;No. All metrics are ingested as CloudWatch Logs events and can be queried using CloudWatch Logs Insights queries. For more information, see the documentation on CloudWatch Logs Insights search language syntax. /cloudwatch/faqs/;How can I get started with Internet Monitor?;To use Internet Monitor, you create a monitor and associate your application's resources with it, Amazon Virtual Private Clouds (VPCs), CloudFront distributions, or WorkSpaces directories, to enable Internet Monitor to know where your application's internet traffic is. Internet Monitor then provides internet measurements from AWS that are specific to the locations and networks that communicate with your application. /cloudwatch/faqs/;What are Internet Monitor’s components?;As you explore Internet Monitor, it helps to be familiar with the components and concepts you'll see referenced in the service. Internet Monitor uses or references the following: Monitor, CloudWatch logs, CloudWatch metrics, city-networks, health events, Autonomous System Numbers (ASNs), monitored resource, internet measurements, round-trip time, bytes transferred, and performance and availability scores. /cloudwatch/faqs/;How much does Internet Monitor cost?;Internet Monitor pricing has the following components: A fee per monitored resource, a fee per city-networks, and charges for the diagnostic logs published to CloudWatch Logs. For more information, see the Amazon CloudWatch Internet Monitor pricing page. /cloudwatch/faqs/;Which AWS Regions is Internet Monitor available in?;For Internet Monitor, Regional support depends on the types of resources that you add to your monitor. For Amazon CloudFront distributions and Amazon WorkSpaces directories, Internet Monitor is available in all supported Regions. For Amazon Virtual Private Clouds (VPCs), VPCs from an opt-in Region can be added only to a monitor created in the same Region. For a complete list of supported AWS Regions, see Amazon CloudWatch Internet Monitor endpoints. /cloudwatch/faqs/;What is Amazon CloudWatch Digital Experience Monitoring (DEM)?;Amazon CloudWatch DEM lets you monitor how your end users experience your applications (including performance, availability, and usability). /cloudwatch/faqs/;What is Amazon CloudWatch RUM?;Amazon CloudWatch RUM is a real user monitoring feature that gives you visibility into an application’s client-side performance to help you reduce mean time to resolution (MTTR). With CloudWatch RUM, you can collect client-side data on web application performance in real time to identify and debug issues. It complements the CloudWatch Synthetics data to give you more visibility into the end-user’s digital experience. You can visualize anomalies in performance and use the relevant debugging data (such as error messages, stack traces, and user sessions) to fix performance issues (such as JavaScript errors, crashes, and latencies). You can also understand the range of end-user impacts, including number of sessions, geolocations, or browsers. CloudWatch RUM aggregates data on your users' journey through your application, which can help you determine which features to launch and bug fixes to prioritize. /cloudwatch/faqs/;How can I get started with CloudWatch RUM?;Create an app monitor in CloudWatch RUM and add the lightweight web client in the HTML header of your application. Then start using CloudWatch RUM’s dashboards to receive user insights from different geolocations, devices, platforms, and browsers. 
/cloudwatch/faqs/;What is Amazon CloudWatch Evidently?;Amazon CloudWatch Evidently allows you to conduct experiments and identify unintended consequences of new features before rolling them out for general use, thereby reducing risk related to new feature roll-outs. Evidently allows you to validate new features across the full application stack before release, which makes for a safer release. When launching new features, you can expose them to a smaller user base, monitor key metrics such as page load times or conversions, and then dial up traffic. Evidently also allows developers to try out different designs, collect user data, and release the most effective design in production. It assists you in interpreting and acting on experiment results without the need for advanced statistical knowledge. You can use the insights provided by Evidently’s statistical engine (such as anytime p-value and confidence intervals) to make decisions while an experiment is in progress. /cloudwatch/faqs/;How can I get started with CloudWatch Evidently?;You can use the CloudWatch RUM JavaScript code snippet to collect client-side user journeys and performance metrics. If desired, you can also add custom metrics like conversions using the Evidently API. Next, new features to be tested can be instrumented with the CloudWatch Evidently SDK, which provides the ability to control how users get exposed to new features. Now you can run launches and experiments, using either the AWS console or CLI. /cloudwatch/faqs/;What is Amazon CloudWatch Synthetics?;Amazon CloudWatch Synthetics allows you to monitor application endpoints more easily. It runs tests on your endpoints every minute, 24x7, and alerts you as soon as your application endpoints don’t behave as expected. These tests can be customized to check for availability, latency, transactions, broken or dead links, step by step task completions, page load errors, load latencies for UI assets, complex wizard flows, or checkout flows in your applications. You can also use CloudWatch Synthetics to isolate alarming application endpoints and map them back to underlying infrastructure issues to reduce mean time to resolution. /cloudwatch/faqs/;How can I get started with CloudWatch Synthetics?;It's easy to get started with CloudWatch Synthetics. You can write your first passing canary in minutes. To learn more, visit the documentation on Amazon CloudWatch Synthetics. /cloudwatch/faqs/;When should I use Amazon CloudWatch Evidently and when should I use AWS AppConfig?;The two services can be used separately, but are even better together. /cloudwatch/faqs/;What is Amazon CloudWatch Metrics Insights?;CloudWatch Metrics Insights is a high-performance query engine that helps you slice and dice your operational metrics in real time and create aggregations on the fly using standard SQL queries. Metrics Insights helps you understand the status of your application health and performance by giving you the ability to analyze your metrics at scale. It is integrated with CloudWatch Dashboards, so you can save your queries into your health and performance dashboards to proactively monitor and pinpoint issues quickly. /cloudwatch/faqs/;How can I get started with CloudWatch Metrics Insights?;To get started, just click on the metrics tab on your CloudWatch console, and you will find Metrics Insights as a built-in query engine under Query tab at no additional cost. 
While Metrics Insights comes with standard SQL language, you can also get started with Metrics Insights by using the visual query builder. To use the query builder, you select your metrics of interest, namespaces and dimensions visually, and the console automatically constructs your SQL queries for you, based on your selections. You can use the query editor to type in your raw SQL queries anytime to dive deep and pinpoint issues to further granular detail. Metrics Insights also comes with a set of out of the box sample queries that can help you start monitoring and investigating your application performance instantly. Metrics Insights is also available programmatically through CloudFormation, the AWS SDK, and CLI. /autoscaling/faqs/;What is AWS Auto Scaling?;AWS Auto Scaling is a new AWS service that helps you optimize the performance of your applications while lowering infrastructure costs by easily and safely scaling multiple AWS resources. It simplifies the scaling experience by allowing you to scale collections of related resources that support your application with just a few clicks. AWS Auto Scaling helps you configure consistent and congruent scaling policies across the full infrastructure stack backing your application. AWS Auto Scaling will automatically scale resources as needed to align to your selected scaling strategy, so you maintain performance and pay only for the resources you actually need. /autoscaling/faqs/;What are the benefits of AWS Auto Scaling?;AWS Auto Scaling is a fast, easy way to optimize the performance and costs of your applications. /autoscaling/faqs/;When should I use AWS Auto Scaling?;You should use AWS Auto Scaling if you have an application that uses one or more scalable resources and experiences variable load. A good example would be an e-commerce web application that receives variable traffic through the day. It follows a standard three tier architecture with Elastic Load Balancing for distributing incoming traffic, Amazon EC2 for the compute layer, and DynamoDB for the data layer. In this case, AWS Auto Scaling will scale one or more EC2 Auto Scaling groups and DynamoDB tables that are powering the application in response to the demand curve. /autoscaling/faqs/;How can I get started with AWS Auto Scaling?;AWS Auto Scaling allows you to select your applications based on resource tags or AWS CloudFormation stacks. In just a few clicks, you can create a scaling plan for your application, which defines how each of the resources in your application should be scaled. For each resource, AWS Auto Scaling creates a target tracking scaling policy with the most popular metric for that resource type and keeps it at a target value based on your selected scaling strategy. To set target values for your resource metrics, you can choose from three predefined scaling recommendations that optimize availability, optimize costs, or balance the two. Or, if you prefer, you can define your own target values. AWS Auto Scaling also automatically sets the min/max values for the resources. /autoscaling/faqs/;What are the different ways that I can scale AWS resources?;AWS customers have multiple options for scaling resources. Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. EC2 Auto Scaling can also detect when an instance is unhealthy, terminate it, and launch an instance to replace it. 
When you use EC2 Auto Scaling, your applications gain better fault tolerance, availability, and cost management. /autoscaling/faqs/;When should I use AWS Auto Scaling vs. Amazon EC2 Auto Scaling?;You should use AWS Auto Scaling to manage scaling for multiple resources across multiple services. AWS Auto Scaling lets you define dynamic scaling policies for multiple EC2 Auto Scaling groups or other resources using predefined scaling strategies. Using AWS Auto Scaling to configure scaling policies for all of the scalable resources in your application is faster than managing scaling policies for each resource via its individual service console. It’s also easier, as AWS Auto Scaling includes predefined scaling strategies that simplify the setup of scaling policies. You should also use AWS Auto Scaling if you want to create predictive scaling for EC2 resources. /autoscaling/faqs/;When should I use AWS Auto Scaling vs. Auto Scaling for individual services?;You should use AWS Auto Scaling to manage scaling for multiple resources across multiple services. AWS Auto Scaling enables unified scaling for multiple resources, and has predefined guidance that helps make it easier and faster to configure scaling. If you prefer, you can instead choose to use the individual service consoles, Auto Scaling API, or Application Auto Scaling API to scale individual AWS services. You should also use the individual consoles or API if you want to setup step scaling policies or scheduled scaling, as AWS Auto Scaling creates target tracking scaling policies only. /autoscaling/faqs/;What is Predictive Scaling?;Predictive Scaling is a feature of AWS Auto Scaling that looks at historic traffic patterns and forecasts them into the future to schedule changes in the number of EC2 instances at the appropriate times going forward. Predictive Scaling uses machine learning models to forecast daily and weekly patterns. /autoscaling/faqs/;Which services can I use Predictive Scaling with?;At this time, Predictive Scaling only generates schedules for EC2 instances. /autoscaling/faqs/;How is AWS Auto Scaling different than the scaling capabilities for individual services?;The following table provides a comparison of AWS scaling options. /autoscaling/faqs/;What can I scale with AWS Auto Scaling?;You can use AWS Auto Scaling to setup scaling for the following resources in your application through a single, unified interface: /autoscaling/faqs/;How does AWS Auto Scaling make scaling recommendations?;AWS Auto Scaling bases its scaling recommendations on the most popular scaling metrics and thresholds used for Auto Scaling. It also recommends safe guardrails for scaling by providing recommendations for the minimum and maximum sizes of the resources. This way you can get started quickly and can then fine tune your scaling strategy over time. /autoscaling/faqs/;How do I select an application stack within AWS Auto Scaling?;You can either select an AWS CloudFormation stack or select resources based on common resource tag(s). Please note that currently, ECS services cannot be discovered using tags. /autoscaling/faqs/;How does AWS Auto Scaling discover what resources can scale?;AWS Auto Scaling will scan your selected AWS CloudFormation stack or resources with the specified tags to identify the supported AWS resource types that can be scaled. Please note that currently, ECS services cannot be discovered using tags. 
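The answers above describe how AWS Auto Scaling builds a scaling plan from resources discovered by tags or a CloudFormation stack and then attaches target tracking policies. A minimal sketch of that flow with boto3's "autoscaling-plans" client; the plan name, tag key, and Auto Scaling group name are placeholder assumptions:

import boto3

plans = boto3.client("autoscaling-plans")
# Discover the application's resources by tag and keep the ASG at 50% average CPU.
plans.create_scaling_plan(
    ScalingPlanName="ecommerce-app-plan",
    ApplicationSource={"TagFilters": [{"Key": "app", "Values": ["ecommerce"]}]},
    ScalingInstructions=[{
        "ServiceNamespace": "autoscaling",
        "ResourceId": "autoScalingGroup/web-asg",  # placeholder ASG name
        "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
        "MinCapacity": 2,
        "MaxCapacity": 20,
        "TargetTrackingConfigurations": [{
            "PredefinedScalingMetricSpecification": {
                "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        }],
    }],
)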
/autoscaling/faqs/;Which Regions is AWS Auto Scaling available in?;AWS Auto Scaling is available in Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Sydney), Canada (Central), US West (N. California), Europe (London), Europe (Frankfurt), Europe (Paris), Europe (Milan), US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore). /autoscaling/faqs/;How much does AWS Auto Scaling cost?;Similar to Auto Scaling on individual AWS resources, AWS Auto Scaling is free to use. AWS Auto Scaling is enabled by Amazon CloudWatch, so service fees apply for CloudWatch and your application resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.). /cloudformation/faqs/;What is AWS CloudFormation?;AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources, and provision and manage them in an orderly and predictable fashion. /cloudformation/faqs/;What can developers do with AWS CloudFormation?;Developers can deploy and update compute, database, and many other resources in a simple, declarative style that abstracts away the complexity of specific resource APIs. AWS CloudFormation is designed to allow resource lifecycles to be managed repeatably, predictably, and safely, while allowing for automatic rollbacks, automated state management, and management of resources across accounts and Regions. Recent enhancements and options allow for multiple ways to create resources, including using AWS CDK for coding in higher-level languages, importing existing resources, detecting configuration drift, and a new Registry that makes it easier to create custom types that inherit many core CloudFormation benefits. /cloudformation/faqs/;How is CloudFormation different from AWS Elastic Beanstalk?;These services are designed to complement each other. AWS Elastic Beanstalk provides an environment where you can easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for managing application lifecycle. If your application workloads can be managed as Elastic Beanstalk workloads, you can enjoy a more turn-key experience in creating and updating applications. Behind the scenes, Elastic Beanstalk uses CloudFormation to create and maintain resources. If your application requirements dictate more custom control, the additional functionality of CloudFormation gives you more options to control your workloads. /cloudformation/faqs/;What new concepts does AWS CloudFormation introduce?;CloudFormation introduces four concepts: A template is a JSON or YAML declarative code file that describes the intended state of all the resources you need to deploy your application. A stack implements and manages the group of resources outlined in your template, and allows the state and dependencies of those resources to be managed together. A change set is a preview of changes that will be executed by stack operations to create, update, or remove resources. A stack set is a group of stacks you manage together, which can replicate a stack's resources across multiple accounts and Regions. /cloudformation/faqs/;What resources does AWS CloudFormation support?;To see a complete list of supported AWS resources and their features, visit the Supported AWS Services page in the Release History of the documentation. /cloudformation/faqs/;Can I manage individual AWS resources that are part of an AWS CloudFormation stack?;"Yes, you can.
CloudFormation does not get in the way; you retain full control of all elements of your infrastructure, and can continue using all your existing AWS and third-party tools to manage your AWS resources. However, because CloudFormation can allow for additional rules, best practices, and compliance controls, we recommend that you allow CloudFormation to manage the changes to your resources. This predictable, controlled approach helps in managing hundreds or thousands of resources across your application portfolio." /cloudformation/faqs/;What are the elements of an AWS CloudFormation template?;CloudFormation templates are JSON or YAML-formatted text files composed of five types of elements: /cloudformation/faqs/;How does AWS CloudFormation choose actual resource names?;You can assign logical names to AWS resources in a template. When a stack is created, AWS CloudFormation binds the logical name to the name of the corresponding actual AWS resource. Actual resource names are a combination of the stack and logical resource name. This allows multiple stacks to be created from a template without fear of name collisions between AWS resources. /cloudformation/faqs/;Why can’t I name all my resources?;Although AWS CloudFormation allows you to name some resources (such as Amazon S3 buckets), CloudFormation doesn’t allow this for all resources. Naming resources restricts the reusability of templates and results in naming conflicts when an update causes a resource to be replaced. To minimize these issues, CloudFormation supports resource naming on a case-by-case basis. /cloudformation/faqs/;Can I install software at stack creation time using AWS CloudFormation?;Yes. AWS CloudFormation provides a set of application bootstrapping scripts that enable you to install packages, files, and services on your EC2 instances simply by describing them in your CloudFormation template. For more details and a how-to, see Bootstrapping Applications via AWS CloudFormation. /cloudformation/faqs/;Can I use AWS CloudFormation with Chef?;Yes. AWS CloudFormation can be used to bootstrap both the Chef Server and Chef Client software on your EC2 instances. For more details and a how-to, see Integrating AWS CloudFormation with Chef. /cloudformation/faqs/;Can I use AWS CloudFormation with Puppet?;Yes. AWS CloudFormation can be used to bootstrap both the Puppet Master and Puppet Client software on your EC2 instances. For more details and a how-to, see Integrating AWS CloudFormation with Puppet. /cloudformation/faqs/;Can I use AWS CloudFormation with Terraform?;Yes. CloudFormation can bootstrap your Terraform engine on your EC2 instances, and you can use Terraform resource providers to create resources in stacks, leveraging stack state management, dependencies, stabilization and rollback. /cloudformation/faqs/;Does AWS CloudFormation support Amazon EC2 tagging?;Yes. Amazon EC2 resources that support the tagging feature can also be tagged in an AWS template. The tag values can refer to template parameters, other resource names, resource attribute values (e.g., addresses), or values computed by simple functions (e.g., a concatenated list of strings). CloudFormation automatically tags Amazon EBS volumes and Amazon EC2 instances with the name of the CloudFormation stack they are part of. /cloudformation/faqs/;Do I have access to the Amazon EC2 instance, or Auto Scaling Launch Configuration user-data fields?;Yes.
You can use simple functions to concatenate string literals and attribute values of the AWS resources and pass them to user-data fields in your template. Please refer to our sample templates to learn more about these easy to use functions. /cloudformation/faqs/;What happens when one of the resources in a stack cannot be created successfully?;By default, the “automatic rollback on error” feature is enabled. This will direct CloudFormation to only create or update all resources in your stack if all individual operations succeed. If they do not, CloudFormation reverts the stack to the last known stable configuration. This is useful when, for example, you accidentally exceed your default limit of Elastic IP addresses, or you don’t have access to an EC2 AMI that you’re trying to run. This feature enables you to rely on the fact that stacks are created either fully or not at all, which simplifies system administration and layered solutions built on top of CloudFormation. /cloudformation/faqs/;Can stack creation wait for my application to start up?;Yes. One of the options CloudFormation provides is a WaitCondition resource that acts as a barrier, blocking the creation of other resources until a completion signal is received from an external source such as your application or management system. Other options include creating custom logic with AWS Lambda functions. /cloudformation/faqs/;Can I save my data when a stack is deleted?;Yes. CloudFormation allows you to define deletion policies for resources in the template. You can specify that snapshots be created for Amazon EBS volumes or Amazon RDS database instances before they are deleted. You can also specify that a resource should be preserved and not deleted when the stack is deleted. This is useful for preserving Amazon S3 buckets when the stack is deleted. /cloudformation/faqs/;Can I update my stack after it has been created?;Yes. You can use CloudFormation to modify and update the resources in your existing stacks in a controlled and predictable way. By using templates to manage your stack changes, you have the ability to apply version control to your AWS infrastructure just as you do with the software running on it. /cloudformation/faqs/;Can I create stacks in a Virtual Private Cloud (VPC)?;Yes. CloudFormation supports creating VPCs, subnets, gateways, route tables and network ACLs as well as creating resources such as elastic IPs, Amazon EC2 Instances, EC2 security groups, auto scaling groups, elastic load balancers, Amazon RDS database instances and Amazon RDS security groups in a VPC. /cloudformation/faqs/;How can I participate in the CloudFormation community?;Please join the AWS CloudFormation GitHub community. /cloudformation/faqs/;Can I manage resources created outside of CloudFormation?;Yes! With Resource Import, you can bring an existing resource into AWS CloudFormation management using resource import. /cloudformation/faqs/;How do I sign up for AWS CloudFormation?;To sign up for CloudFormation, click Create Free Account on the CloudFormation product page. After signing up, please refer to the CloudFormation documentation, which includes our Getting Started Guide. /cloudformation/faqs/;Why am I asked to verify my phone number when signing up for AWS CloudFormation?;CloudFormation registration requires you to have a valid phone number and email address on file with AWS in case we ever need to contact you. 
Verifying your phone number takes only a few minutes and involves receiving an automated phone call during the registration process and entering a PIN using the phone keypad. /cloudformation/faqs/;How do I get started after I have signed up?;The best way to get started with CloudFormation is to work through the Getting Started Guide, which is included in our technical documentation. Within a few minutes, you will be able to deploy and use one of our sample templates that illustrate how to create the infrastructure needed to run applications such as WordPress. There are various other sources of CloudFormation training, from third-party curriculum providers to tutorials and articles on the web. For more information, check out the CloudFormation Resources. /cloudformation/faqs/;Are there sample templates that I can use to check out AWS CloudFormation?;Yes, CloudFormation includes sample templates that you can use to test drive the offering and explore its functionality. Our sample templates illustrate how to interconnect and use multiple AWS resources in concert, following best practices for multiple Availability Zone redundancy, scale out, and alarming. To get started, all you need to do is go to the AWS Management Console, click Create Stack, and follow the steps to select and launch one of our samples. Once created, select your stack in the console and review the Template and Parameter tabs to look at the details of the template file used to create the respective stack. Sample templates are also available on GitHub. /cloudformation/faqs/;What is the AWS CloudFormation Registry?;The AWS CloudFormation Registry is a managed service that lets you register, use, and discover AWS and third-party resource types. Third-party resource types must be registered before they can be used to provision resources with AWS CloudFormation templates. Please see Using the AWS CloudFormation registry in our documentation for details. /cloudformation/faqs/;What are resource types in AWS CloudFormation?;A resource provider is a set of resource types with specifications and handlers that control the lifecycle of underlying resources via create, read, update, delete, and list operations. You can use resource providers to model and provision resources using CloudFormation. For example, AWS::EC2::Instance is a resource type from the Amazon EC2 provider. You can use this type to model and provision an Amazon EC2 instance using CloudFormation. Using the CloudFormation Registry, you can build and use resource providers to model and provision third-party resources such as SaaS monitoring, team productivity, or source code management resources. /cloudformation/faqs/;What is the difference between AWS and third-party resource providers?;The difference between AWS and third-party resource providers is their origin. AWS resource providers are built and maintained by Amazon and AWS to manage AWS resources and services. For example, three AWS resource providers help you manage Amazon DynamoDB, AWS Lambda, and Amazon EC2 resources. These providers contain resource types such as AWS::DynamoDB::Table, AWS::Lambda::Function, and AWS::EC2::Instance. For a complete reference, go to our documentation. /cloudformation/faqs/;What is a resource schema?;A resource schema defines a resource type in a structured and consistent format. This schema is also used to validate the definition of a resource type.
The schema includes all the supported parameters and attributes for a given resource type, as well as the required permissions to create the resource with the least privileges possible. /cloudformation/faqs/;How do I develop resource types?;Use the AWS CloudFormation CLI to build resource providers. You start by defining a simple declarative schema for your resources, which includes permissions required and relationships to other resources. You then use the CloudFormation CLI to generate the scaffolding for resource lifecycle handlers (Create, Read, Update, Delete, and List), along with test stubs for unit and integration testing. /cloudformation/faqs/;How do I register a resource provider?;You can either use the open source AWS CloudFormation CLI or directly call the RegisterType and related Registry APIs available via the AWS SDKs and AWS CLI. For more details, please see Using the AWS CloudFormation registry in our documentation. AWS resource providers are available out of the box and do not require any additional registration steps before use. /cloudformation/faqs/;How does CloudFormation Public Registry relate to the CloudFormation Registry?;What is the AWS CloudFormation Public Registry /cloudformation/faqs/;Is there a cost for using third-party Resource Types available on the CloudFormation Public Registry?;What is the AWS CloudFormation Public Registry /cloudformation/faqs/;Does AWS verify publishers of third-party extensions on the CloudFormation Public Registry?;What is the AWS CloudFormation Public Registry /cloudformation/faqs/;What is the difference between a resource and a module?;A Resource Type is a code package containing provisioning logic, which allows you to manage the lifecycle of a resource like an Amazon EC2 Instance or an Amazon DynamoDB Table from creation to deletion, abstracting away complex API interactions. Resource Types contain a schema, which defines the shape and properties of a resource, and the necessary logic to provision, update, delete, and describe a resource. Examples of third-party Resource Types in the CloudFormation Public Registry include a Datadog monitor, a MongoDB Atlas Project, and an Atlassian Opsgenie User, among others. Modules are building blocks that can be reused across multiple CloudFormation templates and are used just like native CloudFormation resources. These building blocks can be for a single resource, like best practices for defining an Amazon Elastic Compute Cloud (Amazon EC2) instance, or they can be for multiple resources, to define common patterns of application architecture. /cloudformation/faqs/;How do I develop and add my own resource or module to the AWS CloudFormation Registry?;You can refer to this link to develop and add your own resource or module to the AWS CloudFormation Registry. You can choose to publish it privately or to the Public Registry. /cloudformation/faqs/;How much does AWS CloudFormation cost?;"There is no additional charge for using AWS CloudFormation with resource providers in the following namespaces: AWS::*, Alexa::*, and Custom::*. In this case, you pay for AWS resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS CloudFormation just as if you had created them manually. You only pay for what you use, as you use it; there are no minimum fees and no required upfront commitments." /cloudformation/faqs/;Will I be charged for resources that were rolled back during a failed stack creation attempt?;Yes.
Charges for AWS resources created during template instantiation apply irrespective of whether the stack as a whole could be created successfully. /cloudformation/faqs/;Are there limits to the number of templates or stacks?;For more information on the maximum number of AWS CloudFormation stacks that you can create, see Stacks in AWS CloudFormation quotas. Complete our request for a higher limit here, and we will respond to your request within two business days. /cloudformation/faqs/;Are there limits to the size of description fields?;For more information, see Template Description in AWS CloudFormation quotas and Parameters, Resources and Outputs in the AWS documentation. /cloudformation/faqs/;Are there limits to the number of parameters or outputs in a template?;For more information on the number of parameters and outputs you can specify in a template, see Parameters and Outputs sections in AWS CloudFormation quotas. /cloudformation/faqs/;Are there limits to the number of resources that can be created in a stack?;For more information on the number of resources you can declare in a template, see Resources in AWS CloudFormation quotas. Creating smaller templates and stacks and modularizing your application across multiple stacks is a best practice to minimize blast radius for your resource changes, and to troubleshoot issues with multiple resource dependencies faster, since smaller groups of resources will have less complex dependencies than larger groups. /cloudformation/faqs/;What are the AWS CloudFormation service access points in each region?;Endpoints for each region are available in AWS CloudFormation endpoints in the technical documentation. /cloudformation/faqs/;What are the AWS regions where AWS CloudFormation is currently available?;Please refer to Regional Products and Services for details of CloudFormation availability by region. /cloudtrail/faqs/;What is AWS CloudTrail?;CloudTrail enables auditing, security monitoring, and operational troubleshooting by tracking user activity and API usage. CloudTrail logs, continuously monitors, and retains account activity related to actions across your AWS infrastructure, giving you control over storage, analysis, and remediation actions. /cloudtrail/faqs/;What are the benefits of CloudTrail?;CloudTrail helps you prove compliance, improve security posture, and consolidate activity records across Regions and accounts. CloudTrail provides visibility into user activity by recording actions taken on your account. CloudTrail records important information about each action, including who made the request, the services used, the actions performed, parameters for the actions, and the response elements returned by the AWS service. This information helps you track changes made to your AWS resources and troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards. For more details, refer to the AWS compliance whitepaper Security at Scale: Logging in AWS. /cloudtrail/faqs/;Who should use CloudTrail?;Use CloudTrail if you need to audit activity, monitor security, or troubleshoot operational issues. /cloudtrail/faqs/;If I am a new AWS customer or existing AWS customer and don’t have CloudTrail set up, do I need to enable or set up anything to view my account activity?;No, nothing is required to begin viewing your account activity. You can visit the AWS CloudTrail console or AWS CLI and begin viewing up to the past 90 days of account activity. 
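Because the last 90 days of management events are available without any setup, you can query Event History directly from the CLI or SDKs as described above. A minimal boto3 sketch of the same lookup; the event name filter is only an illustrative choice:

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")
# Look up recent management events for a single API action over the last week.
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    MaxResults=50,
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))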
/cloudtrail/faqs/;Does the CloudTrail Event History show all account activity within my account?;AWS CloudTrail will only show the results of the CloudTrail Event history for the current Region you are viewing for the last 90 days, and supports a range of AWS services. These events are limited to management events that create, modify, and delete API calls and account activity. For a complete record of account activity, including all management events, data events, and read-only activity, you must configure a CloudTrail trail. /cloudtrail/faqs/;What search filters can I use to view my account activity?;You can specify Time range and one of the following attributes: event name, user name, resource name, event source, event ID, and resource type. /cloudtrail/faqs/;Can I use the lookup-events CLI command even if I don’t have a trail configured?;Yes, you can visit the CloudTrail console or use the CloudTrail API/CLI and begin viewing the past 90 days of account activity. /cloudtrail/faqs/;What additional CloudTrail features are available after creating a trail?;Set up a CloudTrail trail to deliver your CloudTrail events to Amazon Simple Storage Service (S3), Amazon CloudWatch Logs, and Amazon CloudWatch Events. This helps you use features to archive, analyze, and respond to changes in your AWS resources. /cloudtrail/faqs/;Can I restrict user access from viewing the CloudTrail Event History?;"Yes, CloudTrail integrates with AWS Identity and Access Management (IAM), which helps you control access to CloudTrail and to other AWS resources that CloudTrail requires. This includes the ability to restrict permissions to view and search account activity. Remove the ""cloudtrail:LookupEvents"" from the Users IAM policy to prevent that IAM user from viewing account activity." /cloudtrail/faqs/;Is there any cost associated with CloudTrail Event History being enabled on my account upon creation?;There is no cost for viewing or searching account activity with CloudTrail Event History. /cloudtrail/faqs/;Can I turn off CloudTrail Event History for my account?;For any CloudTrail trails created, you can stop logging or delete the trails. This will also stop account activity delivery to the Amazon S3 bucket you designated as part of your trail configuration and delivery to CloudWatch Logs if configured. Account activity for the past 90 days will still be collected and visible within the CloudTrail console and through the AWS Command Line Interface (CLI). /cloudtrail/faqs/;What services are supported by CloudTrail?;CloudTrail records account activity and service events from most AWS services. For the list of supported services, see CloudTrail Supported Services in the CloudTrail User Guide. /cloudtrail/faqs/;Are API calls made from the AWS Management Console recorded?;Yes. CloudTrail records API calls made from any client. The AWS Management Console, AWS Software Development Kits (SDKs), command line tools, and higher-level AWS services call AWS API operations, so these calls are recorded. /cloudtrail/faqs/;Where are my log files stored and processed before they are delivered to my S3 bucket?;Activity information for services with Regional endpoints (such as Amazon Elastic Compute Cloud [EC2] or Amazon Relational Database Service [RDS]) is captured and processed in the same Region as the action is made. It is then delivered to the Region associated with your S3 bucket. 
Activity information for services with single endpoints such as IAM and AWS Security Token Service (STS) is captured in the Region where the endpoint is located. It is then processed in the Region where the CloudTrail trail is configured and delivered to the Region associated with your S3 bucket. /cloudtrail/faqs/;What does it mean to apply a trail to all AWS Regions?;Applying a trail to all AWS Regions refers to creating a trail that will record AWS account activity across all Regions in which your data is stored. This setting also applies to any new Regions added. For more details on Regions and partitions, refer to the Amazon Resource Names and AWS Service Namespaces page. /cloudtrail/faqs/;What are the benefits of applying a trail to all Regions?;You can create and manage a trail across all Regions in the partition in one API call or a few selections. You will receive a record of account activity made in your AWS account across all Regions to one S3 bucket or CloudWatch Logs group. When AWS launches a new Region, you will receive the log files containing event history for the new Region without taking any action. /cloudtrail/faqs/;How do I apply a trail to all Regions?;In the CloudTrail console, you select yes to apply to all Regions on the trail configuration page. If you are using the SDKs or AWS CLI, you set IsMultiRegionTrail to true. /cloudtrail/faqs/;What happens when I apply a trail to all Regions?;Once you apply a trail to all Regions, CloudTrail will create a new trail by replicating the trail configuration. CloudTrail will record and process the log files in each Region and deliver log files containing account activity across all Regions to a single S3 bucket and a single CloudWatch Logs log group. If you specified an optional Amazon Simple Notification Service (SNS) topic, CloudTrail will deliver Amazon SNS notifications for all log files delivered to a single SNS topic. /cloudtrail/faqs/;Can I apply an existing trail to all Regions?;Yes. You can apply an existing trail to all Regions. When you apply an existing trail to all Regions, CloudTrail will create a new trail for you in all Regions. If you previously created trails in other Regions, you can view, edit, and delete those trails from the CloudTrail console. /cloudtrail/faqs/;How long will it take for CloudTrail to replicate the trail configuration to all Regions?;Typically, it will take less than 30 seconds to replicate the trail configuration to all Regions. /cloudtrail/faqs/;How many trails can I create in a Region?;You can create up to five trails in a Region. A trail that applies to all Regions exists in each Region and is counted as one trail in each Region. /cloudtrail/faqs/;What is the benefit of creating multiple trails in a Region?;With multiple trails, different stakeholders such as security administrators, software developers, and IT auditors can create and manage their own trails. For example, a security administrator can create a trail that applies to all Regions and configure encryption using one AWS Key Management Service (AWS KMS) key. A developer can create a trail that applies to one Region for troubleshooting operational issues. /cloudtrail/faqs/;Does CloudTrail support resource-level permissions?;Yes. Using resource-level permissions, you can write granular access control policies to allow or deny access to specific users for a particular trail. For more details, go to the CloudTrail documentation.
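Setting IsMultiRegionTrail to true, as described above, can be done from the SDKs as well as the console. A minimal boto3 sketch; the trail and bucket names are placeholders, and the bucket must already have a policy that allows CloudTrail to write to it:

import boto3

cloudtrail = boto3.client("cloudtrail")
# Create a trail that applies to all Regions, then start logging to it.
cloudtrail.create_trail(
    Name="my-multi-region-trail",
    S3BucketName="my-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="my-multi-region-trail")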
/cloudtrail/faqs/;How can I secure my CloudTrail log files?;By default, CloudTrail log files are encrypted using S3 server-side encryption (SSE) and placed into your S3 bucket. You can control access to log files by applying IAM or S3 bucket policies. You can add an additional layer of security by enabling S3 multi-factor authentication (MFA) Delete on your S3 bucket. For more details on creating and updating a trail, see the CloudTrail documentation. /cloudtrail/faqs/;Where can I download a sample S3 bucket policy and an SNS topic policy?;You can download a sample S3 bucket policy and an SNS topic policy from the CloudTrail S3 bucket. You must update the sample policies with your information before you apply them to your S3 bucket or SNS topic. /cloudtrail/faqs/;How long can I store my activity log files?;You control the retention policies for your CloudTrail log files. By default, log files are stored indefinitely. You can use S3 object lifecycle management rules to define your own retention policy. For example, you might want to delete old log files or archive them to Amazon Simple Storage Service Glacier (S3 Glacier). /cloudtrail/faqs/;What information is available in an event?;An event contains information about the associated activity: who made the request, the services used, the actions performed, the parameters for the action, and the response elements returned by the AWS service. For more details, see the CloudTrail Event Reference section of the user guide. /cloudtrail/faqs/;How long does it take CloudTrail to deliver an event for an API call?;Typically, CloudTrail delivers an event within 5 minutes of the API call. For more information on how CloudTrail works, see here. /cloudtrail/faqs/;How often will CloudTrail deliver log files to my S3 bucket?;CloudTrail delivers log files to your S3 bucket approximately every five minutes. CloudTrail does not deliver log files if no API calls are made on your account. /cloudtrail/faqs/;Can I be notified when new log files are delivered to my S3 bucket?;Yes. You can turn on Amazon SNS notifications to take immediate action on delivery of new log files. /cloudtrail/faqs/;What happens if CloudTrail is turned on for my account but my S3 bucket is not configured with the correct policy?;CloudTrail log files are delivered in accordance with the S3 bucket policies that you have in place. If the bucket policies are misconfigured, CloudTrail will not be able to deliver log files. /cloudtrail/faqs/;What are data events?;Data events provide insights into the resource (data plane) operations performed on or within the resource itself. Data events are often high-volume activities and include operations such as S3 object-level API operations and the AWS Lambda function Invoke API. Data events are deactivated by default when you configure a trail. To record CloudTrail data events, you must explicitly add the supported resources or resource types you want to collect activity on. Unlike management events, data events incur additional costs. For more information, see CloudTrail pricing. /cloudtrail/faqs/;How can I consume data events?;Data events that are recorded by CloudTrail are delivered to S3, similar to management events. Once enabled, these events are also available in Amazon CloudWatch Events. /cloudtrail/faqs/;What are S3 data events? How do I record them?;S3 data events represent API activity on S3 objects. To get CloudTrail to record these actions, you specify an S3 bucket in the data events section when creating a new trail or modifying an existing one.
Any API actions on the objects within the specified S3 bucket are recorded by CloudTrail. /cloudtrail/faqs/;What are Lambda data events? How do I record them?;Lambda data events record runtime activity of your Lambda functions. With Lambda data events, you can get details on Lambda function executions, such as which IAM user or service made the Invoke API call, when the call was made, and which function was invoked. All Lambda data events are delivered to an S3 bucket and CloudWatch Events. You can turn on logging for Lambda data events using the CLI or CloudTrail console and select which Lambda functions get logged by creating a new trail or editing an existing trail. /cloudtrail/faqs/;Can I add a delegated administrator to my organization?;Yes, CloudTrail now supports adding up to three delegated administrators per organization. /cloudtrail/faqs/;Who is the owner of an organization trail or event data store at the organization level created by a delegated admin?;The management account will remain the owner of any organization trails or event data stores created at the organization level, regardless of whether they were created by a delegated admin account or by the management account. /cloudtrail/faqs/;In which Regions is delegated administrator support available?;Currently, delegated administrator support for CloudTrail is available in all Regions where AWS CloudTrail is available, except for China (Beijing, operated by Sinnet) and China (Ningxia, operated by NWCD). /cloudtrail/faqs/;What are CloudTrail Insights events?;CloudTrail Insights events help you identify unusual activity in your AWS accounts such as spikes in resource provisioning, bursts of AWS Identity and Access Management (IAM) actions, or gaps in periodic maintenance activity. CloudTrail Insights uses machine learning (ML) models that continually monitor CloudTrail write management events for abnormal activity. /cloudtrail/faqs/;What type of activity does CloudTrail Insights help identify?;CloudTrail Insights detects unusual activity by analyzing CloudTrail write management events within an AWS account and a Region. An unusual or abnormal event is defined as the volume of AWS API calls that deviates from what is expected from a previously established operating pattern or baseline. CloudTrail Insights adapts to changes in your normal operating patterns by considering time-based trends in your API calls and applying adaptive baselines as workloads change. /cloudtrail/faqs/;How does CloudTrail Insights work with other AWS services that use anomaly detection?;CloudTrail Insights identifies unusual operational activity in your AWS accounts that helps you address operational issues, minimizing operational and business impact. Amazon GuardDuty focuses on improving security in your account, providing threat detection by monitoring account activity. Amazon Macie is designed to improve data protection in your account by discovering, classifying, and protecting sensitive data. These services provide complementary protections against different types of problems that could arise in your account. /cloudtrail/faqs/;Do I need to have CloudTrail set up in order for CloudTrail Insights to work?;Yes. CloudTrail Insights events are configured on individual trails, so you must have at least one trail set up. When you turn on CloudTrail Insights events for a trail, CloudTrail starts monitoring the write management events captured by that trail for unusual patterns.
If CloudTrail Insights detects unusual activity, a CloudTrail Insights event is logged to the delivery destination specified in the trail definition. /cloudtrail/faqs/;What kinds of events does CloudTrail Insights monitor?;CloudTrail Insights tracks unusual activity for write management API operations. /cloudtrail/faqs/;How do I get started?;You can enable CloudTrail Insights events on individual trails in your account by using the console, the CLI, or the SDK. You can also enable CloudTrail Insights events across your organization by using an organization trail configured in your AWS Organizations management account. You can turn on CloudTrail Insights events by choosing the radio button in your trail definition. /cloudtrail/faqs/;Why should I use CloudTrail Lake?;CloudTrail Lake helps you examine incidents by querying all actions logged by CloudTrail and configuration items recorded by AWS Config. It simplifies incident logging by helping remove operational dependencies and provides tools that can help reduce your reliance on complex data processing pipelines that span teams. CloudTrail Lake does not require you to move and ingest CloudTrail logs elsewhere, which helps maintain data fidelity and reduces the need to deal with low rate limits that throttle your logs. It also provides near real-time latencies as it is fine-tuned to process high-volume structured logs, making them available for incident investigation. Also, CloudTrail Lake provides a familiar, multi-attribute query experience with SQL and is capable of scheduling and handling multiple concurrent queries. /cloudtrail/faqs/;How does this feature relate to and work with other AWS services?;CloudTrail is the canonical source of logs for user activity and API usage across AWS services. You can use CloudTrail Lake to examine activity across AWS services once the logs are available in CloudTrail. You can query and analyze user activity and impacted resources, and use that data to address issues such as identifying bad actors and baselining permissions. /cloudtrail/faqs/;How can I ingest events from non-AWS sources such as custom applications or third-party sources?;You can find and add partner integrations to start receiving activity events from these applications in a few steps using the CloudTrail console, without having to build and maintain custom integrations. For sources other than the available partner integrations, you can use the new CloudTrail Lake APIs to set up your own integrations and push events to CloudTrail Lake. To get started, see Working with CloudTrail Lake in the CloudTrail User Guide. /cloudtrail/faqs/;When do you recommend using AWS Config advanced query instead of CloudTrail Lake for querying configuration items from AWS Config?;AWS Config advanced query is recommended for customers who want to aggregate and query current-state AWS Config configuration items (CIs). This helps customers with inventory management, security and operational intelligence, cost optimization, and compliance data. AWS Config advanced query is free if you are an AWS Config customer. /cloudtrail/faqs/;If I enable ingestion of configuration items from AWS Config today into CloudTrail Lake, will CloudTrail Lake ingest my historical configuration items (generated before the lake was created) or collect only newly recorded configuration items?;CloudTrail Lake will not ingest AWS Config configuration items that were generated before CloudTrail Lake was configured.
Newly recorded configuration items from AWS Config, at an account level or organization level, will be delivered to the specified CloudTrail Lake event data store. These configuration items will be available in the lake for query for the specified retention period, and can be used for historical data analysis. /cloudtrail/faqs/;Can I always know which user made a particular configuration change by querying CloudTrail Lake?;If multiple configuration changes are attempted on a single resource by multiple users in quick succession, only one configuration item may be created that would map to the end state configuration of the resource. In this and similar scenarios, it may not be possible to provide 100% correlation on which user made what configuration changes by querying CloudTrail and configuration items for a specific time-range and resource-id. /cloudtrail/faqs/;If I've used trails before, can I bring existing CloudTrail logs into my existing or new CloudTrail Lake event data store?;Yes. The CloudTrail Lake import capability supports copying CloudTrail logs from an S3 bucket that stores logs from across multiple accounts (from an organization trail) and multiple AWS Regions. You can also import logs from individual accounts and single-region trails. The import capability also lets you specify an import date range, so that you import only the subset of logs that are needed for long-term storage and analysis in CloudTrail Lake. After you've consolidated your logs, you can run queries on your logs, from the most recent events collected after you enabled CloudTrail Lake, to historic events brought over from your trails. /cloudtrail/faqs/;What CloudTrail events can I query after enabling the CloudTrail Lake feature?;You can enable CloudTrail Lake for any of the event categories collected by CloudTrail, depending on your internal troubleshooting needs. Event categories include management events that capture control plane activities such as CreateBucket and TerminateInstances, and data events that capture data plane activities such as GetObject and PutObject. You do not need a separate trail subscription for any of these events. You can choose your event retention duration for up to seven years, and you can query on that data anytime. /cloudtrail/faqs/;After I enable the CloudTrail Lake feature, how long do I need to wait to begin writing queries?;You can begin querying the activities that occur after enabling the feature almost immediately. /cloudtrail/faqs/;What are some of the common security and operational use cases that I can solve using CloudTrail Lake?;Common use cases include investigating security incidents, like unauthorized access or compromised user credentials, and enhancing your security posture by performing audits to regularly baseline user permissions. You can perform necessary audits to make sure the right set of users are making changes to your resources (such as security groups), and track any changes not adhering to your organization’s best practices. Additionally, you can track actions taken on your resources and assess modifications or deletions, and get deeper insights on your AWS services bills including the IAM users subscribing to services. /cloudtrail/faqs/;How do I get started?;If you are a current or new CloudTrail customer, you can immediately begin using the CloudTrail Lake capability to run queries by enabling the feature through the API or the CloudTrail console. 
Select the CloudTrail Lake tab on the left panel of the CloudTrail console, and select the Create Event Data Store button to choose the event retention duration (up to seven years). Then, make event selections from all event categories logged by CloudTrail (Management and Data events) to get started. /cloudtrail/faqs/;I have multiple AWS accounts. I would like log files for all the accounts to be delivered to a single S3 bucket. Can I do that?;Yes. You can configure one S3 bucket as the destination for multiple accounts. For detailed instructions, refer to aggregating log files to a single S3 bucket section of the CloudTrail user guide. /cloudtrail/faqs/;What is CloudTrail integration with CloudWatch Logs?;CloudTrail integration with CloudWatch Logs delivers management and data events captured by CloudTrail to a CloudWatch Logs log stream in the CloudWatch Logs log group you specify. /cloudtrail/faqs/;What are the benefits of CloudTrail integration with CloudWatch Logs?;This integration helps you receive SNnotifications of account activity captured by CloudTrail. For example, you can create CloudWatch alarms to monitor API calls that create, modify, and delete Security Groups and Network access control lists (ACLs). /cloudtrail/faqs/;How do I turn on CloudTrail integration with CloudWatch Logs?;You can turn on CloudTrail integration with CloudWatch Logs from the CloudTrail console by specifying a CloudWatch Logs log group and an IAM role. You can also use the AWS SDKs or the AWS CLI to turn on this integration. /cloudtrail/faqs/;What happens when I turn on CloudTrail integration with CloudWatch Logs?;After you turn on the integration, CloudTrail continually delivers account activity to a CloudWatch Logs log stream in the CloudWatch Logs log group you specified. CloudTrail also continues to deliver logs to your S3 bucket as before. /cloudtrail/faqs/;In which AWS Regions is CloudTrail integration with CloudWatch Logs supported?;This integration is supported in the Regions where CloudWatch Logs is supported. For more information, see Regions and endpoints in the AWS General Reference. /cloudtrail/faqs/;What charges do I incur once I turn on CloudTrail integration with CloudWatch Logs?;After you turn on CloudTrail integration with CloudWatch Logs, you incur standard CloudWatch Logs and CloudWatch charges. For details, go to the CloudWatch pricing page. /cloudtrail/faqs/;What is the benefit of CloudTrail log file encryption using server-side Encryption with AWS KMS?;CloudTrail log file encryption using SSE-KMS helps you add an additional layer of security to CloudTrail log files delivered to an S3 bucket by encrypting the log files with a KMS key. By default, CloudTrail will encrypt log files delivered to your S3 bucket using S3 server-side encryption. /cloudtrail/faqs/;I have an application that ingests and processes CloudTrail log files. Do I need to make any changes to my application?;With SSE-KMS, S3 will automatically decrypt the log files so that you do not need to make any changes to your application. As always, you must make sure that your application has appropriate permissions such as S3 GetObject and AWS KMS Decrypt permissions. /cloudtrail/faqs/;How do I configure CloudTrail log file encryption?;You can use the AWS Management Console, or AWS CLI or the AWS SDKs to configure log file encryption. For detailed instructions, refer to the documentation. 
/cloudtrail/faqs/;What charges do I incur once I configure encryption using SSE-KMS?;Once you configure encryption using SSE-KMS, you will incur standard AWS KMS charges. For details, go to the AWS KMS pricing page. /cloudtrail/faqs/;What is CloudTrail log file integrity validation?;The CloudTrail log file integrity validation feature helps you determine whether a CloudTrail log file was unchanged, deleted, or modified since CloudTrail delivered it to the specified S3 bucket. /cloudtrail/faqs/;What is the benefit of CloudTrail log file integrity validation?;You can use log file integrity validation as an aid in your IT security and auditing processes. /cloudtrail/faqs/;How do I enable CloudTrail log file integrity validation?;You can enable the CloudTrail log file integrity validation feature from the console, the AWS CLI, or the AWS SDKs. /cloudtrail/faqs/;What happens once I turn on the log file integrity validation feature?;Once you turn on the log file integrity validation feature, CloudTrail will deliver digest files on an hourly basis. The digest files contain information about the log files that were delivered to your S3 bucket and hash values for those log files. They also contain the digital signature of the previous digest file, and the digital signature for the current digest file is stored in the S3 metadata section. For more information about digest files, digital signatures, and hash values, go to the CloudTrail documentation. /cloudtrail/faqs/;Where are the digest files delivered to?;The digest files are delivered to the same S3 bucket where your log files are delivered. However, they are delivered to a different folder so that you can enforce granular access control policies. For details, refer to the digest file structure section of the CloudTrail documentation. /cloudtrail/faqs/;How can I validate the integrity of a log file or digest file delivered by CloudTrail?;You can use the AWS CLI to validate the integrity of a log file or digest file. You can also build your own tools to do the validation. For more details on using the AWS CLI for validating the integrity of a log file, refer to the CloudTrail documentation. /cloudtrail/faqs/;I aggregate all my log files across all Regions and multiple accounts into a single S3 bucket. Will the digest files be delivered to the same S3 bucket?;Yes. CloudTrail will deliver the digest files across all Regions and multiple accounts into the same S3 bucket. /cloudtrail/faqs/;What is the AWS CloudTrail Processing Library?;The AWS CloudTrail Processing Library is a Java library that makes it easier to build an application that reads and processes CloudTrail log files. You can download the CloudTrail Processing Library from GitHub. /cloudtrail/faqs/;What functionality does the CloudTrail Processing Library provide?;The CloudTrail Processing Library provides functionality to handle tasks such as continually polling an Amazon Simple Queue Service (SQS) queue and reading and parsing SQS messages. It can also download log files stored in S3, and parse and serialize log file events in a fault-tolerant manner. For more information, go to the user guide in the CloudTrail documentation. /cloudtrail/faqs/;What software do I need to start using the CloudTrail Processing Library?;You need aws-java-sdk version 1.9.3 and Java 1.7 or higher. /cloudtrail/faqs/;How do I get charged for CloudTrail?;CloudTrail helps you view, search, and download the last 90 days of your account’s management events for free.
You can deliver one copy of your ongoing management events to S3 for free by creating a trail. Once a CloudTrail trail is set up, S3 charges apply based on your usage. /cloudtrail/faqs/;If I have only one trail with management events, and apply it to all Regions, will I incur charges?;No. The first copy of management events is delivered free of charge in each Region. /cloudtrail/faqs/;If I enable data events on an existing trail with free management events, will I get charged?;Yes. You will be charged for only the data events. The first copy of management events is delivered free of charge. /cloudtrail/faqs/;How do the AWS Partner Solutions help me analyze the events recorded by CloudTrail?;Multiple partners offer integrated solutions to analyze CloudTrail log files. These solutions include features like change tracking, troubleshooting, and security analysis. For more information, see the CloudTrail partners section. /cloudtrail/faqs/;How can I onboard an integration to CloudTrail Lake as an available source?;To get started with your integration, you can review the Partner Onboarding Guide. Engage with your partner development team or partner solutions architect to connect you with the CloudTrail Lake team for a deeper dive or further questions. /cloudtrail/faqs/;Will turning on CloudTrail impact the performance of my AWS resources or increase API call latency?;No. Turning on CloudTrail has no impact on performance for your AWS resources or API call latency. /config/faqs/;What is AWS Config?;AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config, you can discover existing AWS resources, export a complete inventory of your AWS resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting. /config/faqs/;What is an AWS Config rule?;An AWS Config rule represents desired configurations for a resource and is evaluated against configuration changes on the relevant resources, as recorded by AWS Config. The results of evaluating a rule against the configuration of a resource are available on a dashboard. Using AWS Config rules, you can assess your overall compliance and risk status from a configuration perspective, view compliance trends over time, and pinpoint which configuration change caused a resource to drift out of compliance with a rule. /config/faqs/;What is a conformance pack?;A conformance pack is a collection of AWS Config rules and remediation actions that is built using a common framework and packaging model on AWS Config. By packaging the preceding AWS Config artifacts, you can simplify the deployment and reporting aspects of governance policies and configuration compliance across multiple accounts and Regions and reduce the time that a resource is kept in a non-compliant state. /config/faqs/;What are the benefits of AWS Config?;AWS Config makes it easier to track your resource’s configuration without the need for upfront investments and without the complexity of installing and updating agents for data collection or maintaining large databases. Once you enable AWS Config, you can view continuously updated details of all configuration attributes associated with AWS resources. You are notified through Amazon Simple Notification Service (SNS) of every configuration change. 
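Creating that first trail for management events can also be done programmatically. Below is a minimal Python (boto3) sketch; the trail and bucket names are placeholders, and the bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it.

# Minimal boto3 sketch of creating a trail that delivers management events to S3.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="example-management-events-trail",
    S3BucketName="example-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,          # apply the trail to all Regions
    EnableLogFileValidation=True,     # produce hourly digest files for integrity validation
)

# A trail does not record events until logging is started.
cloudtrail.start_logging(Name="example-management-events-trail")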
/config/faqs/;How can AWS Config help with audits?;AWS Config gives you access to resource configuration history. You can relate configuration changes with AWS CloudTrail events that possibly contributed to the change in configuration. This information gives you full visibility, from details such as “Who made the change?” and “From what IP address?” to the effect of this change on AWS resources and related resources. You can use this information to generate reports to aid auditing and assess compliance over a period of time. /config/faqs/;Who should use AWS Config and AWS Config rules?;Any AWS customer looking to improve their security and governance posture on AWS by continuously evaluating the configuration of their resources would benefit from this capability. Administrators within larger organizations who recommend best practices for configuring resources can codify these rules as AWS Config rules, and instill self-governance among users. Information security experts who monitor usage activity and configurations to detect vulnerabilities can benefit from AWS Config rules. If you have a workload that must comply with specific standards (e.g., PCI-DSS or HIPAA), you can use this capability to assess the compliance of your AWS infrastructure configurations and generate reports for your auditors. Operators who manage large AWS infrastructure or components that change frequently can also benefit from AWS Config rules for troubleshooting. If you want to track changes to resource configurations, answer questions about resource configurations, demonstrate compliance, troubleshoot, or perform security analysis, you should turn on AWS Config. /config/faqs/;Who should use AWS Config conformance packs?;If you are looking for a framework to build and deploy compliance packages for your AWS resource configurations across several accounts, then you should use conformance packs. This framework can be used to build customized packs for security, DevOps, and other personas, and you can quickly get started using one of the sample conformance pack templates. /config/faqs/;Does the service prevent users from taking non-compliant actions?;AWS Config rules do not directly affect how end users consume AWS. AWS Config rules evaluate resource configurations only after a configuration change has been completed and recorded by AWS Config. AWS Config rules do not prevent the user from making changes that could be non-compliant. To control what you can provision on AWS and configuration parameters used during provisioning, use AWS Identity and Access Management (IAM) Policies and AWS Service Catalog respectively. /config/faqs/;Can rules be evaluated before provisioning a resource?;Yes, AWS Config rules can be set to proactive only, detective only, or both proactive and detective modes. For a full list of these rules, see documentation. /config/faqs/;How does AWS Config work with AWS CloudTrail?;AWS CloudTrail records user API activity on your account and helps you access information about this activity. You get full details about API actions, such as identity of the caller, the time of the API call, the request parameters, and the response elements returned by the AWS service. AWS Config records point-in-time configuration details for your AWS resources as Configuration Items (CIs). You can use a CI to answer, “What did my AWS resource look like?” at a point in time. 
You can use CloudTrail to answer “Who made an API call to modify this resource?” For example, you can use the AWS Management Console for AWS Config to detect that security group “Production-DB” was incorrectly configured in the past. Using the integrated CloudTrail information, you can pinpoint which user misconfigured the “Production-DB” security group. /config/faqs/;Can I connect my ServiceNow and Jira Service Desk instances to AWS Config?;Yes. The AWS Service Management Connector for ServiceNow and Jira Service Desk helps ServiceNow and Jira Service Desk end users to provision, manage, and operate AWS resources natively using ServiceNow and Jira Service Desk. ServiceNow users can track resources in a configuration item view, powered by AWS Config, seamlessly on ServiceNow with the AWS Service Management Connector. Jira Service Desk users can track resources within the issue request, with the AWS Service Management Connector. This simplifies AWS product request actions for ServiceNow and Jira Service Desk users and provides ServiceNow and Jira Service Desk administrators governance and oversight over AWS products. /config/faqs/;How do I get started with this service?;The quickest way to get started with AWS Config is to use the AWS Management Console. You can turn on AWS Config with a few selections. For additional details, see the Getting Started documentation. /config/faqs/;How do I access my resources’ configuration?;You can look up current and historical resource configuration using the AWS Management Console, AWS Command Line Interface, or SDKs. /config/faqs/;Do I turn on AWS Config regionally or globally?;You turn on AWS Config on a per-Region basis for your account. /config/faqs/;Can AWS Config aggregate data across different AWS accounts?;Yes, you can set up AWS Config to deliver configuration updates from different accounts to one Amazon Simple Storage Service (S3) bucket, once the appropriate IAM policies are applied to the Amazon S3 bucket. You can also publish notifications to one SNS topic within the same Region, once appropriate IAM policies are applied to the SNS topic. /config/faqs/;Is API activity on AWS Config itself logged by CloudTrail?;Yes. All AWS Config API activity, including use of AWS Config API operations to read configuration data, is logged by CloudTrail. /config/faqs/;What time and time zones are displayed in the timeline view of a resource? What about daylight savings?;AWS Config displays the time at which Configuration Items (CIs) were recorded for a resource on a timeline. All times are captured in Coordinated Universal Time (UTC). When the timeline is visualized on the management console, the service uses the current time zone (adjusted for daylight savings, if relevant) to display all times in the timeline view. /config/faqs/;What is a resource’s configuration?;Configuration of a resource is defined by the data included in the Configuration Item (CI) of AWS Config. The initial release of AWS Config rules makes the CI for a resource available to relevant rules. AWS Config rules can use this information along with any other relevant information such as other attached resources and business hours to evaluate compliance of a resource’s configuration. 
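The "turn on AWS Config" and cross-account delivery steps above can also be performed through the AWS Config APIs. The following is a minimal Python (boto3) sketch under stated assumptions: the bucket, SNS topic, and IAM role are hypothetical placeholders and must already exist with policies that allow AWS Config to use them.

# Minimal boto3 sketch of turning on AWS Config in one Region.
import boto3

config = boto3.client("config")

# Record all supported resource types, including global resources such as IAM.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::111122223333:role/aws-config-role",
        "recordingGroup": {
            "allSupported": True,
            "includeGlobalResourceTypes": True,
        },
    }
)

# Deliver configuration history/snapshots to S3 and change notifications to SNS.
# Pointing multiple accounts at the same bucket and topic (with the appropriate
# policies) is how the cross-account delivery described above is achieved.
config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "example-config-bucket",
        "snsTopicARN": "arn:aws:sns:us-east-1:111122223333:example-config-topic",
    }
)

config.start_configuration_recorder(ConfigurationRecorderName="default")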
/config/faqs/;What is a rule?;A rule represents desired Configuration Item (CI) attribute values for resources and is evaluated by comparing those attribute values with CIs recorded by AWS Config. There are two types of rules: /config/faqs/;How are rules created?;Rules are typically set up by the AWS account administrator. They can be created by leveraging AWS-managed rules (a predefined set of rules provided by AWS) or through customer managed rules. With AWS-managed rules, updates to the rule are automatically applied to any account using that rule. In the customer-managed model, customers have a full copy of the rule and apply the rule within their own account. These rules are maintained by the customers. /config/faqs/;How many rules can I create?;You can create up to 150 rules in your AWS account by default. Additionally, you can request an increase for the limit on the number of rules in your account by visiting the AWS Service Limits page. /config/faqs/;What is an evaluation?;Evaluation of a rule determines whether a resource is compliant with a rule at a particular point in time. It is the result of evaluating a rule against the configuration of a resource. AWS Config rules will capture and store the result of each evaluation. This result will include the resource, rule, time of evaluation, and a link to the Configuration Item (CI) that caused non-compliance. /config/faqs/;What information does the AWS Config rules dashboard provide?;The AWS Config rules dashboard gives you an overview of resources tracked by AWS Config, and a summary of current compliance by resource and by rule. When you view compliance by resource, you can determine if any rule that applies to the resource is currently not compliant. You can view compliance by rule, which tells you if any resource under the purview of the rule is currently non-compliant. Using these summary views, you can further explore the AWS Config timeline view of resources to determine which configuration parameters changed. Using this dashboard, you can start with an overview and drill into fine-grained views that give you full information about changes in compliance status, and which changes caused non-compliance. /config/faqs/;When should I use AWS Config rules versus conformance packs?;You can use individual AWS Config rules to evaluate resource configuration compliance in one or more accounts. Conformance packs provide the additional benefit of packaging rules along with remediation actions into a single entity that can be deployed across an entire organization with a single selection. Conformance packs are intended to simplify compliance management and reporting at scale when you are managing several accounts. Conformance packs are designed to provide aggregated compliance reporting at the pack level, as well as immutability. 
This helps ensure that the managed AWS Config rules and remediation documents within the conformance pack are not modified or deleted by the individual member accounts of an organization. /config/faqs/;How are AWS Config and AWS Config rules related to AWS Security Hub?;AWS Security Hub is a security and compliance service that provides security and compliance posture management as a service. It uses AWS Config and AWS Config rules as its primary mechanism to evaluate the configuration of AWS resources. AWS Config rules can also be used to evaluate resource configuration directly. AWS Config rules are also used by other AWS services, such as AWS Control Tower and AWS Firewall Manager. /config/faqs/;When do I use AWS Security Hub and AWS Config conformance packs?;If a compliance standard, such as PCI DSS, is already present in Security Hub, then the fully managed Security Hub service is an easier way to operationalize it. You can investigate findings through Security Hub’s integration with Amazon Detective, and you can build automated or semiautomated remediation actions using the Security Hub integration with Amazon EventBridge. However, if you want to assemble your own compliance or security standard, which can include security, operational, or cost optimization checks, AWS Config conformance packs are the way to go. AWS Config conformance packs simplify management of AWS Config rules by packaging a group of AWS Config rules and associated remediation actions into a single entity. This packaging simplifies deployment of rules and remediation actions across an organization. It also enables aggregated reporting, as compliance summaries can be reported at the pack level. You can start with the AWS Config conformance pack samples we provide and customize as you see fit. /config/faqs/;Do both Security Hub and AWS Config conformance packs support continuous compliance monitoring?;Yes, both Security Hub and AWS Config conformance packs support continuous compliance monitoring, given their reliance on AWS Config and AWS Config rules. The underlying AWS Config rules can be triggered either periodically or upon detecting changes to the configuration of resources. This helps you continuously audit and assess the overall compliance of your AWS resource configurations with your organization’s policies and guidelines. /config/faqs/;How do I get started with conformance packs?;The quickest way to get started is by creating a conformance pack using one of our sample templates through the CLI or the AWS Config console. Some of the sample templates include S3 Operational Best Practices, Amazon DynamoDB Operational Best Practices, and Operational Best Practices for PCI. These templates are written in YAML. You can download these templates from our documentation site and modify them to suit your environment using your favorite text editor. You can even add custom AWS Config rules that you may have previously written into the pack. /config/faqs/;Is there any cost associated with using this feature on AWS Config?;Conformance packs will be charged using a tiered pricing model. For more details, visit the AWS Config Pricing page. /config/faqs/;What is multi-account, multi-Region data aggregation?;Data aggregation on AWS Config helps you aggregate AWS Config data from multiple accounts and Regions into a single account and a single Region. Multi-account data aggregation is useful for central IT administrators to monitor compliance for multiple AWS accounts in the enterprise. 
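The rule, conformance pack, and aggregator concepts above can all be exercised through the AWS Config API. The following Python (boto3) sketch is illustrative only: the rule name, the S3 bucket versioning check (one of the AWS-managed rule identifiers), the tiny inline pack template, and the account ID are placeholder assumptions, not recommendations from this FAQ.

# Minimal boto3 sketch: an AWS-managed Config rule, a small conformance pack,
# and a multi-account aggregator. All names and IDs are hypothetical.
import boto3

config = boto3.client("config")

# 1) Enable an AWS-managed rule (Owner='AWS'); no Lambda function is needed.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-versioning-enabled",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

# 2) Deploy a conformance pack from an inline YAML template (templates can also
#    be loaded from S3 with TemplateS3Uri).
pack_template = """
Resources:
  S3BucketVersioningEnabled:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: cp-s3-bucket-versioning-enabled
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_VERSIONING_ENABLED
"""
config.put_conformance_pack(
    ConformancePackName="example-operational-best-practices",
    TemplateBody=pack_template,
)

# 3) Aggregate Config data from other accounts and Regions into this account
#    (reporting only; an aggregator does not provision rules).
config.put_configuration_aggregator(
    ConfigurationAggregatorName="example-aggregator",
    AccountAggregationSources=[
        {"AccountIds": ["444455556666"], "AllAwsRegions": True}
    ],
)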
/config/faqs/;Can I use the data aggregation capability to centrally provision Config rules across multiple accounts?;The data aggregation capability cannot be used for provisioning rules across multiple accounts. It is purely a reporting capability that provides visibility into your compliance. You can use AWS CloudFormation StackSets to provision rules across accounts and Regions. Learn more in this blog link. /config/faqs/;How do I enable data aggregation in my account?;Once AWS Config and AWS Config rules are enabled in your account and in the accounts being aggregated, you can enable data aggregation by creating an aggregator in your account. Learn more. /config/faqs/;What is an aggregator?;An aggregator is an AWS Config resource type that collects AWS Config data from multiple accounts and Regions. Use an aggregator to view the resource configuration and compliance data recorded on AWS Config for multiple accounts and Regions. /config/faqs/;What information does the aggregated view provide?;The aggregated view displays the total count of non-compliant rules across the organization, the top five non-compliant rules by number of resources, and the top five AWS accounts that have the highest number of non-compliant rules. You can then drill down to view more details about the resources that are violating the rule and the list of rules that are being violated by an account. /config/faqs/;I am not an AWS Organizations customer. Can I still use the data aggregation capability?;You can specify the accounts to aggregate the AWS Config data from by uploading a file or by individually entering accounts. Note that since these accounts are not part of any AWS organization, you will need each account to explicitly authorize the aggregator account. Learn more. /config/faqs/;What if I have an account that includes a Region not supported by this feature?;When you create an aggregator, you specify the Regions from which you want to aggregate data. This list shows only Regions where this feature is available. You can also select “all Regions,” in which case the aggregator will automatically aggregate data from additional Regions as soon as support is added. /config/faqs/;What AWS resource types are covered by AWS Config?;Review our documentation for a complete list of supported resource types. /config/faqs/;What Regions is AWS Config available in?;For details on the Regions where AWS Config is available, visit this page: /config/faqs/;What is a custom configuration item?;A custom Configuration Item (CI) is the CI for a third-party or custom resource. Examples include on-premises databases, Active Directory servers, version control systems like GitHub, and third-party monitoring tools such as Datadog. /config/faqs/;What are AWS Config relationships and how are they used?;AWS Config takes the relationships among resources into account when recording changes. For example, if a new EC2 Security Group is associated with an EC2 Instance, AWS Config records the updated configurations of both the primary resource, the EC2 Security Group, and related resources, if these resources changed. /config/faqs/;Does AWS Config record every state that a resource has been in?;AWS Config detects changes to a resource's configuration and records the configuration state that resulted from that change. In cases where several configuration changes are made to a resource in quick succession, AWS Config will record only the latest configuration of that resource that represents the cumulative impact of the set of changes. 
In these situations, AWS Config will list only the latest change in the relatedEvents field of the Configuration Item. This helps users and programs to continue to change infrastructure configurations without having to wait for AWS Config to record intermediate transient states. /config/faqs/;Does AWS Config record configuration changes that did not result from API activity on that resource?;Yes, AWS Config will regularly scan the configuration of resources for changes that haven’t yet been recorded and record these changes. CIs recorded from these scans will not have a relatedEvents field in the message, and only the latest state that is different from the state already recorded is captured. /config/faqs/;Does AWS Config record configuration changes to software within EC2 instances?;Yes. AWS Config helps you record configuration changes to software within EC2 instances in your AWS account and also to virtual machines (VMs) or servers in your on-premises environment. The configuration information recorded by AWS Config includes Operating System updates, network configuration, and installed applications. You can evaluate whether your instances, VMs, and servers are in compliance with your guidelines using AWS Config rules. The deep visibility and continuous monitoring capabilities provided by AWS Config help you assess compliance and troubleshoot operational issues. /config/faqs/;Does AWS Config continue to send notifications if a resource that was previously non-compliant is still non-compliant after a periodic rule evaluation?;AWS Config sends notifications only when the compliance status changes. If a resource was previously non-compliant and is still non-compliant, AWS Config will not send a new notification. If the compliance status changes to "compliant," you will receive a notification for the change in status. /config/faqs/;Can I flag or exempt resources from being evaluated by AWS Config rules?;When you configure AWS Config rules, you can specify whether your rule runs evaluations against specified resource types or resources with a specific tag. /config/faqs/;How will I be charged for AWS Config?;With AWS Config, you are charged based on the number of configuration items recorded, the number of active AWS Config rule evaluations, and the number of conformance pack evaluations in your account. A configuration item is a record of the configuration state of a resource in your AWS account. An AWS Config rule evaluation is a compliance state evaluation of a resource by an AWS Config rule in your AWS account. A conformance pack evaluation is the evaluation of a resource by an AWS Config rule within the conformance pack. For more detail and examples, visit https://aws.amazon.com/config/pricing/. /config/faqs/;Does the pricing for AWS Config rules include the costs for Lambda functions?;You can choose from a set of managed rules provided by AWS or you can author your own rules, written as Lambda functions. Managed rules are fully maintained by AWS and you do not pay any additional Lambda charges to run them. You can enable managed rules, provide any required parameters, and pay a single rate for each active AWS Config rule in a given month. Custom rules give you full control as they are applied as Lambda functions in your account. In addition to monthly charges for an active rule, standard Lambda free tier* and function application rates apply to custom AWS Config rules. /config/faqs/;I want to change the Lambda function for my custom AWS Config rule. 
What is the recommended approach?;Charges are incurred whenever a new rule is created and it becomes active. If you must update or replace the Lambda function associated with a rule, the recommended approach is to update the rule instead of deleting it and creating a new rule. /config/faqs/;What AWS Partner solutions are available for AWS Config?;APN Partner solutions such as Splunk, ServiceNow, Evident.io, CloudCheckr, Redseal, and Red Hat CloudForms provide offerings that are fully integrated with data from AWS Config. Managed service providers, such as 2nd Watch and Cloudnexa have also announced integrations with AWS Config. Additionally, with AWS Config Rules, partners such as CloudHealth Technologies, Alert Logic, and Trend Micro are providing integrated offerings that can be used. These solutions include capabilities such as change management and security analysis and help you visualize, monitor, and manage AWS resource configurations. /opsworks/chefautomate/faqs/;What is AWS OpsWorks for Chef Automate?;AWS OpsWorks for Chef Automate provides a fully managed Chef server and suite of automation tools that give you workflow automation for continuous deployment, automated testing for compliance and security, and a user interface that gives you visibility into your nodes and their status. The Chef server gives you full stack automation by handling operational tasks such as software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes. OpsWorks for Chef Automate is completely compatible with tooling and cookbooks from the Chef community and automatically registers new nodes with your Chef server. /opsworks/chefautomate/faqs/;How is OpsWorks for Chef Automate different from OpsWorks Stacks?;OpsWorks for Chef Automate is a configuration management service that helps you instantly provision a Chef server and lets the service operate it, including performing backups and software upgrades. The service offers full compatibility with Chef’s Supermarket cookbooks and recipes. It supports native Chef tools such as TestKitchen and Knife. The OpsWorks Stacks service helps you model, provision, and manage your applications on AWS using the embedded Chef solo client that is installed on Amazon EC2 instances on your behalf. To learn more, see OpsWorks Stacks. /opsworks/chefautomate/faqs/;Who should use OpsWorks for Chef Automate?;Customers who are looking for a configuration management experience that is fully compatible with Chef, including all community scripts and tooling, but without operational overhead should adopt OpsWorks for Chef Automate. /opsworks/chefautomate/faqs/;How can I access OpsWorks for Chef Automate?;The OpsWorks for Chef Automate service is available through the AWS Management Console, AWS SDKs, and the AWS Command Line Interface (CLI). After the Chef server has been set up, it can also be managed by Chef-compatible tools such as Knife. /opsworks/chefautomate/faqs/;In which regions is OpsWorks for Chef Automate available?;See Regional Products and Services for details. /opsworks/chefautomate/faqs/;Are there any limits to OpsWorks for Chef Automate?;The default service limits are: /opsworks/chefautomate/faqs/;What network requirements must my servers meet to work with OpsWorks for Chef Automate?;Your servers must be able to connect to AWS public endpoints. 
See the documentation for details. /opsworks/chefautomate/faqs/;What is Chef and how does OpsWorks for Chef Automate use it?;Chef Automate is a software bundle by Chef Software, Inc. that automates how applications are configured, deployed, and managed through the use of code. OpsWorks for Chef Automate uses Chef recipes to deploy and configure software components on Amazon EC2 instances and on-premises servers. Chef has a rich ecosystem with hundreds of cookbooks that can be used in AWS, such as cookbooks for managing PostgreSQL, Nginx, Solr, and many more. /opsworks/chefautomate/faqs/;What is Chef Automate?;Chef Automate gives you a full-stack, continuous deployment pipeline, automated testing for compliance and security, and visibility into everything that's happening along the way. It builds on Chef for infrastructure automation, InSpec for compliance automation, and Habitat for application automation. You can transform your company into a highly collaborative, software-driven organization with Chef Automate as the engine. To learn more, see the Chef Automate product details page. /opsworks/chefautomate/faqs/;How do I use the Chef Automate console?;Chef Automate includes its own console. The Chef Automate Console can be accessed through the OpsWorks link on the AWS Management Console. After you click the link, you will be prompted for the credentials that you were assigned when you set up the Chef Automate server. /opsworks/chefautomate/faqs/;I am an AWS OpsWorks Stacks customer. Should I migrate to OpsWorks for Chef Automate?;OpsWorks Stacks customers who are looking for full Chef server compatibility are encouraged to use OpsWorks for Chef Automate. To learn more about OpsWorks Stacks, see the OpsWorks Stacks product details page. /opsworks/chefautomate/faqs/;How can I migrate from OpsWorks Stacks to OpsWorks for Chef Automate?;Before you migrate, you first have to adapt your OpsWorks cookbooks to work on a Chef server. Some may work without alterations, however. If you are using OpsWorks instance scaling (either time-based or load-based), you’ll need to use an EC2 Auto Scaling group and OpsWorks for Chef Automate’s node registration feature instead. You will later be able to work with your Chef server and nodes by using Chef’s Visibility console or Knife. /opsworks/chefautomate/faqs/;Which versions of Chef are supported?;The OpsWorks for Chef Automate service will regularly upgrade your Chef server to the latest recommended version. Please see our documentation for the latest supported version. We recommend running the most current, stable chef-client version on nodes associated with an AWS OpsWorks for Chef Automate server. /opsworks/chefautomate/faqs/;Which cloud resources power my AWS OpsWorks for Chef Automate server?;AWS OpsWorks for Chef Automate uses proven AWS features and services, such as Amazon EC2, Amazon EBS, Amazon S3, and Amazon CloudWatch to create the components that make up your managed Chef server. OpsWorks for Chef Automate uses the Amazon Linux Amazon Machine Image (AMI). /opsworks/chefautomate/faqs/;How can I back up my Chef server?;You can define a daily or weekly recurring Chef server backup. The service stores the backups in Amazon S3 on your behalf. Alternatively, you can choose to create manual backups on demand. /opsworks/chefautomate/faqs/;How many backups can I keep for every Chef server?;Backups are stored in Amazon S3 and incur additional fees. You can define a backup retention period of up to 30 generations. 
You can submit a service request to change that limit by using AWS Support channels. /opsworks/chefautomate/faqs/;How can I restore my Chef server to an earlier point in time?;After browsing through your available backups, you can choose a point in time from which to restore your Chef server. Server backups contain only Chef software persistent data such as cookbooks and registered nodes. /opsworks/chefautomate/faqs/;Which resources can I connect to my Chef server?;You can connect any EC2 instance or on-premises server that is running a supported operating system and has Internet access to an OpsWorks for Chef Automate server. You are charged an hourly fee for every connected resource. /opsworks/chefautomate/faqs/;How do I register nodes with the Chef server?;You’ll get user-data code snippets through the console. You can put these code snippets in an EC2 Auto Scaling group. These code snippets ensure that your instances are registered to your Chef server as Chef nodes, and that they run the corresponding Chef recipes. On-premises servers require that you install the Chef client agent software and register the server with your Chef server. /opsworks/chefautomate/faqs/;How can I obtain Chef related training?;You can choose your preferred Chef Automate training method from Chef’s website. /opsworks/chefautomate/faqs/;How can I keep the underlying Chef server running and up-to-date?;Your managed configuration management server is updated to the latest version of Chef Automate during the maintenance window that you configure. OpsWorks for Chef Automate also regularly runs security updates and operating system package updates for you. /opsworks/chefautomate/faqs/;What is an OpsWorks for Chef Automate maintenance window?;A maintenance window is a daily or weekly one-hour time slot during which OpsWorks for Chef Automate initiates Chef version updates without breaking changes, security updates, and operating system package updates. For example, if you select a maintenance window that begins every Sunday at 2:00 A.M., OpsWorks for Chef Automate initiates the platform update between 2:00 and 3:00 A.M. every Sunday. /opsworks/chefautomate/faqs/;How do I set up a maintenance window?;The maintenance window is enabled by default and can be set during the Chef server setup phase. You can change settings later by using the AWS Management Console, CLI, or APIs. /opsworks/chefautomate/faqs/;What kinds of version updates will be performed by OpsWorks for Chef Automate?;OpsWorks for Chef Automate performs version updates automatically as long as the updates include backward-compatible changes. When new versions of Chef software become available, system maintenance is designed to update the version of Chef Automate and Chef Server on the server automatically, as soon as the version update passes AWS testing. AWS performs extensive testing to verify that Chef upgrades are production-ready and do not disrupt existing customer environments, so there can be lags between Chef software releases and their availability for application to existing OpsWorks for Chef Automate servers. /opsworks/chefautomate/faqs/;When and how can I perform major version updates?;You can perform major version updates at any time by using the AWS OpsWorks for Chef Automate console, API, or CLI. /opsworks/chefautomate/faqs/;How does AWS OpsWorks for Chef Automate apply updates?;The updates are applied directly to the managed EC2 instance on which the Chef server is running. 
If the OpsWorks for Chef Automate health system detects any issues during the update, OpsWorks for Chef Automate will roll back changes and try again during the next maintenance window. /opsworks/chefautomate/faqs/;Will my Chef server be available during the maintenance window?;Your Chef server is not available when maintenance updates are being applied. Your connected nodes enter a pending-server state until maintenance is complete. The connected nodes will continue to operate normally. /opsworks/chefautomate/faqs/;How will I be notified of the availability of new OpsWorks for Chef Automate versions?;You are notified about new Chef versions through the OpsWorks for Chef Automate console. The service console informs you if your Chef server was updated during the maintenance window. /opsworks/chefautomate/faqs/;Where can I find details about changes between platform versions?;Details about changes between Chef Automate versions are on the Chef Automate Release Notes page. /opsworks/chefautomate/faqs/;How often are platform version updates released?;The number of version releases each year varies based on the frequency of Chef Automate patch releases from Chef and acceptance testing performed by AWS. /opsworks/chefautomate/faqs/;How do I get started with OpsWorks for Chef Automate?;The best way to get started with OpsWorks for Chef Automate is to review the AWS OpsWorks for Chef Automate Getting Started chapter of the technical documentation. /opsworks/chefautomate/faqs/;How do I create Chef cookbooks and recipes?;The easiest way to get started is to use existing Chef recipes. Many public repositories contain Chef cookbooks with recipes that can run with little to no modification. The OpsWorks for Chef Automate Starter Kit also includes an example Chef recipe and describes how it works. /opsworks/chefautomate/faqs/;Can I use community cookbooks from the Chef Supermarket?;Yes. OpsWorks for Chef Automate provides a configuration management experience that is fully compatible with Chef Automate. You can use community-authored cookbooks with no AWS-specific modifications. /opsworks/chefautomate/faqs/;How do I upgrade my Chef nodes to a newer release version?;Chef node upgrades can be done at your convenience by using the Chef omnibus recipe. Although OpsWorks regularly performs Chef server version upgrades on your behalf, your Chef nodes continue to operate even if they remain on the earlier version. /opsworks/chefautomate/faqs/;Does my OpsWorks for Chef Automate server support community tools like Knife and Test Kitchen?;Yes. OpsWorks for Chef Automate provides a configuration management experience that is fully compatible with Chef Automate. You can use the same ecosystem of tools as an on-premises Chef Automate server. /opsworks/chefautomate/faqs/;Is there a sample cookbook that I can use to check out OpsWorks for Chef Automate?;Yes. The OpsWorks for Chef Automate Starter Kit includes a sample cookbook that you can use to test drive the offering and explore its functionality. /opsworks/chefautomate/faqs/;Is it possible to use AWS Identity and Access Management (IAM) with OpsWorks for Chef Automate?;Yes. IAM users with the appropriate permissions can work with AWS OpsWorks for Chef Automate. The Chef users are not managed by IAM and must be provisioned from within Chef Automate. /opsworks/chefautomate/faqs/;How do I create IAM users?;You can use the IAM console, IAM command line interface (CLI), or IAM API to provision IAM users. 
By default, IAM users have no access to AWS services until permissions are granted. /opsworks/chefautomate/faqs/;Do I have root access to my OpsWorks for Chef Automate server EC2 instance?;Yes. You can provide an SSH key pair to enable root access to the OpsWorks for Chef Automate server EC2 instance. OpsWorks for Chef Automate provides you with tooling to perform common operational tasks, and so we recommend that you disable SSH access. /opsworks/chefautomate/faqs/;Where can I find more information about security and running applications on AWS?;See Amazon Web Services: Overview of Security Processes and the AWS Security Center. /opsworks/chefautomate/faqs/;Can I get a history of OpsWorks for Chef Automate API calls made on my account for security analysis and troubleshooting purposes?;Yes. To get a history of OpsWorks for Chef Automate API calls made on your account, you simply turn on AWS CloudTrail in the AWS Management Console. /opsworks/chefautomate/faqs/;How much do the AWS resources powering my application on OpsWorks for Chef Automate server cost?;The OpsWorks for Chef Automate server is configured on your behalf and powered by Amazon EC2, Amazon EBS, Amazon S3, and Amazon CloudWatch. For EC2 pricing information, see the EC2 pricing page. For S3 pricing information, see the S3 pricing page. For CloudWatch pricing information, see the CloudWatch pricing page. There are three EC2 instance types to choose from for running the Chef server: m4.large, r4.xlarge, and r4.2xlarge. The hourly rate depends on the instance type used. /opsworks/chefautomate/faqs/;Am I billed for EC2 instances and on-premises servers that are connected to my OpsWorks for Chef Automate server?;You pay an hourly fee for each EC2 instance and on-premises server that is connected to an AWS OpsWorks for Chef Automate server. There are no minimum fees and no upfront commitments. For more information, see our pricing page. /opsworks/chefautomate/faqs/;How do I view the cost of AWS resources that have been used by my OpsWorks for Chef Automate server?;OpsWorks for Chef Automate automatically tags all Chef server resources with the name of your Chef server. You can use these tags with Cost Allocation Reports to organize and track your AWS costs. See AWS Account Billing for details. /opsworks/chefautomate/faqs/;Does AWS Support cover OpsWorks for Chef Automate?;Yes. AWS Support covers issues related to your use of OpsWorks for Chef Automate. See the Compare AWS Support Plans page for details. /opsworks/chefautomate/faqs/;What other support options are available?;You can tap into the breadth of existing AWS community knowledge to help you with your development by using the AWS OpsWorks discussion forum. See the AWS OpsWorks Forums page for details. /servicecatalog/faqs/;What is AWS Service Catalog?;AWS Service Catalog allows IT administrators to create, manage, and distribute catalogs of approved products to end users, who can then access the products they need in a personalized portal. Administrators can control which users have access to each product to enforce compliance with organizational business policies. Administrators can also set up adopted roles so that end users only require IAM access to AWS Service Catalog in order to deploy approved resources. AWS Service Catalog allows your organization to benefit from increased agility and reduced costs because end users can find and launch only the products they need from a catalog that you control. 
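The administrator workflow just described (create a portfolio, then grant specific users access to it) can also be driven through the AWS Service Catalog API. Below is a minimal Python (boto3) sketch; the portfolio name, provider, and role ARN are hypothetical placeholders.

# Minimal boto3 sketch: create a portfolio and grant an IAM principal access to it.
import boto3

sc = boto3.client("servicecatalog")

portfolio = sc.create_portfolio(
    DisplayName="example-dev-team-portfolio",
    Description="Approved products for the development team",
    ProviderName="Central IT",
)
portfolio_id = portfolio["PortfolioDetail"]["Id"]

# Publish the portfolio to end users by associating an IAM principal (user,
# group, or role ARN); those users can then browse and launch its products.
sc.associate_principal_with_portfolio(
    PortfolioId=portfolio_id,
    PrincipalARN="arn:aws:iam::111122223333:role/DevTeamEndUsers",
    PrincipalType="IAM",
)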
/servicecatalog/faqs/;Who should use AWS Service Catalog?;AWS Service Catalog was developed for organizations, IT teams, and managed service providers (MSPs) that need to centralize policies. It allows IT administrators to vend and manage AWS resources and services. For large organizations, it provides a standard method of provisioning cloud resources for thousands of users. It is also suitable for small teams, where front-line development managers can provide and maintain a standard dev/test environment. /servicecatalog/faqs/;How do I get started with AWS Service Catalog?;In the AWS Management Console, choose AWS Service Catalog in Management Tools. In the AWS Service Catalog console, administrators can create portfolios, add products, and grant users permissions to use them with just a few clicks. End users logged into the AWS Service Catalog console can see and launch the products that administrators have created for them. /servicecatalog/faqs/;What can end users do with AWS Service Catalog that they could not do before?;End users have a simple portal in which to discover and launch products that comply with organizational policies and budget constraints. /servicecatalog/faqs/;What is a portfolio?;A portfolio is a collection of products, with configuration information that determines who can use those products and how they can use them. Administrators can create a customized portfolio for each type of user in an organization and selectively grant access to the appropriate portfolio. When an administrator adds a new version of a product to a portfolio, that version is automatically available to all current portfolio users. The same product can be included in multiple portfolios. Administrators also can share portfolios with other AWS accounts and allow the administrators of those accounts to extend the portfolios by applying additional constraints. By using portfolios, permissions, sharing, and constraints, administrators can ensure that users are launching products that are configured properly for the organization’s needs. /servicecatalog/faqs/;Is AWS Service Catalog a regionalized service?;Yes. AWS Service Catalog is fully regionalized, so you can control the Regions in which data is stored. Portfolios and products are regional constructs that need to be created per Region and are only visible and usable in the Regions in which they were created. /servicecatalog/faqs/;In which Regions is AWS Service Catalog available?;For a full list of supported AWS Regions, see the AWS Region Table. /servicecatalog/faqs/;Are APIs available? Can I use the CLI to access AWS Service Catalog?;Yes. APIs are available and can also be accessed through the CLI. Actions from the management of Service Catalog artifacts through to provisioning and terminating are available. You can find more information in the AWS Service Catalog documentation or download the latest AWS SDK or CLI. /servicecatalog/faqs/;Can I privately access AWS Service Catalog APIs from my Amazon Virtual Private Cloud (VPC) without using public IPs?;Yes, you can privately access AWS Service Catalog APIs from your Amazon Virtual Private Cloud (VPC) by creating VPC Endpoints. With VPC Endpoints, the routing between the VPC and AWS Service Catalog is handled by the AWS network without the need for an Internet gateway, NAT gateway, or VPN connection. 
The latest generation of VPC Endpoints used by AWS Service Catalog is powered by AWS PrivateLink, an AWS technology that enables private connectivity between AWS services using Elastic Network Interfaces (ENIs) with private IPs in your VPCs. To learn more about AWS PrivateLink, visit the AWS PrivateLink documentation. /servicecatalog/faqs/;Does AWS Service Catalog offer a Service Level Agreement (SLA)?;Yes. The AWS Service Catalog SLA provides for a service credit if a customer's monthly uptime percentage is below our service commitment in any billing cycle. /servicecatalog/faqs/;How do I create a portfolio?;You create portfolios in the AWS Service Catalog console. For each portfolio, you specify a name, a description, and an owner. /servicecatalog/faqs/;How do I create a product?;Each Service Catalog product is based on an infrastructure-as-code (IaC) template. You can use CloudFormation templates or Terraform configurations (single tar.gz file). You can create a product via the AWS Service Catalog console by either uploading an IaC template, providing a link to an S3 bucket where the template is stored, or connecting to an external Git repository where the template is stored. When creating products, you can provide additional information for the product listing, including a detailed product description, version information, support information, and tags. /servicecatalog/faqs/;Why would I use tags with a portfolio?;Tags are useful for identifying and categorizing AWS resources that are provisioned by end users. You can also use tags in AWS Identity and Access Management (IAM) policies to allow or deny access to IAM users, groups, and roles or to restrict operations that can be performed by IAM users, groups, and roles. When you add tags to your portfolio, the tags are applied to all instances of resources provisioned from products in the portfolio. /servicecatalog/faqs/;How do I make a portfolio available to my users?;You publish portfolios that you’ve created or that have been shared with you to make them available to IAM users in the AWS account. To publish a portfolio, you add IAM users, groups, or roles to the portfolio from the AWS Service Catalog console by navigating to the portfolio details page. When you add users to a portfolio, they can browse and launch any of the products in the portfolio. Typically, you create multiple portfolios with different products and access permissions customized for specific types of end users. For example, a portfolio for a development team will likely contain different products from a portfolio targeted at the sales and marketing team. A single product can be published to multiple portfolios with different access permissions and provisioning policies. /servicecatalog/faqs/;Can I share my portfolio with other AWS accounts?;Yes. You can share your portfolios with users in one or more other AWS accounts. When you share your portfolio with other AWS accounts, you retain ownership and control of the portfolio. Only you can make changes, such as adding new products or updating products. You, and only you, can also “unshare” your portfolio at any time. Any products, or stacks, currently in use will continue to run until the stack owner decides to terminate them. /servicecatalog/faqs/;Can I create a product from an existing Amazon EC2 AMI?;Yes. You can use an existing Amazon EC2 AMI to create a product by wrapping it in an AWS CloudFormation template. /servicecatalog/faqs/;Can I use products from the AWS Marketplace?;Yes. 
You can subscribe to a product in the AWS Marketplace and use the copy to Service Catalog action to copy your Marketplace product directly to Service Catalog. Also you can use the Amazon EC2 AMI for the product to create an AWS Service Catalog product. To do that, you wrap the subscribed product in an AWS CloudFormation template. For more details on how to copy or package your AWS Marketplace products, please click here. /servicecatalog/faqs/;How do I control access to portfolios and products?;To control access to portfolios and products, you assign IAM users, groups, or roles on the Portfolio details page. Providing access allows users to see the products that are available to them in the AWS Service Catalog console. /servicecatalog/faqs/;Can I provide a new version of a product?;Yes. You can create new product versions in the same way you create new products. When a new version of a product is published to a portfolio, end users can choose to launch the new version. They can also choose to update their running stacks to this new version. AWS Service Catalog does not automatically update products that are in use when an update becomes available. /servicecatalog/faqs/;Can I provide a product and retain full control over the associated AWS resources?;Yes. You have full control over the AWS accounts and roles used to provision products. To provision AWS resources, you can use either the user’s IAM access permissions or your pre-defined IAM role. To retain full control over the AWS resources, you specify a specific IAM role at the product level. AWS Service Catalog uses the role to provision the resources in the stack. /servicecatalog/faqs/;Can I restrict the AWS resources that users can provision?;Yes. You can define rules that limit the parameter values that a user enters when launching a product. These rules are called template constraints because they constrain how the AWS CloudFormation template for the product is deployed. You use a simple editor to create template constraints, and you apply them to individual products. /servicecatalog/faqs/;Can I use a YAML language CloudFormation template in Service Catalog?;Yes, we currently support both JSON and YAML language templates. /servicecatalog/faqs/;How do I find out which products are available?;You can see which products are available by logging in to the AWS Service Catalog console and searching the portal for products that meet your needs, or you can navigate to the full product list page. You can sort to find the product that you want. /servicecatalog/faqs/;How do I deploy a product?;When you find a product that meets your requirements in the portal, choose Launch. You will be guided through a series of questions about how you plan to use the product. The questions might be about your business needs or your infrastructure requirements (such as “Which EC2 instance type?”). When you have provided the required information, you’ll see the product in the AWS Service Catalog console. While the product is being provisioned, you will see that it is “in progress.” After provisioning is complete, you will see “complete” and information, such as endpoints or Amazon Resource Names (ARNs), that you can use to access the product. /servicecatalog/faqs/;Can I see which products I am using?;Yes. You can see which products you are using in the AWS Service Catalog console. You can see all of the stacks that are in use, along with the version of the product used to create them. 
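The product creation and deployment flows described above (create a product from an IaC template, add it to a portfolio, then launch it) also map to a few AWS Service Catalog API calls. The following Python (boto3) sketch is illustrative only: the template URL, portfolio ID, parameter name, and product names are hypothetical placeholders.

# Minimal boto3 sketch: create a CloudFormation-based product, add it to an
# existing portfolio, and provision it as an end user would.
import boto3

sc = boto3.client("servicecatalog")

product = sc.create_product(
    Name="example-web-server",
    Owner="Central IT",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1.0",
        "Description": "Initial version",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {"LoadTemplateFromURL": "https://s3.amazonaws.com/example-bucket/web-server.json"},
    },
)
product_id = product["ProductViewDetail"]["ProductViewSummary"]["ProductId"]
artifact_id = product["ProvisioningArtifactDetail"]["Id"]

# Add the product to an existing portfolio (placeholder portfolio ID).
sc.associate_product_with_portfolio(ProductId=product_id, PortfolioId="port-exampleid")

# An end user with access to the portfolio can then launch the product.
sc.provision_product(
    ProductId=product_id,
    ProvisioningArtifactId=artifact_id,
    ProvisionedProductName="example-web-server-dev",
    ProvisioningParameters=[{"Key": "InstanceType", "Value": "t3.micro"}],
)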
/servicecatalog/faqs/;How do I update my products when a new version becomes available?;When a new version of a product is published, you can use the Update Stack command to use that version. If you are currently using a product for which there is an update, it continues to run until you close it, at which point you can choose to use the new version. /servicecatalog/faqs/;How do I monitor the health of my products?;You can see the products that you are using and their health state in the AWS Service Catalog console. /servicecatalog/faqs/;What is AWS Service Catalog support for Terraform open source?;AWS Service Catalog enables customers using Terraform open source to provide self-service provisioning with governance to their end users in AWS. Central IT can use a single tool to organize, govern, and distribute their Terraform configurations within AWS at scale. They can access AWS Service Catalog key features, including cataloging of standardized and pre-approved templates, access control, least privileges during provisioning, versioning, sharing to thousands of AWS accounts, and tagging. End users simply see the list of products and versions they have access to, and can deploy them in a single action. /servicecatalog/faqs/;Who should use AWS Service Catalog Support for Terraform?;If Terraform open source is your IaC tool of choice, you can use Service Catalog to offer your teams self-service provisioning of Terraform configurations. If you use a mix of CloudFormation and Terraform configurations across different teams or use cases, you can now use AWS Service Catalog as the single tool to catalog and share both. For your end users, AWS Service Catalog provides an easy-to-use, common interface to view and provision resources regardless of the IaC technology. /servicecatalog/faqs/;How do I get started with AWS Service Catalog support for Terraform open source?;To use AWS Service Catalog with Terraform open source, you need to set up a Terraform open source engine in one of your accounts. Create a Terraform open source engine by using the AWS-provided Terraform Reference Engine, which will install and configure the code and infrastructure required for your Terraform open source engine to work with AWS Service Catalog. After this one-time setup, which takes just minutes, you can start creating Terraform open source type products in AWS Service Catalog. /servicecatalog/faqs/;Can I enable multiple AWS accounts to provision Terraform resources using a single, centralized TFOS engine?;Yes. AWS Service Catalog supports a “hub and spoke” model where a product is defined in a single central account, and can then be shared with thousands of AWS accounts. For Terraform, you can install your TFOS engine and create your Terraform products in this central Hub account. You can then share these with spoke accounts and enable access to IAM roles/users/groups in those accounts. Note that you will need to define launch roles with sufficient permissions in each of those accounts. /servicecatalog/faqs/;Is AWS Service Catalog support for Terraform open source a managed service?;Partially. AWS supports the cataloging, sharing, and end-user access for Terraform products. You are responsible for making sure your TFOS environment is ready and well-integrated with AWS Service Catalog. You also need to define a launch role with permissions to provision and tag all the resources associated with Terraform products. 
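The “hub and spoke” sharing model described above is exposed through the portfolio sharing APIs. The sketch below, in Python (boto3), assumes hypothetical account IDs and a placeholder portfolio ID; the spoke account must still grant end users access and, for Terraform products, define its own launch role as noted above.

# Minimal boto3 sketch of hub-and-spoke portfolio sharing.
import boto3

# In the hub account that owns the portfolio (and, for Terraform products,
# runs the Terraform open source engine):
hub = boto3.client("servicecatalog")
hub.create_portfolio_share(PortfolioId="port-exampleid", AccountId="444455556666")

# In the spoke account (using that account's credentials), accept the imported
# portfolio before granting end users access to it:
spoke = boto3.client("servicecatalog")
spoke.accept_portfolio_share(PortfolioId="port-exampleid")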
/servicecatalog/faqs/;Can I connect AWS Service Catalog to my source code repository where my Terraform configurations are stored?;Yes. AWS Service Catalog allows you to sync products to template files that are managed through GitHub, GitHub Enterprise, or Bitbucket. Regardless of which repository is chosen, the template file format is still required to be a single file archived in Tar and compressed in Gzip. /servicecatalog/faqs/;How are my Terraform open source product state files managed by AWS Service Catalog?;Each Terraform open source product has a single state file, which is stored in an Amazon S3 bucket in the AWS account of your Terraform open source engine. AWS Service Catalog administrators will see the list of state files, but they won’t be able to read or write their contents. Only your Terraform open source engine can read and write the contents of the state files. /servicecatalog/faqs/;What is the price for using this feature?;This feature is priced the same as all other AWS Service Catalog features, at $0.0007 per API call after the first 1,000 calls in an account/region. To learn more, read here. /servicecatalog/faqs/;What is AWS Service Catalog AppRegistry?;AWS Service Catalog AppRegistry allows organizations to understand the application context of their AWS resources. AppRegistry provides a repository for the information that describes the applications and associated resources that you use within your enterprise. /servicecatalog/faqs/;Who should use AWS Service Catalog AppRegistry?;AWS Service Catalog AppRegistry was developed for organizations that need a single, up-to-date definition of applications within their AWS environment. /servicecatalog/faqs/;What is an application?;AWS Service Catalog AppRegistry enables you to define your application, including a name, description, the associated CloudFormation stacks, and the application metadata represented by Attribute Groups. The associated CloudFormation stacks represent all the resources required for the application. This might be the infrastructure required in a single environment, or it could also include the code repositories, pipelines, and IAM resources that support the application across all environments. Either existing or new CloudFormation Stacks can be associated to applications. New stacks can be associated to the application upon provisioning by including an association to the application with the stack’s CloudFormation template. /servicecatalog/faqs/;What is an attribute group?;Attribute groups contain the application metadata that is important to your enterprise. Attribute groups include an open JSON schema, providing you the flexibility to capture complex enterprise metadata. Application attributes might include metadata such as the application security classification, organizational ownership, application type, cost center, and support information. Builders associate attribute groups with their applications. When attribute groups are updated, these updates are automatically reflected in all applications associated to the attribute group. /servicecatalog/faqs/;In which Regions is AWS Service Catalog available?;For a full list of supported AWS Regions, see the AWS Region Table. /servicecatalog/faqs/;Are APIs available? Can I use the CLI to access AWS Service Catalog AppRegistry?;Yes, a full set of API and CLI actions is available. 
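As an illustration of the AppRegistry concepts above (applications, attribute groups with JSON metadata, and associated CloudFormation stacks), the following Python (boto3) sketch shows one possible flow. The application name, stack name, and metadata fields are invented placeholders, and the exact client/parameter names are based on the public AppRegistry API rather than anything stated in this FAQ.

# Minimal boto3 sketch: define an AppRegistry application, attach metadata via an
# attribute group, and associate an existing CloudFormation stack with it.
import json
import boto3

appregistry = boto3.client("servicecatalog-appregistry")

app = appregistry.create_application(
    name="example-payments-service",
    description="Payments service and its supporting infrastructure",
)

group = appregistry.create_attribute_group(
    name="example-ownership-metadata",
    attributes=json.dumps({
        "securityClassification": "internal",
        "costCenter": "CC-1234",
        "owner": "payments-team@example.com",
    }),
)

appregistry.associate_attribute_group(
    application=app["application"]["id"],
    attributeGroup=group["attributeGroup"]["id"],
)

# Associate an existing CloudFormation stack with the application.
appregistry.associate_resource(
    application=app["application"]["id"],
    resourceType="CFN_STACK",
    resource="example-payments-service-prod",
)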
/managed-services/faqs/;What types of operations plans does AWS Managed Services offer?;AWS Managed Services offers two operations plans to meet your needs: 1) AWS Managed Services Accelerate for your new and existing AWS accounts via detective controls, giving you full control and flexibility to use AWS as you always have, and 2) AWS Managed Services Advanced with preventative controls via a change management system within an AWS managed landing zone, which provides a full operational solution and trades some flexibility for increased operational rigor to protect your critical business applications. Customers can select either operations plan on an account-by-account basis. /managed-services/faqs/;How can AWS Managed Services help enterprises accelerate cloud adoption?;AWS Managed Services fast-tracks cloud adoption by providing a full range of operational services that augment your infrastructure management capability and support your existing operational processes. Leveraging AWS services and a growing library of automations, configurations, and runbooks, we provide an end-to-end operational solution for both new and existing AWS environments. /managed-services/faqs/;How much does AWS Managed Services cost?;AWS Managed Services offers you a pay-as-you-go approach to pricing for cloud services. With AWS you pay only for the individual services you need for as long as you use them. The price of AWS Managed Services is calculated based on the number of instances and the usage fees of all other AWS services within the accounts we manage. For more information on pricing, please contact sales. /managed-services/faqs/;How do AWS Partners and AWS Managed Services work together?;AWS Managed Services focuses on infrastructure management and creating scale through automation. We work closely with AWS certified partners in the majority of our customer engagements, where they fill several areas of the customer's end-to-end cloud solution not provided by AWS Managed Services, such as migration consulting and application management. Customers looking for a single vendor to provide both application and infrastructure management are encouraged to contact one of our AWS Managed Service Providers. /managed-services/faqs/;Will AWS Managed Services work with existing IT Service Management systems?;We designed AWS Managed Services to operate via APIs, enabling integration of the service into a wide range of existing IT Service Management (ITSM) systems and development platforms. We provide a standard integration with ServiceNow for AWS Managed Services Advanced, while AWS Managed Services Accelerate customers can use the AWS ITSM Integration. Integrations can be performed by AWS Professional Services or an AWS Managed Services Partner. /managed-services/faqs/;What industry standards does AWS Managed Services comply with?;AWS Managed Services follows the Information Technology Infrastructure Library (ITIL), the popular IT service management framework used by enterprises. Many of the underlying AWS services managed by AWS Managed Services are certified. AWS Managed Services is certified for HIPAA, HITRUST, GDPR, SOC, ISO, and PCI. See here for more information about AWS and compliance. /managed-services/faqs/;What kind of workloads does AWS Managed Services support?;AWS Managed Services supports AWS infrastructure and services used in traditional, modernized, and cloud-optimized workloads. We support the full range of AWS services and will help up to and including the operating system on Amazon EC2 instances. 
AWS Managed Services manages operations for AWS infrastructure, including AWS cloud resources running on AWS Outposts deployments on premises. /managed-services/faqs/;Does AWS Managed Services offer incident detection and response for my workloads?;AWS Managed Services (AMS) offers monitoring, incident detection, response, and remediation for AWS infrastructure and security incidents. In addition, AWS Incident Detection and Response, an add-on to Enterprise Support that offers 24x7 proactive monitoring and incident management for subscribed or onboarded workloads, is available at no additional charge in eligible regions for AWS Managed Services direct customers with AWS Enterprise Support. Contact your account team to subscribe accounts and onboard your workloads to AWS Incident Detection and Response. /managed-services/faqs/;Does AWS Managed Services manage applications?;AWS Managed Services (AMS) specializes in managing AWS infrastructure and services. AMS does not operate or configure your applications; however, we can work with application teams to develop application-specific health monitoring through standard AWS services such as Amazon CloudWatch. By leveraging AMS, you can keep your AWS resources focused on innovation instead of undifferentiated operational tasks. For custom and packaged applications, we have a community of AWS Partners who provide application management as part of their portfolio of services. /managed-services/faqs/;Does AWS Managed Services manage third-party tools?;AWS Managed Services (AMS) manages certain third-party tools as part of the AWS Managed Services Advanced operating plan, including certain endpoint security, directory services, and network firewall tools. If you have an operational need specific to your tools or environment, we encourage you to contact your sales representative about Operations on Demand, currently available in the United States, to determine if we can provide custom operational capabilities. /managed-services/faqs/;When can I start working with AWS Managed Services?;AWS Managed Services (AMS) meets you where you are in your cloud journey. Whether you are considering a move to the cloud, in the process of migrating, or have workloads running on AWS, AMS can expand your operations capabilities. AWS Managed Services Accelerate works with your existing AWS accounts and workloads and can begin helping you operate quickly. AWS Managed Services Advanced can also begin helping you right away with our designated Cloud Service Delivery Manager and Cloud Architect resources, who will help guide you through your operational planning and decision making; operations will begin once we have deployed our managed landing zone and the first workload has been migrated. /managed-services/faqs/;How do I interact with my AWS resources with AWS Managed Services?;AWS Managed Services (AMS) offers flexibility in how you interact with AWS resources. AWS Managed Services Accelerate customers can interact with AWS services as they do today, through AWS APIs, the AWS Console, the AWS Command Line Interface, the AMS console, or any existing ISV integrations. AWS Managed Services Advanced customers will interact primarily through the AMS change management platform using AWS CloudFormation Templates and our automated requests for change. 
However, customers can also leverage Developer Mode to interact with services directly in their Dev Test accounts, and can self-provision services for select AWS services that benefit from direct interaction via the AWS Console or APIs. /managed-services/faqs/;What languages are supported by AWS Managed Services?;AWS Managed Services provides support in English. /managed-services/faqs/;What operating systems does AWS Managed Services support?;For a list of supported operating systems, please refer to the AWS Managed Services documentation. /managed-services/faqs/;Does AWS Managed Services support Control Tower as a landing zone?;Yes, AWS Managed Services (AMS) Accelerate customers are free to use AWS Control Tower to deploy and manage their account configurations and landing zones. AWS Control Tower can be used with AWS Managed Services Advanced as long as it is added after AMS creates the initial landing zone, inclusive of networking, logging, and application accounts. /managed-services/faqs/;How does AWS Support differ from AWS Managed Services?;AWS Support provides a mix of tools and technology, people, and programs designed to proactively help customers optimize performance, lower costs, and innovate faster. AWS Support addresses requests that range from answering best practices questions, guidance on configuration, all the way to break-fix and problem resolution. /kinesis/video-streams/faqs/;What is Amazon Kinesis Video Streams?;Amazon Kinesis Video Streams makes it easy to securely stream media from connected devices to AWS for storage, analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming media from millions of devices. It durably stores, encrypts, and indexes media in your streams, and allows you to access your media through easy-to-use APIs. Kinesis Video Streams enables you to quickly build computer vision and ML applications through integration with Amazon Rekognition Video, Amazon SageMaker, and libraries for ML frameworks such as Apache MxNet, TensorFlow, and OpenCV. For live and on-demand playback, Kinesis Video Streams provides fully-managed capabilities for HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH). Kinesis Video Streams also supports ultra-low latency two-way media streaming with WebRTC, as a fully managed capability. /kinesis/video-streams/faqs/;What is time-encoded data?;Time-encoded data is any data in which the records are in a time series, and each record is related to its previous and next records. Video is an example of time-encoded data, where each frame is related to the previous and next frames through spatial transformations. Other examples of time-encoded data include audio, RADAR, and LIDAR signals. Amazon Kinesis Video Streams is designed specifically for cost-effective, efficient ingestion, and storage of all kinds of time-encoded data for analytics and ML use cases. /kinesis/video-streams/faqs/;What are common use cases for Kinesis Video Streams?;Kinesis Video Streams is ideal for building media streaming applications for camera-enabled IoT devices and for building real-time computer vision-enabled ML applications that are becoming prevalent in a wide range of use cases such as the following: /kinesis/video-streams/faqs/;What does Amazon Kinesis Video Streams manage on my behalf?;Amazon Kinesis Video Streams is a fully managed service for media ingestion, storage, and processing. 
It enables you to securely ingest, process, and store video at any scale for applications that power robots, smart cities, industrial automation, security monitoring, machine learning (ML), and more. Kinesis Video Streams also ingests other kinds of time-encoded data like audio, RADAR, and LIDAR signals. Kinesis Video Streams provides you SDKs to install on your devices to make it easy to securely stream media to AWS. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest media streams from millions of devices. It also durably stores, encrypts, and indexes the media streams and provides easy-to-use APIs so that applications can retrieve and process indexed media fragments based on tags and timestamps. Kinesis Video Streams provides a library to integrate ML frameworks such as Apache MxNet, TensorFlow, and OpenCV with video streams to build machine learning applications. Kinesis Video Streams is integrated with Amazon Rekognition Video, enabling you to build computer vision applications that detect objects, events, and people. /kinesis/video-streams/faqs/;What is a video stream?;A video stream is a resource that enables you to capture live video and other time-encoded data, optionally store it, and make the data available for consumption both in real time and on a batch or ad-hoc basis. When you choose to store data in the video stream, Kinesis Video Streams will encrypt the data, and generate a time-based index on the stored data. In a typical configuration, a Kinesis video stream has only one producer publishing data into it. The Kinesis video stream can have multiple consuming applications processing the contents of the video stream. /kinesis/video-streams/faqs/;What is a fragment?;A fragment is a self-contained sequence of media frames. The frames belonging to a fragment should have no dependency on any frames from other fragments. As fragments arrive, Kinesis Video Streams assigns a unique fragment number, in increasing order. It also stores producer-side and server-side time stamps for each fragment, as Kinesis Video Streams-specific metadata. /kinesis/video-streams/faqs/;What is a producer?;A producer is a general term used to refer to a device or source that puts data into a Kinesis video stream. A producer can be any video-generating device, such as a security camera, a body-worn camera, a smartphone camera, or a dashboard camera. A producer can also send non-video time-encoded data, such as audio feeds, images, or RADAR data. One producer can generate one or more video streams. For example, a video camera can push video data to one Kinesis video stream and audio data to another. /kinesis/video-streams/faqs/;What is a consumer?;Consumers are your custom applications that consume and process data in Kinesis video streams in real time, or after the data is durably stored and time-indexed when low latency processing is not required. You can create these consumer applications to run on Amazon EC2 instances. You can also use other Amazon AI services such as Amazon Rekognition, or third party video analytics providers to process your video streams. /kinesis/video-streams/faqs/;What is a chunk?;Upon receiving the data from a producer, Kinesis Video Streams stores incoming media data as chunks. Each chunk consists of the actual media fragment, a copy of media metadata sent by the producer, and the Kinesis Video Streams-specific metadata such as the fragment number, and server-side and producer-side timestamps. 
When a consumer requests media data through the GetMedia API operation, Kinesis Video Streams returns a stream of chunks, starting with the fragment number that you specify in the request. /kinesis/video-streams/faqs/;How do I think about latency in Amazon Kinesis Video Streams?;There are four key contributors to latency in an end-to-end media data flow. /kinesis/video-streams/faqs/;How do I publish data to my Kinesis video stream?;You can publish media data to a Kinesis video stream via the PutMedia operation, or use the Kinesis Video Streams Producer SDKs in Java, C++, or Android. If you choose to use the PutMedia operation directly, you will be responsible for packaging the media stream according to the Kinesis Video Streams data specification, handling stream creation, token rotation, and other actions necessary for reliable streaming of media data to the AWS cloud. We recommend using the Producer SDKs to make these tasks simpler and get started faster. /kinesis/video-streams/faqs/;What is the Kinesis Video Streams PutMedia operation?;Kinesis Video Streams provides a PutMedia API to write media data to a Kinesis video stream. In a PutMedia request, the producer sends a stream of media fragments. As fragments arrive, Kinesis Video Streams assigns a unique fragment number, in increasing order. It also stores producer-side and server-side time stamps for each fragment, as Kinesis Video Streams-specific metadata. /kinesis/video-streams/faqs/;What is the Kinesis Video Streams Producer SDK?;The Amazon Kinesis Video Streams Producer SDK is a set of easy-to-use and highly configurable libraries that you can install and customize for your specific producers. The SDK makes it easy to build an on-device application that securely connects to a video stream, and reliably publishes video and other media data to Kinesis Video Streams. It takes care of all the underlying tasks required to package the frames and fragments generated by the device's media pipeline. The SDK also handles stream creation, token rotation for secure and uninterrupted streaming, processing acknowledgements returned by Kinesis Video Streams, and other tasks. /kinesis/video-streams/faqs/;In which programming platforms is the Kinesis Video Streams Producer SDK available?;The Kinesis Video Streams Producer SDK's core is built in C, so it is efficient and portable to a variety of hardware platforms. Most developers will prefer to use the C, C++, or Java versions of the Kinesis Video Streams producer SDK. There is also an Android version of the producer SDK for mobile app developers who want to stream video data from Android devices. /kinesis/video-streams/faqs/;What should I be aware of before getting started with the Kinesis Video Streams producer SDK?;The Kinesis Video Streams producer SDK does all the heavy lifting of packaging frames and fragments, establishes a secure connection, and reliably streams video to AWS. However, there are many different varieties of hardware devices and media pipelines running on them. To make the process of integration with the media pipeline easier, we recommend having some knowledge of: 1) the frame boundaries, 2) the type of a frame used for the boundaries, I-frame or non I-frame, and 3) the frame encoding time stamp. /kinesis/video-streams/faqs/;What is the GetMedia API?;"You can use the GetMedia API to retrieve media content from a Kinesis video stream. In the request, you identify the stream name or stream Amazon Resource Name (ARN), and the starting chunk. 
Kinesis Video Streams then returns a stream of chunks in order by fragment number. When you put media data (fragments) on a stream, Kinesis Video Streams stores each incoming fragment and related metadata in what is called a ""chunk."" The GetMedia API returns a stream of these chunks starting from the chunk that you specify in the request." /kinesis/video-streams/faqs/;What is the GetMediaForFragmentList API?;You can use the GetMediaForFragmentList API to retrieve media data for a list of fragments (specified by fragment number) from the archived data in a Kinesis video stream. Typically, a call to this API operation is preceded by a call to the ListFragments API. /kinesis/video-streams/faqs/;What is the ListFragments API?;You can use the ListFragments API to return a list of Fragments from the specified video stream and start location - using the fragment number or timestamps - within the retained data. /kinesis/video-streams/faqs/;How long can I store data in Kinesis Video Streams?;You can store data in your streams for as long as you like. Kinesis Video Streams allows you to configure the data retention period to suit your archival and storage requirements. /kinesis/video-streams/faqs/;What is the Kinesis Video Streams parser library?;The Kinesis Video Streams parser library makes it easy for developers to consume and process the output of the Kinesis Video Streams GetMedia operation. Application developers will include the library in their video analytics and processing applications that operate on video streams. The applications themselves will run on your EC2 instances, although they can be run elsewhere. The library has features that make it easy to get a frame-level object and its associated metadata, extract and collect Kinesis Video Streams-specific metadata attached to fragments and consecutive fragments. You can then build custom applications that can more easily use the raw video data for your use cases. /kinesis/video-streams/faqs/;If I have a custom processing application that needs to use the frames (and fragments) carried by the Kinesis video stream, how do I do that?;In general, if you want to consume video streams and then manipulate them to fit your custom application's needs, then there are two key steps to consider. First, get the bytes in a frame from the formatted stream vended by the GetMedia API. You can use the stream parser library to get the frame objects. Next, get the metadata necessary to decode a frame, such as the pixel height, width, codec id, and codec private data. Such metadata is embedded in the track elements. The parser library makes extracting this information easier by providing helper classes to collect the track information for a fragment. /kinesis/video-streams/faqs/;How do I play back the video captured in my own application?;You can use Amazon Kinesis Video Streams’ HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) capabilities to play back the ingested video in fragmented MP4 or MPEG-TS packaged format. HLS and DASH are industry-standard, HTTP-based media streaming protocols. As you capture video from devices using Amazon Kinesis Video Streams, you can use the HLS or DASH APIs to play back live or recorded video. This capability is fully managed, so you do not have to build any cloud-based infrastructure to support video playback. For low-latency playback and two-way media streaming, see the FAQs on WebRTC–based streaming.
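As a minimal sketch of the GetMedia consumption flow described earlier (GetDataEndpoint, then GetMedia, then parsing), the following uses boto3; the stream name is a placeholder and the chunk handling is left as a stub.

```python
# Hedged consumer sketch: "my-video-stream" is a hypothetical stream name.
# GetMedia must be called against the stream-specific data endpoint
# returned by GetDataEndpoint.
import boto3

STREAM_NAME = "my-video-stream"  # placeholder

kvs = boto3.client("kinesisvideo")
endpoint = kvs.get_data_endpoint(StreamName=STREAM_NAME, APIName="GET_MEDIA")["DataEndpoint"]

media = boto3.client("kinesis-video-media", endpoint_url=endpoint)
response = media.get_media(
    StreamName=STREAM_NAME,
    StartSelector={"StartSelectorType": "NOW"},  # start with the latest chunk
)

# The payload is a streaming body of MKV-packaged chunks; read it incrementally
# and hand it to a parser (for example, the Kinesis Video Streams parser library).
while True:
    chunk = response["Payload"].read(8192)
    if not chunk:
        break
    # process(chunk)  # placeholder for your own frame/fragment handling
```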
/kinesis/video-streams/faqs/;How do I get started with Kinesis Video Streams HLS or DASH APIs?;To view a Kinesis video stream using HLS or DASH, you first create a streaming session using GetHLSStreamingSessionURL or GetDASHStreamingSessionURL APIs. This action returns a URL (containing a session token) for accessing the HLS or DASH session, which you can then use in a media player or a standalone application to playback the stream. You can use a third-party player (such as Video.js or Google Shaka Player) to display the video stream, by providing the HLS or DASH streaming session URL, either programmatically or manually. You can also play back video by entering the HLS or DASH streaming session URL in the Location bar of the Apple Safari or Microsoft Edge browsers. Additionally, you can use the video players for Android (Exoplayer) and iOS (AVMediaPlayer) for mobile apps. /kinesis/video-streams/faqs/;What are the basic requirements to use the Kinesis Video Streams HLS APIs?;An Amazon Kinesis video stream has the following requirements for providing data through HLS: /kinesis/video-streams/faqs/;What are the basic requirements to use the Kinesis Video Streams DASH APIs?;An Amazon Kinesis video stream has the following requirements for providing data through DASH: /kinesis/video-streams/faqs/;What are the available playback modes for HLS or DASH streaming in Kinesis Video Streams?;There are two different playback modes supported by both HLS and DASH: Live and On Demand. /kinesis/video-streams/faqs/;What is the delay in the playback of video using the API?;The latency for live playback is typically between 3 and 5 seconds, but this could vary. We strongly recommend running your own tests and proof-of-concepts to determine the target latencies. There are a variety of factors that impact latencies, including the use case, how the producer generates the video fragments, the size of the video fragment, the player tuning, and network conditions both streaming into AWS and out of AWS for playback. For low-latency playback, see the FAQs on WebRTC–based streaming. /kinesis/video-streams/faqs/;What are the relevant limits to using HLS or DASH?;A Kinesis video stream supports a maximum of ten active HLS or DASH streaming sessions. If a new session is created when the maximum number of sessions is already active, the oldest (earliest created) session is closed. The number of active GetMedia connections on a Kinesis video stream does not count against this limit, and the number of active HLS sessions does not count against the active GetMedia connection limit. See Kinesis Video Streams Limits for more details. /kinesis/video-streams/faqs/;What’s the difference between Kinesis Video Streams and AWS Elemental MediaLive?;AWS Elemental MediaLive is a broadcast-grade live video encoding service. It lets you create high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smart phones, and set-top boxes. The service functions independently or as part of AWS Media Services. /kinesis/video-streams/faqs/;Am I charged to use this capability?;Kinesis Video Streams uses a simple pay as you go pricing. There are no upfront costs and you only pay for the resources you use. Kinesis Video Streams pricing is based on the data volume (GB) ingested, volume of data consumed (GB) including through the HLS or DASH APIs, and the data stored (GB-Month) across all the video streams in your account. Please see the pricing page for more details. 
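The HLS flow described above (get a data endpoint, request a streaming session URL, hand the URL to a player) might look roughly like the following with boto3; the stream name is a placeholder, and the playback mode and player choice are up to you.

```python
# Hedged sketch of creating an HLS streaming session for a Kinesis video stream.
import boto3

STREAM_NAME = "my-video-stream"  # hypothetical stream name

kvs = boto3.client("kinesisvideo")
endpoint = kvs.get_data_endpoint(
    StreamName=STREAM_NAME, APIName="GET_HLS_STREAMING_SESSION_URL"
)["DataEndpoint"]

archived_media = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = archived_media.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE",  # or "ON_DEMAND" with a fragment selector
)["HLSStreamingSessionURL"]

# Hand the URL (which embeds a session token) to an HLS-capable player,
# e.g. Video.js, Shaka Player, Safari, or AVPlayer on iOS.
print(url)
```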
/kinesis/video-streams/faqs/;What is WebRTC and how does Kinesis Video Streams support this capability?;WebRTC is an open technology specification for enabling real-time communication (RTC) across browsers and mobile applications via simple APIs. It leverages peering techniques for real-time data exchange between connected peers and provides the low media streaming latency required for human-to-human interaction. The WebRTC specification includes a set of IETF protocols including Interactive Connectivity Establishment (ICE RFC5245), Traversal Using Relay around NAT (TURN RFC5766), and Session Traversal Utilities for NAT (STUN RFC5389) for establishing peer-to-peer connectivity, in addition to protocol specifications for real-time media and data streaming. Kinesis Video Streams provides a standards-compliant WebRTC implementation as a fully managed capability. You can use this capability to securely live stream media or perform two-way audio or video interaction between any camera IoT device and WebRTC-compliant mobile or web players. As a fully managed capability, you do not have to build, operate, or scale any WebRTC-related cloud infrastructure such as signaling or media relay servers to securely stream media across applications and devices. /kinesis/video-streams/faqs/;What does Amazon Kinesis Video Streams manage on my behalf to enable live media streaming with WebRTC?;Kinesis Video Streams provides managed endpoints for WebRTC signaling that allow applications to securely connect with each other for peer-to-peer live media streaming. Next, it includes managed endpoints for TURN that enable media relay via the cloud when applications cannot stream peer-to-peer media. It also includes managed endpoints for STUN that enable applications to discover their public IP address when they are located behind a NAT or a firewall. Additionally, it provides easy-to-use SDKs to enable camera IoT devices with WebRTC capabilities. Finally, it provides client SDKs for Android, iOS, and Web applications to integrate Kinesis Video Streams WebRTC signaling, TURN, and STUN capabilities with any WebRTC-compliant mobile or web player. /kinesis/video-streams/faqs/;What can I build using Kinesis Video Streams WebRTC capability?;With Kinesis Video Streams WebRTC, you can easily build applications for live media streaming or real-time audio or video interactivity between camera IoT devices, web browsers, and mobile devices for use cases such as helping parents keep an eye on their baby’s room, enabling homeowners to use a video doorbell to check who’s at the door, allowing owners of camera-enabled robot vacuums to remotely control the robot by viewing the live camera stream on a mobile phone, and much more. /kinesis/video-streams/faqs/;How do I get started with Kinesis Video Streams WebRTC capability?;You can get started by building and running the sample applications in the Kinesis Video Streams SDKs for WebRTC, available for Web browsers, Android- or iOS-based mobile devices, and Linux-, Raspbian-, and macOS-based IoT devices. You can also run a quick demo of this capability in the Kinesis Video Streams management console by creating a signaling channel, and running the demo application to live stream audio and video from your laptop’s built-in camera and microphone. /kinesis/video-streams/faqs/;What is a Signaling Channel?;A signaling channel is a resource that enables applications to discover, set up, control, and terminate a peer-to-peer connection by exchanging signaling messages. 
Signaling messages are metadata that two applications exchange with each other to establish peer-to-peer connectivity. This metadata includes local media information such as media codecs and codec parameters, and possible network candidate paths for the two applications to connect with each other for live streaming. /kinesis/video-streams/faqs/;How do applications use a signaling channel to enable peer-to-peer connectivity?;Streaming applications can maintain persistent connectivity with a signaling channel and wait for other applications to connect to them, or they can connect to a signaling channel only when they need to live stream media. The signaling channel enables applications to connect with each other in a one-to-few model using the concept of one master connecting to multiple viewers. The application that initiates the connection assumes the responsibility of a master via the ConnectAsMaster API and waits for viewers. Up to 10 applications can then connect to that signaling channel by assuming the viewer responsibility via the ConnectAsViewer API. Once connected to the signaling channel, the master and viewer applications can send each other signaling messages to establish peer-to-peer connectivity for live media streaming. /kinesis/video-streams/faqs/;How do applications live stream peer-to-peer media when they are located behind a NAT or a firewall?;Applications use the Kinesis Video Streams STUN endpoint to discover their public IP address when they are located behind a NAT or a firewall. An application provides its public IP address as a possible location where it can receive connection requests from other applications for live streaming. The default option for all WebRTC communication is direct peer-to-peer connectivity, but if the NAT or firewall does not allow direct connectivity (e.g., in the case of symmetric NATs), applications can connect to the Kinesis Video Streams TURN endpoints for relaying media via the cloud. The GetIceServerConfig API provides the necessary TURN endpoint information that applications can use in their WebRTC configuration. This configuration allows applications to use TURN relay as a fallback when they are unable to establish a direct peer-to-peer connection for live streaming. /kinesis/video-streams/faqs/;How does Kinesis Video Streams secure the live media streaming with WebRTC?;End-to-end encryption is a mandatory feature of WebRTC, and Kinesis Video Streams enforces it on all the components, including signaling and media or data streaming. Regardless of whether the communication is peer-to-peer or relayed via Kinesis Video Streams TURN endpoints, all WebRTC communications are securely encrypted through standardized encryption protocols. The signaling messages are exchanged using secure WebSockets (WSS), data streams are encrypted using Datagram Transport Layer Security (DTLS), and media streams are encrypted using Secure Real-time Transport Protocol (SRTP). /kinesis/video-streams/faqs/;What is the Kinesis Video Streams management console?;The Kinesis Video Streams management console enables you to create, update, manage, and monitor your video streams. The console can also play back your media streams live or on demand, as long as the content in the streams is in the supported media type. Using the player controls, you can view the live stream, skip forward or backward 10 seconds, and use the date and time picker to rewind to a point in the past, provided you have set a corresponding retention period for the video stream. 
The Kinesis Video Streams management console's video playback capabilities are offered as a quick diagnostic tool for development and test scenarios as developers build solutions using Kinesis Video Streams. /kinesis/video-streams/faqs/;What media type does the console support?;The only supported video media type for playback in the Kinesis Video Streams management console is the popular H.264 format. This media format has wide support across devices, hardware and software encoders, and playback engines. While you can ingest any variety of video, audio, or other custom time-encoded data types for your own consumer applications and use cases, the management console will not perform playback of those other data types. /kinesis/video-streams/faqs/;What is the delay in the playback of video on the Kinesis Video Streams management console?;For a producer that is transmitting video data into the video stream, you will experience a 2–10 second lag in the live playback experience in the Kinesis Video Streams management console. The majority of the latency is added by the producer device as it accumulates frames into fragments before it transmits data over the internet. Once the data enters the Kinesis Video Streams endpoint and you request playback, the console retrieves the H.264 fragments from durable storage and trans-packages them into a media format suitable for playback across different internet browsers. The trans-packaged media content is then transferred over the internet to the location from which you requested playback. /kinesis/video-streams/faqs/;What Is Server-Side Encryption for Kinesis Video Streams?;Server-side encryption is a feature in Kinesis Video Streams that automatically encrypts data before it's at rest by using an AWS KMS key that you specify. Data is encrypted before it is written to the Kinesis Video Streams storage layer, and it is decrypted after it is retrieved from storage. As a result, your data is always encrypted at rest within the Kinesis Video Streams service. /kinesis/video-streams/faqs/;How do I get started with server-side encryption?;Server-side encryption is always enabled on Kinesis video streams. If a user-provided key is not specified when the stream is created, the default key (provided by Kinesis Video Streams) is used. /kinesis/video-streams/faqs/;How much does it cost to use server-side encryption?;When you apply server-side encryption, you are subject to AWS KMS API usage and key costs. Unlike custom AWS KMS keys, the (Default) aws/kinesis-video KMS key is offered free of charge. However, you still pay for the API usage costs that Kinesis Video Streams incurs on your behalf. API usage costs apply for every KMS key, including custom ones. Kinesis Video Streams calls AWS KMS approximately every 45 minutes when it is rotating the data key. In a 30-day month, the total cost of AWS KMS API calls that are initiated by a Kinesis Video Streams stream should be less than a few dollars. This cost scales with the number of user credentials that you use on your data producers and consumers because each user credential requires a unique API call to AWS KMS. /kinesis/video-streams/faqs/;Is Amazon Kinesis Video Streams available in AWS Free Tier?;No. Amazon Kinesis Video Streams is not available in AWS Free Tier. /kinesis/video-streams/faqs/;How much does Kinesis Video Streams cost?;Furthermore, Kinesis Video Streams will only charge for media data it successfully received, with a minimum chunk size of 4 KB. 
For comparison, a 64 kbps audio sample is 8 KB in size, so the minimum chunk size is set low enough to accommodate the smallest of audio or video streams. /kinesis/video-streams/faqs/;How does Kinesis Video Streams bill for data stored in streams?;Kinesis Video Streams will charge you for the total amount of data durably stored under any given stream. The total amount of stored data per video stream can be controlled using retention hours. /kinesis/video-streams/faqs/;How am I charged for using Kinesis Video Streams WebRTC capability?;For using the Amazon Kinesis Video Streams WebRTC capability, you are charged based on the number of signaling channels that are active in a given month, the number of signaling messages sent and received, and TURN streaming minutes used for relaying media. A signaling channel is considered active in a month if at any time during the month a device or an application connects to it. TURN streaming minutes are metered in 1-minute increments. Please see the pricing page for more details. /kinesis/video-streams/faqs/;What does the Amazon Kinesis Video Streams SLA guarantee?;Our Amazon Kinesis Video Streams SLA guarantees a Monthly Uptime Percentage of at least 99.9% for Amazon Kinesis Video Streams. /kinesis/video-streams/faqs/;How do I know if I qualify for an SLA Service Credit?;You are eligible for an SLA credit for Amazon Kinesis Video Streams under the Amazon Kinesis Video Streams SLA if more than one Availability Zone in which you are running a task within the same Region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle. /elastictranscoder/faqs/;What is Amazon Elastic Transcoder?;Amazon Elastic Transcoder is a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or “transcode”) video and audio files from their source format into versions that will play back on devices like smartphones, tablets and PCs. /elastictranscoder/faqs/;What can I do with Amazon Elastic Transcoder?;You can use Amazon Elastic Transcoder to convert video and audio files into supported output formats optimized for playback on desktops, mobile devices, tablets, and televisions. In addition to supporting a wide range of input and output formats, resolutions, bitrates, and frame rates, Amazon Elastic Transcoder also offers features for automatic video bit rate optimization, generation of thumbnails, overlay of visual watermarks, caption support, DRM packaging, progressive downloads, encryption, and more. For more details, please visit the Product Details page. /elastictranscoder/faqs/;Why should I use Amazon Elastic Transcoder?;Amazon Elastic Transcoder manages all the complexity of running media transcoding in the AWS cloud. Amazon Elastic Transcoder enables you to focus on your content, such as the devices you want to support and the quality levels you want to provide, rather than managing the infrastructure and software needed for conversion. Amazon Elastic Transcoder scales to handle the largest encoding jobs. As with all Amazon Web Services, there are no up-front investments required, and you pay only for the resources that you use. We offer a free tier that enables you to explore the service and transcode up to 20 minutes of SD video or 10 minutes of HD video a month free of charge. To see terms and additional information on the free tier program, please visit the AWS Free Usage Tier page. 
/elastictranscoder/faqs/;How do I get started with Amazon Elastic Transcoder?;You can sign up for Amazon Elastic Transcoder through the AWS Management Console. You can then use the console to create a pipeline, set up an IAM role, and create your first transcoding job. To help you test Amazon Elastic Transcoder, the first 20 minutes of SD content (or 10 minutes of HD content) transcoded each month is provided free of charge. Once you exceed the number of minutes in this free usage tier, you will be charged at the prevailing rates. We do not watermark the output content or otherwise limit the functionality of the service, so you can use it and truly get a feel for its capabilities. To see terms and additional information on the free tier program, please visit the AWS Free Usage Tier page. If you do not have an AWS account, you can create one by clicking the Sign Up button at the top of this page. /elastictranscoder/faqs/;How do I use Amazon Elastic Transcoder?;To use Amazon Elastic Transcoder, you need to have at least one media file in an Amazon S3 bucket. The easiest way to use Amazon Elastic Transcoder is to try it through the console. Create a transcoding pipeline that connects the input Amazon S3 bucket to the output Amazon S3 bucket. Create a transcoding job that will transcode your media file, choose a transcoding preset (a template), and submit the job. Your transcoded file will appear in your output bucket once it has been processed. /elastictranscoder/faqs/;What tools and libraries work with Amazon Elastic Transcoder?;Amazon Elastic Transcoder uses a JSON API, and we provide SDKs for Python, Node.js, Java, .NET, PHP, and Ruby. The new AWS Command Line Interface also supports Amazon Elastic Transcoder. You can see a full list of our SDKs here. /elastictranscoder/faqs/;Can I use the AWS Management Console with Amazon Elastic Transcoder?;Yes. Amazon Elastic Transcoder has a console that is accessed through the AWS Management Console. You can use our console to create pipelines, jobs, and presets, as well as to manage and view existing pipelines and jobs. /elastictranscoder/faqs/;How do I get my media files into Amazon S3?;There are many ways to get content into Amazon S3, from the simple web-based uploader in the AWS Management Console to programmatic approaches through APIs. For very large files, you may wish to use AWS Import/Export, AWS Direct Connect, or file-acceleration solutions available in the AWS Marketplace. For more information, please refer to the Amazon S3 documentation and the AWS Digital Media website. /elastictranscoder/faqs/;How do I retrieve my media files from Amazon S3?;You can retrieve files from Amazon S3 programmatically, using the AWS Management Console or a third-party tool. You can also mark Amazon S3 objects as public and download them directly from Amazon S3. /elastictranscoder/faqs/;Can I use a Content Distribution Network (CDN) to distribute my media files?;"Yes. You can easily use a CDN to distribute your content; for example, you can use Amazon CloudFront to distribute your content to end-users with low latency, high data transfer speeds, and no commitments. You can use an output bucket that contains your transcoded content in Amazon S3 as the origin server for Amazon CloudFront. For more information, please visit the detail page for Amazon CloudFront." /elastictranscoder/faqs/;How long does it take to transcode a job?;Jobs start processing in the order in which they are received in a pipeline. 
Once a job is ready to be transcoded, many variables affect the speed of transcoding, such as the input file size, resolution, and bitrate. For example, if you were to submit a 10-minute video using the iPhone 4 preset, it would take approximately 5 minutes. If a large number of jobs are received, they are backlogged (queued). Please note that the transcoding speed may differ between regions. /elastictranscoder/faqs/;When will my job be ready?;You can use Amazon SNS notifications to be informed of job status changes. For example, you can be notified when your job starts to transcode and when it has finished transcoding. For more information on Amazon SNS notifications, please see the detail page on Amazon SNS. /elastictranscoder/faqs/;How many jobs are processed at once?;Pipelines operate independently from one another. Each pipeline processes jobs in parallel up to a default limit set for that pipeline. Within a job, each individual output also progresses in parallel. For more information on limits and capacity, visit the limits section in the Elastic Transcoder Developer Guide. You can request higher limits by opening a support case. /elastictranscoder/faqs/;How many jobs can I submit?;Currently, we allow a maximum of 100,000 jobs per pipeline. Once you exceed this limit, you will receive a 429 Rate Limit Exception. If you require this limit to be raised, please contact us here. /elastictranscoder/faqs/;Can I create multiple outputs per job?;Each transcoding job relates to a single input file and can create one or more output files. For example, you may wish to create audio-only, low-, and high-resolution renditions of the same input file and could do so as part of a single transcoding job. The number of outputs per job is limited. For more information on Amazon Elastic Transcoder limits, please refer to the documentation. /elastictranscoder/faqs/;How do I generate clips?;You can create a clip from your source media in your transcoding job. You specify a start time and a duration (both specified as HH:mm:ss.SSS or sssss.SSS). To cut off the start of a file, you would just specify a start time. You can generate different-length clips (or transcode the entire file) for each different output in your transcoding job. You will be charged based on the output duration of your transcode, so if you have a five-minute input file and you create a one-minute output from it, you will only be charged for one minute of transcoding. Please remember that fractional minutes are rounded up, so if you create a clip that is one minute and thirty seconds in duration, you will be charged for two minutes of transcoding. /elastictranscoder/faqs/;How do I stitch clips?;You can specify two or more input files that need to be stitched to create a single output file in your transcoding job. Input files are stitched in the order they are specified. So if you want to add a bumper to your video, specify the bumper file as the first input and your video file as the second input. For each input, you can specify a Start Time and a Duration, which allows you to stitch together only the parts of each input that you want included in the output. You will be charged for the output duration of your transcode, so if you are stitching two five-minute input files to create a ten-minute output, you will be charged for ten minutes of transcoding. /elastictranscoder/faqs/;What is a transcoding pipeline, what can I use it for, and how many can I have?;A pipeline is a queue-like structure that manages your transcoding jobs. 
A pipeline can process multiple jobs simultaneously, and generally starts to process jobs in the order in which you added them to the pipeline. Jobs often finish in a different order based on job specifications. It is up to you how you wish to use pipelines. Some examples include submitting jobs to different pipelines based on the priority or the duration of a transcode, or using different pipelines for your development, test, and production environments. The number of pipelines per AWS account is limited. For more information on Amazon Elastic Transcoder limits, please refer to the documentation. /elastictranscoder/faqs/;What are transcoding presets?;A preset is a template that contains the settings that you want Amazon Elastic Transcoder to apply during the transcoding process, such as the codec and the resolution that you want in the transcoded file. When you create a job, you specify which preset you want to use. We provide presets that create media files that play on any device and presets that target specific devices. For maximum compatibility, choose a “breadth preset” that creates output that plays on a wide range of devices. For optimum quality and file size, choose an “optimized preset” that creates output for a specific device or class of devices. /elastictranscoder/faqs/;What do I do if none of your transcoding presets work for me?;You can create your own custom presets based on an existing preset. Once you create your own custom preset, it is available across your AWS account for the Amazon Elastic Transcoder service within a specific region. For more information on presets, please refer to the Amazon Elastic Transcoder Developer Guide. The number of custom presets per AWS account is limited. For more information on Amazon Elastic Transcoder limits, please refer to the documentation. /elastictranscoder/faqs/;Why do I need to assign a role to a transcoding pipeline?;Amazon Elastic Transcoder uses AWS Identity and Access Management (IAM) roles to enable you to securely control access to your media assets. The IAM role sets a policy that defines what permissions you have for accessing Amazon S3 resources. You can assign different roles to different pipelines, and an IAM administrator can create specific roles for use with Amazon Elastic Transcoder. More information about IAM can be found here. /elastictranscoder/faqs/;How can I configure roles to be more restrictive?;You can use the AWS Management Console to edit and create new IAM roles. IAM roles that are created by Amazon Elastic Transcoder are visible in the AWS Management Console and can also be edited. /elastictranscoder/faqs/;How do I use notifications?;Amazon Elastic Transcoder uses Amazon SNS to notify you of specific events. You can choose to be notified about jobs that start to process, jobs that complete, warnings, and errors. Each event type is assigned to an SNS topic, and you can use the same topic or different topics for each event. The Amazon Elastic Transcoder console will create an SNS topic for you, or you can specify an existing one. /elastictranscoder/faqs/;Why should I use notifications?;Notifications are a much more efficient way to check transcoding status than polling the API. Notifications provide a way to be notified of specific events that occur in the system. For example, you can be notified of a completed event. This is useful if you want to know when a job has finished transcoding, and it is far more efficient than calling the 'List Jobs By Status' or 'Read Job' API at regular intervals. 
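Pulling the pipeline, preset, and metadata pieces above together, a job submission might look roughly like the following with boto3. The pipeline ID, preset ID, object keys, and metadata values are all placeholders.

```python
# Hypothetical sketch of submitting an Elastic Transcoder job to an existing
# pipeline using a preset; every ID and key below is a placeholder.
import boto3

et = boto3.client("elastictranscoder")

job = et.create_job(
    PipelineId="1111111111111-abcde1",            # placeholder pipeline ID
    Input={"Key": "sources/my-video.mp4"},        # object key in the pipeline's input bucket
    Outputs=[
        {
            "Key": "renditions/my-video-720p.mp4",
            "PresetId": "1351620000001-000010",   # placeholder preset ID
            "ThumbnailPattern": "thumbnails/my-video-{count}",  # one thumbnail per interval
        }
    ],
    UserMetadata={"source-system": "cms", "tenant": "example"},  # up to 10 key-value pairs
)
print(job["Job"]["Id"])
```

Job status changes can then be delivered through the SNS topics configured on the pipeline, as described above, rather than by polling the job.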
/elastictranscoder/faqs/;Why does my job keep failing?;The most common reason for jobs to fail is that the input file is corrupted in some way. If you receive an error about the format not being supported, we are unable to decode your source file and we’d love for you to tell us more on our Discussion Forum. We need the following information to assist with diagnosis: AWS Account ID, Region, and Job ID. For a list of error codes, please refer to the documentation. /elastictranscoder/faqs/;How can I generate more than one thumbnail per job?;You can specify a thumbnail creation interval in seconds to create one thumbnail every n seconds. To create thumbnails in more than one size, you need to create different jobs. /elastictranscoder/faqs/;Can I reserve a transcoder for my exclusive use?;Amazon Elastic Transcoder provides a shared transcoding service and does not enable a transcoder to be reserved or allocated to an individual customer. /elastictranscoder/faqs/;Do I need to pay license fees?;We have licensed relevant intellectual property from the applicable patent pools for transcoding content. Like any other transcoder, customers are responsible for evaluating and, if necessary, securing licenses for distribution of content in various formats. /elastictranscoder/faqs/;Do you support live encoding?;Amazon Elastic Transcoder is a file-based transcoding service and does not support live transcoding. /elastictranscoder/faqs/;Are there limits to the service?;The number of transcoding pipelines, transcoding presets, and outputs per job have limits. Most of these limits can be adjusted on a customer-by-customer basis. For the current limits, please refer to the documentation. /elastictranscoder/faqs/;How do I increase service limits?;If you require an increase in the service limits, please contact us here and provide all the information requested on the form. We will then contact you to discuss your requirements. /elastictranscoder/faqs/;Where is Amazon Elastic Transcoder available?;Amazon Elastic Transcoder is available in the following AWS regions: US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Mumbai). /elastictranscoder/faqs/;Can I pass metadata when creating a job?;You have the option to attach up to 10 custom metadata key-value pairs to your Elastic Transcoder jobs. This metadata will be included in the job notifications and when reading the job via the API or console. You provide this information in the “UserMetadata” field on the Job object. /elastictranscoder/faqs/;What input formats do you support?;We support popular web, consumer, and professional media formats. Examples include 3GP, AAC, AVI, FLV, MP4, and MPEG-2. If there is a format that you’ve found does not work, please let us know through our forum. /elastictranscoder/faqs/;Where can I find a comprehensive list of supported formats?;We add new input formats on an ongoing basis, so such a list would age quickly. Please take advantage of our free tier and console to try a format not mentioned above, and if you run into problems, please let us know! /elastictranscoder/faqs/;"When creating MP4 files, do you support ""fast start""?";We locate the MOOV atom for an MP4 at the start of the file so that your player can start playback immediately without waiting for the entire file to finish downloading. 
/elastictranscoder/faqs/;Do you support Apple ProRes or digital cinematography formats?;We do not support reading Apple ProRes files or raw camera formats like ARRI and RED at this time. /elastictranscoder/faqs/;What video formats can I transcode into?;We support the following video codecs: H.264, VP9, VP8, MPEG-2, and animated GIF. File formats supported include MPEG-2 TS container (for HLS), fmp4 (for Smooth Streaming and MPEG-DASH), MP4, WebM, FLV, MPG, and MXF (XDCAM-compatible). For information on file formats that are supported by specific codecs, please visit the Product Details page. /elastictranscoder/faqs/;What audio formats can I transcode into?;We support the following audio codecs: AAC, MP3, MP2, PCM, FLAC, and Vorbis. Audio-only file formats supported include MP3, MP4, FLAC, OGA, OGG, and WAV. For information on file formats that are supported by specific codecs, please visit the Product Details page. /elastictranscoder/faqs/;How is album art supported for audio files?;Album art is supported in MP4 files containing AAC audio, in MP3 files, and in FLAC files. Album art is not supported for OGA, OGG, WAV, WebM or MPEG-2 TS outputs. You can specify whether album art from the source file is passed through to the output, removed, or whether new album art should replace it or be appended to it. /elastictranscoder/faqs/;How do I create an audio file from a video file?;To strip out video and create an output that only contains the audio track, run a transcoding job with your input file and use one of the system transcoding presets that contains Audio in its name. Alternatively, you can create your own audio only custom transcoding preset. The output file will only contain the audio portion of the input file. /elastictranscoder/faqs/;Do you support surround sound formats?;The audio portion of the transcoded output from Amazon Elastic Transcoder is two-channel AAC, MP3 or Vorbis. /elastictranscoder/faqs/;Do you support audio channel remapping?;If the source file contains multi-channel audio, the output will contain the first two channels, which are frequently left and right audio tracks. For the MXF container, we support multiple modes of packaging the audio into the file, including optional insertion of motor only shots (MOS). /elastictranscoder/faqs/;Can I generate XDCAM-compatible video?;Yes, the easiest way to generate XDCAM-compatible outputs is to specify one of the XDCAM system presets when creating a transcoding job. You can also create a custom preset by choosing the MXF container with MPEG-2 video and PCM audio. /elastictranscoder/faqs/;Do you support closed captions?;Yes, you can add, remove, or preserve captions as you transcode your video from one format to another. /elastictranscoder/faqs/;Can you support multiple caption tracks?;Yes, you can add one track per language. /elastictranscoder/faqs/;How do I create content for HLS output?;There are two steps: /elastictranscoder/faqs/;How do I create content for Smooth Streaming?;There are two steps: /elastictranscoder/faqs/;How do I create content for MPEG-DASH streaming?;There are two steps: /elastictranscoder/faqs/;Should I use the HLSv3 or the HLSv4 option?;HLS version 3 has been supported natively on iOS 2+ devices since July 2008 and on Android 4.0+ since Oct. 2011. HLS version 4 has been supported natively on iOS 5+ devices since Oct. 2011 and on Android 4.4+ since Sept. 2013. 
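The FAQ entries above for HLS, Smooth Streaming, and MPEG-DASH each elide their two steps, but the general shape is to produce segmented outputs and then reference them from a playlist in the same job. The following is a hedged sketch of that pattern with boto3; the pipeline ID, preset IDs, and keys are placeholders, and your preset and segment choices will differ.

```python
# Hypothetical sketch: one job that produces two segmented HLS renditions and
# an HLSv3 master playlist referencing them. All IDs and keys are placeholders.
import boto3

et = boto3.client("elastictranscoder")

job = et.create_job(
    PipelineId="1111111111111-abcde1",                # placeholder pipeline ID
    Input={"Key": "sources/my-video.mp4"},
    OutputKeyPrefix="hls/my-video/",                  # common prefix for all outputs
    Outputs=[
        {"Key": "video-1m", "PresetId": "1351620000001-200030", "SegmentDuration": "10"},
        {"Key": "video-2m", "PresetId": "1351620000001-200010", "SegmentDuration": "10"},
    ],
    Playlists=[
        {"Name": "master", "Format": "HLSv3", "OutputKeys": ["video-1m", "video-2m"]}
    ],
)
print(job["Job"]["Id"])
```

The resulting master.m3u8 in the output bucket can then be served directly from S3 or, as recommended above, through a CDN such as Amazon CloudFront.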
/elastictranscoder/faqs/;Can I stream HLS directly from S3?;Yes, you can play your HLS renditions directly from S3 by pointing the player to the M3U8 playlist. We recommend you use a CDN such as Amazon CloudFront, which provides a better end-user experience with improved scalability and performance. See Configuring On-Demand Apple HTTP Live Streaming (HLS). /elastictranscoder/faqs/;Do I need a streaming server to deliver my Smooth Streaming content?;Usually playing back Smooth Streaming requires an IIS origin server, and you cannot stream directly from S3. However, if you distribute your content with CloudFront, you can simply configure a CloudFront Smooth Streaming distribution, eliminating the need for a streaming server. See Configuring On-Demand Smooth Streaming. /elastictranscoder/faqs/;Why is the codec parameter that I want to change not exposed by the API?;In designing Amazon Elastic Transcoder, we wanted to create a service that was simple to use. Therefore, we expose the most frequently used codec parameters. If there is a parameter that you require, please let us know through our forum. /elastictranscoder/faqs/;What settings do I use to preserve the dimensions of my video?;"Use the following settings in your custom preset: MaxWidth: auto; MaxHeight: auto; SizingPolicy: ShrinkToFit; PaddingPolicy: NoPad; DisplayAspectRatio: auto" /elastictranscoder/faqs/;How do I scale my output to a specified width and set the height to preserve the aspect ratio of the source content?;"Use the following settings in your custom preset: MaxWidth: [Desired Width]; MaxHeight: auto; SizingPolicy: Fit; PaddingPolicy: NoPad; DisplayAspectRatio: auto" /elastictranscoder/faqs/;How do I limit the height or width of a video without stretching the output to fit my set limit while preserving the input aspect ratio?;"Use the following settings in your custom preset: MaxWidth: [Desired Width Limit]; MaxHeight: [Desired Height Limit]; SizingPolicy: ShrinkToFit; PaddingPolicy: NoPad; DisplayAspectRatio: auto" /elastictranscoder/faqs/;"What settings should I use to create a preset that causes the output video to fill the screen without distortion, if necessary cropping some of the edges (""center cut"")?";"Use the following settings in your custom preset: MaxWidth: [Desired Width]; MaxHeight: [Desired Height]; SizingPolicy: Fill; PaddingPolicy: NoPad; DisplayAspectRatio: auto" /elastictranscoder/faqs/;"What settings should I use to create a preset that causes the output video to fill the screen without cropping any image area, if necessary distorting the image (""squeeze"" or ""stretch"")?";"Use the following settings in your custom preset: MaxWidth: [Desired Width]; MaxHeight: [Desired Height]; SizingPolicy: Stretch; PaddingPolicy: NoPad; DisplayAspectRatio: auto" /elastictranscoder/faqs/;How do I make my watermark scale with my video?;In the watermark settings of your transcoding preset, set the HorizontalAlign, VerticalAlign, and Target parameters as desired. Then set the HorizontalOffset and VerticalOffset with relative parameters. For example, to place the watermark 10% away from the edges, set both values to 10%. /elastictranscoder/faqs/;How do I avoid distorting my watermark?;If you do not want your watermark to be distorted when the video output is resized, set the SizingPolicy to ShrinkToFit while setting MaxWidth and MaxHeight to 100%. With these settings, Elastic Transcoder will never up-sample, expand, or distort your watermark. 
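As a rough illustration of the first settings answer above (preserving the source dimensions with ShrinkToFit), the same values can be supplied when creating a custom preset through the API. The preset name, bitrates, codec options, and other values here are assumptions chosen only to make the sketch self-contained; adjust them for your own outputs.

```python
# Hypothetical custom preset that keeps the input width, height, and aspect
# ratio, following the MaxWidth/MaxHeight/SizingPolicy settings described above.
import boto3

et = boto3.client("elastictranscoder")

preset = et.create_preset(
    Name="preserve-source-dimensions",                # hypothetical preset name
    Description="H.264/AAC MP4 that preserves the source dimensions",
    Container="mp4",
    Video={
        "Codec": "H.264",
        "CodecOptions": {"Profile": "main", "Level": "3.1", "MaxReferenceFrames": "3"},
        "KeyframesMaxDist": "90",
        "FixedGOP": "false",
        "BitRate": "2200",                            # assumed target bitrate (kbps)
        "FrameRate": "auto",
        "MaxWidth": "auto",
        "MaxHeight": "auto",
        "SizingPolicy": "ShrinkToFit",
        "PaddingPolicy": "NoPad",
        "DisplayAspectRatio": "auto",
    },
    Audio={"Codec": "AAC", "SampleRate": "44100", "BitRate": "128", "Channels": "2"},
    Thumbnails={
        "Format": "png",
        "Interval": "60",
        "MaxWidth": "auto",
        "MaxHeight": "auto",
        "SizingPolicy": "ShrinkToFit",
        "PaddingPolicy": "NoPad",
    },
)
print(preset["Preset"]["Id"])
```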
/elastictranscoder/faqs/;What are the settings for placing my watermark over the active video region rather than over the matte?;To place your watermark so that it is always over the active video content, use relative size for the MaxWidth and MaxHeight settings, and set the Target to be Content. For example, to fix the watermark size to 10% of the active output video size, set both MaxWidth and MaxHeight to 10%. /elastictranscoder/faqs/;How do I use multiple watermarks?;Presets specify placement settings for up to four watermarks. Each setting has an associated watermark ID. You can create a job with up to four watermarks by specifying an array of watermarks in the job creation call. Each element of the array specifies the Id of the watermark setting to use, and the watermark image file. /elastictranscoder/faqs/;Can I generate NTSC or PAL outputs?;Yes, you can generate both NTSC and PAL compliant outputs. The easiest way to generate NTSC and PAL compliant outputs is to specify the NTSC or PAL system preset when creating a transcoding job. Via the console, this is done using the preset drop-down for each output of your transcoding job. /elastictranscoder/faqs/;How much does Amazon Elastic Transcoder cost to use?;Pricing for Amazon Elastic Transcoder is described here. Our pricing does not require any commitment or minimum volume of jobs. We also offer a free tier that enables you to explore the service and transcode up to 20 minutes of audio-only output, 20 minutes of SD video output and 10 minutes of HD video output a month free of charge. To see terms and additional information on the free tier program, please visit the AWS Free Usage Tier page. /elastictranscoder/faqs/;How are jobs charged?;Transcoding jobs are charged according to the duration of the content. For example, media that lasts 60 minutes costs twice as much as media that lasts 30 minutes. High definition (HD) content costs twice as much as standard definition (SD). Audio-only output is priced lower than standard definition (SD) output. The minimum charge for a job is one minute. We do not charge for thumbnail generation, for API calls, or for Amazon S3 transfer within the same region. For more information, please refer to the Amazon Elastic Transcoder pricing page. /elastictranscoder/faqs/;How are fractional minutes charged?;Fractional minutes are rounded up. For example, if your output duration is less than a minute, you are charged for one minute. If your output duration is 1 minute and 10 seconds, you are charged for 2 minutes. /elastictranscoder/faqs/;Do you charge for failed jobs?;Our policy is to forgive customers for failed jobs unless the number of failed jobs becomes excessive. /elastictranscoder/faqs/;Is it cheaper to use multiple outputs per job than to use separate jobs?;When you use multiple outputs per job, transcoding costs remain the same as if you had submitted multiple jobs for each output. However, the processing time will be quicker for larger jobs since the source file is only being transferred from your S3 bucket to Amazon Elastic Transcoder once. /elastictranscoder/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /elastictranscoder/faqs/;Are my media assets secure?;You are in complete control of your media assets because they are stored in your own Amazon S3 buckets.
You use IAM roles to grant us access to your specific Amazon S3 bucket. /elastictranscoder/faqs/;Can I set S3 permissions and storage options?;Amazon Elastic Transcoder enables you to specify which users, groups, and canonical IDs you want to grant access to your transcoded files, thumbnails and playlists, as well as the type of access that you want them to have. You can also specify whether to store transcoded content using Standard or Reduced Redundancy Storage. Please refer to Amazon Elastic Transcoder documentation for further information. /elastictranscoder/faqs/;Can I use encrypted input media files or encrypt my output files?;Yes. You can use encrypted mezzanine files as input to Amazon Elastic Transcoder, or protect your transcoded files by letting the service encrypt the output. Supported options range from fully managed integration with Amazon S3's Server-Side Encryption, to keys that you manage on your own and protect using AWS Key Management Service (KMS). Furthermore, encryption support is not limited to your video files. You can protect thumbnails, captions, and even watermarks. /elastictranscoder/faqs/;Do you support DRM?;Yes, we support packaging for Microsoft PlayReady DRM. Our Smooth Streaming packaging is compatible with the Microsoft PIFF 1.1, and our HLSv3 packaging is compatible with the Discretix 3.0.1 specification for Microsoft PlayReady. /elastictranscoder/faqs/;Can I get a history of all Amazon Elastic Transcoder API calls made on my account for security, operational or compliance auditing?;Yes. To start receiving a history of all Elastic Transcoder API calls made on your account, you simply turn on AWS CloudTrail in CloudTrail's AWS Management Console. For more information, visit the AWS CloudTrail home page. /elastictranscoder/faqs/;Do I need to setup AWS KMS before using the Elastic Transcoder encryption and DRM packaging features?;Yes. You must first create a master AWS KMS key and add the role used by Elastic Transcoder as an authorized user of that key. Elastic Transcoder uses your KMS master key to protect the data encryption keys that it exchanges with you. /elastictranscoder/faqs/;Can I save the keys used to encrypt my HLS streams to S3?;Yes. If you elect to store your keys in S3, Elastic Transcoder will write your keys to the same folders as your playlist files, and your keys will be protected using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3). /elastictranscoder/faqs/;Can I rotate the keys used for HLS with AES-128 encryption?;Key rotation is not supported. All renditions and file segments share the same key. /mediaconvert/faqs/;What is AWS Elemental MediaConvert?;What is AWS Elemental MediaConvert? AWS Elemental MediaConvert is a file-based video processing service that formats and compresses offline content for delivery to televisions or connected devices. With AWS Elemental MediaConvert's high-quality video transcoding, you can create on-demand video assets for playback on virtually any device. The service combines advanced video and audio capabilities with a simple web services interface and pay-as-you-go pricing. With AWS Elemental MediaConvert, you can focus on delivering compelling media experiences without having to worry about the complexity of building and operating your own video processing infrastructure. /mediaconvert/faqs/;What is file-based video processing?;What is file-based video processing? 
In a video processing workflow, a file-based video transcoding solution processes video files, creating compressed versions of the original content to reduce its size, change its format, or increase playback device compatibility. A file-based video transcoding solution can convert any video input source, ranging from high-quality studio masters to videos captured on a mobile device, and produce content ready for distribution to viewers. File-based transcoding processes media as fast as the underlying infrastructure allows, which may be faster or slower than real-time depending on the input type, output format(s), and transcoder settings. File-based transcoding solutions are also expected to handle packaging and protection of content by preparing assets for on-demand delivery to multiple device types with integrated digital rights management (DRM). /mediaconvert/faqs/;Who can use AWS Elemental MediaConvert?;Who can use AWS Elemental MediaConvert? AWS Elemental MediaConvert is designed to meet the needs of all types of video providers. Video is increasingly important to companies from small businesses to global enterprises, as well as government agencies, nonprofit organizations, and schools, all of which can use AWS Elemental MediaConvert to improve the efficiency and effectiveness of their video operations. Companies in the media and entertainment industry, including film and TV studios, broadcast networks, pay TV channels and system operators, programming distributors, internet service providers, online video platforms, and professional sports leagues and teams, can all benefit from AWS Elemental MediaConvert as part of their broadcast, streaming, and over-the-top video offerings. /mediaconvert/faqs/;Why should I choose AWS Elemental MediaConvert over an on-premises file-based encoding solution?;Why should I choose AWS Elemental MediaConvert over an on-premises file-based encoding solution? On-premises solutions have long lead times to become production-ready, have limited agility, and lack the scale to meet demand as workloads change over time. These solutions require customers to own, host, and manage all of the required infrastructure, which equates to significant capital investment up front, and may require interfacing with multiple vendors for billing and support, which is prohibitive for individuals and small companies. Long term, the total cost of ownership only increases when factoring in ongoing expense to update, patch, and maintain on-premises systems. AWS Elemental MediaConvert, on the other hand, is a pay-as-you-go service with predictable costs that offers a high degree of availability, reliability, and scalability without the burden of infrastructure management. Because AWS Elemental MediaConvert is an AWS service, it’s simple for video providers to build end-to-end workflows using other AWS services, and benefit from integrated billing, monitoring, and support. /mediaconvert/faqs/;Why should I choose AWS Elemental MediaConvert over other file-based cloud encoding services?;Why should I choose AWS Elemental MediaConvert over other file-based cloud encoding services? AWS Elemental MediaConvert gives you access to a full broadcast-grade feature set. Other cloud-based video processing services often do not provide the premium capabilities that content owners and distributors require. 
With AWS Elemental MediaConvert you can take advantage of features to create broadcast-quality video including static graphic overlays, audio loudness normalization, ad insertion via SCTE-35 support, manifest decoration, DRM integration, and broadcast and OTT closed captioning. AWS Elemental MediaConvert leverages video processing technology built from the ground up by AWS Elemental and used by video providers of all kinds today. It includes ingest and output support for the highest quality codecs (MPEG-2, AVC, AV1, and HEVC including support for 10-bit 4:2:2 color sampling), extensive adaptive bitrate packaging formats (CMAF, HLS, DASH, and MSS), and processing and conversion of HDR content (Dolby Vision, HDR 10, and HLG BT.2020). With AWS Elemental MediaConvert, you can get started with simple jobs while gaining access to a deep set of configurable parameters to precisely control video output quality as required. AWS Elemental MediaConvert charges jobs based on the duration and the characteristics of the outputs generated, which means that you always only pay for what you use. Finally, AWS Elemental MediaConvert scales elastically with the rate of incoming jobs so you benefit from short turnaround times even when your incoming load varies. /mediaconvert/faqs/;Is AWS Elemental MediaConvert a standalone service or is it dependent on other services?;Is AWS Elemental MediaConvert a standalone service or is it dependent on other services? AWS Elemental MediaConvert can function as a standalone service or within a larger video workflow that includes other AWS Media Services. AWS Media Services are a family of services that form the foundation of cloud-based video workflows, which offer customers the capabilities they need to create, package, and deliver video, all accessible through the AWS Management Console and APIs. Working together with other AWS services, AWS Media Services offer a complete solution for processing and delivery of live or on-demand video content to consumers around the world, cost efficiently and with high quality. /mediaconvert/faqs/;Why should I choose AWS Elemental MediaConvert over Amazon Elastic Transcoder?;Why should I choose AWS Elemental MediaConvert over Amazon Elastic Transcoder? AWS Elemental MediaConvert should be the first option you consider for file-based video processing. MediaConvert provides a comprehensive suite of transcoding features that addresses the needs of the majority of use cases. It is optimized to reduce turnaround time and improve scalability which allows you to process more files in parallel. And you benefit from a flexible pricing structure where you only pay for functionality that you use. You can also choose to use reserved pricing, to get a discount with daily encode jobs, or to use Accelerated Transcoding, to complete jobs up to 25 times faster than normal transcodes. Finally, MediaConvert continues to add new capabilities including video quality improvements, codecs, and add-on features that won't be available in Elastic Transcoder. /mediaconvert/faqs/;Can I use AWS Elemental MediaConvert for live video?;Can I use AWS Elemental MediaConvert for live video? AWS Elemental MediaConvert is a file-based transcoding service and does not support live video. For encoding live video for broadcast and streaming to any device, you should use AWS Elemental MediaLive. /medialive/faqs/;What is AWS Elemental MediaLive?;What is AWS Elemental MediaLive? 
AWS Elemental MediaLive is a cloud-based live video encoding service that offers fast, reliable and easy-to-use delivery of high-quality live video streams without the need to manage infrastructure. AWS Elemental MediaLive streamlines live video operations by automating the configuration and management of ingest and encoding components for highly reliable delivery of live streams. The service provides broadcast quality features, configurable capability and support for industry standard formats and technologies. Combining broadcast-grade encoding capabilities with the scale and elasticity of AWS, video providers can efficiently deliver live streams to their audiences and focus on their content and differentiated viewing experiences. /medialive/faqs/;What is real-time live video encoding?;What is real-time live video encoding? In a video processing workflow, an encoder compresses a video stream, taking high-quality video as an input and outputting smaller-sized versions, with as little loss as possible to the resulting picture quality. While this is a complicated task when working with pre-recorded video files, for live video it is made even more difficult as the video processing needs to work in real-time: the encoder must be powerful enough to produce exactly one second of video every second it runs without fail so that viewers see an uninterrupted video stream. /medialive/faqs/;Who can use AWS Elemental MediaLive?;Who can use AWS Elemental MediaLive? AWS Elemental MediaLive is designed to meet the needs of all types of video providers. Video is increasingly important to companies from small businesses to global enterprises, as well as government agencies, nonprofit organizations, and schools, all of which can use AWS Elemental MediaLive to improve the efficiency and effectiveness of their video operations. Companies in the media and entertainment industry, including film and TV studios, broadcast networks, pay TV channels and system operators, programming distributors, internet service providers, online video platforms, and professional sports leagues and teams, can all benefit from AWS Elemental MediaLive as part of their broadcast, streaming, and over-the-top video offerings. /medialive/faqs/;Why should I choose AWS Elemental MediaLive over an on-premises live encoding solution?;Why should I choose AWS Elemental MediaLive over an on-premises live encoding solution? AWS Elemental MediaLive provides a secure, flexible and highly available live encoding solution. It enables push-button deployment of live channels and handles resource provisioning, service orchestration, scaling, healing, resiliency failover, monitoring, and reporting. AWS Elemental MediaLive enables users to set up a channel in minutes and does all the heavy lifting behind the scenes to provision and start the various resources required. /medialive/faqs/;Why should I choose AWS Elemental MediaLive over other cloud-based live encoding services?;Why should I choose AWS Elemental MediaLive over other cloud-based live encoding services? AWS Elemental MediaLive gives you access to a broadcast-grade feature set, complete control over encoding settings, and is built with industry-leading technology for encoding that supports standard codecs (MPEG-2, AVC, HEVC, etc.), resolutions (SD, HD), and broadcast features (ad insertion using the SCTE35 standard, captioning, audio descriptors, loudness correction, etc.). AWS Elemental MediaLive is easy to use and scales with AWS resources globally.
/medialive/faqs/;Does AWS Elemental MediaLive support statistical multiplexing (statmux)?;Does AWS Elemental MediaLive support statistical multiplexing (statmux)? Yes. Statmux for MediaLive enables broadcasters and content owners to implement flexible and scalable workflows in AWS, generating content for distribution to headends via traditional broadcast methods. Content owners and broadcasters can use MediaLive to support distribution systems that rely on statistical multiplexing. Combined with the advanced video encoding features and built-in resiliency of MediaLive, Statmux extracts more bandwidth capacity from the network, ensures reliable 24/7 operations, and reduces total cost of ownership for linear video delivery when deploying hundreds of channels. /medialive/faqs/;Is AWS Elemental MediaLive a standalone service or is it dependent on other services?;Is AWS Elemental MediaLive a standalone service or is it dependent on other services? AWS Elemental MediaLive can work as a standalone service or within a larger video workflow that includes other AWS Media Services. AWS Media Services are a family of services that form the foundation of cloud-based video workflows, which offer customers the capabilities they need to create, package, and deliver video, all accessible through the AWS Management Console and APIs. Working together with other AWS services, AWS Media Services offer a complete solution for processing and delivery of live or on-demand video content to consumers around the world, cost efficiently and with high quality. /medialive/faqs/;How does AWS Elemental MediaLive ensure the security of channels in progress?;How does AWS Elemental MediaLive ensure the security of channels in progress? AWS Elemental MediaLive automatically protects video content as it moves between components by natively employing AWS security capabilities. The service uses customer identity and access management (IAM) roles and security groups within their own AWS environments. You can also add input security groups to whitelist IP addresses for input types that push content to the service. /medialive/faqs/;How does AWS Elemental MediaLive ensure reliability of service?;How does AWS Elemental MediaLive ensure reliability of service? AWS Elemental MediaLive automates the provisioning, configuration and management of AWS-based resources for ingesting and encoding live video. The service gives customers highly reliable live video encoding without managing infrastructure. When you create a channel in AWS Elemental MediaLive, the service deploys redundant infrastructure in two AWS availability zones (AZs). Each component is monitored for health and the service detects any degraded resources and replaces them with new ones. /medialive/faqs/;How is AWS Elemental MediaLive billed?;"How is AWS Elemental MediaLive billed? Pricing is based on a straightforward per-minute model that simplifies budgeting and allows users to forecast exactly what they will spend on each channel. The pricing scales as more inputs/outputs are selected and is pay-as-you-go based on the codec (MPEG-2, AVC, HEVC); resolution (SD, HD, UHD), bitrate (less than 10Mbps, between 10 and 20Mbps, and over 20Mbps); and frame-rate (less than 30fps, between 30 and 60 fps, and over 60fps) you use. There are no minimum commitments or long-term contracts. In addition to on demand pricing, there is also a monthly option with an annual commitment for 24x7 channels. Visit the AWS Elemental MediaLive Pricing page for more information." 
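As a sketch of the input security groups mentioned above, the following Python (boto3) example whitelists an example CIDR range and attaches it to a hypothetical RTMP push input; the input name, CIDR block, and stream names are illustrative assumptions, not values from this FAQ.
import boto3

medialive = boto3.client("medialive", region_name="us-west-2")

# Whitelist the IP range of the contribution encoder that will push to MediaLive.
sg = medialive.create_input_security_group(
    WhitelistRules=[{"Cidr": "203.0.113.0/24"}]    # example CIDR
)

# Create an RTMP push input protected by that security group; two destinations
# correspond to the redundant pipelines MediaLive runs across two Availability Zones.
rtmp_input = medialive.create_input(
    Name="studio-rtmp-push",                       # hypothetical input name
    Type="RTMP_PUSH",
    InputSecurityGroups=[sg["SecurityGroup"]["Id"]],
    Destinations=[{"StreamName": "live/primary"},
                  {"StreamName": "live/backup"}],
)
print([d["Url"] for d in rtmp_input["Input"]["Destinations"]])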
/medialive/faqs/;What are the resolutions that correspond to SD, HD, and UHD?;"What are the resolutions that correspond to SD, HD, and UHD? SD is less than 1280x720; HD is equal to or greater than 1280x720, up to and including 1920x1080; and UHD is greater than 1920x1080, up to 4096x2160." /medialive/faqs/;What options does AWS Elemental MediaLive support for ingesting live video?;"What options does AWS Elemental MediaLive support for ingesting live video? AWS Elemental MediaLive will accept video using any of the following standards: RTP with forward error correction (FEC); RTMP (in a push or pull mode); and HLS. The supported codecs are MPEG-2, h.264/AVC, and h.265/HEVC. The service can also be paired with AWS Elemental Live encoding appliances that can be used on-premises at production facilities or remote event venues to ingest content for delivery as an input to your channels. These appliances support a range of compressed and uncompressed live input sources, including SDI, HDMI, ASI, MPEG-TS over IP, and SDI over IP." /medialive/faqs/;What options does AWS Elemental MediaLive support for outputting live video?;What options does AWS Elemental MediaLive support for outputting live video? AWS Elemental MediaLive supports HLS, RTP, RTMP/S, and Microsoft Smooth Streaming (MSS) streaming outputs. It will also archive to file. AWS Elemental MediaLive can deliver to an origin and just-in-time packaging service like AWS Elemental MediaPackage or a 3rd party packager, and it can deliver to AWS Elemental MediaStore for simple origination. /medialive/faqs/;How do I use AWS Elemental MediaLive with my other workflow vendors?;How do I use AWS Elemental MediaLive with my other workflow vendors? AWS Elemental MediaLive provides a robust and flexible API that ecosystem partners such as content management systems (CMS) and packagers can integrate with. AWS Elemental MediaLive runs within customers’ AWS accounts, so other services, components or software that runs or interfaces with customer-maintained VPCs are also easily incorporated in video workflows. /medialive/faqs/;How do I use AWS Elemental MediaLive with AWS Elemental MediaPackage?;How do I use AWS Elemental MediaLive with AWS Elemental MediaPackage? AWS Elemental MediaLive can work with a range of just-in-time packaging products including AWS Elemental MediaPackage. To configure an AWS Elemental MediaLive channel with AWS Elemental MediaPackage, simply create a channel with AWS Elemental MediaPackage to get a destination address, then select HLS WebDAV as the output for your AWS Elemental MediaLive channel profile and add the destination address. With AWS Elemental MediaPackage you can create output groups for multiple delivery protocols like HLS and DASH, add DRM and content protection, and a live archive window for DVR-like features. /medialive/faqs/;Which DRM providers does AWS Elemental MediaLive support?;Which DRM providers does AWS Elemental MediaLive support? AWS Elemental MediaLive does not support DRM providers on its own. It can be used with AWS Elemental MediaPackage and a published DRM API to work with several DRM providers. This way, a combination of AWS Elemental MediaLive and AWS Elemental MediaPackage can efficiently encrypt and protect multiple channels using multiple DRM standards and multiple DRM providers. /mediapackage/faqs/;What is AWS Elemental MediaPackage?;What is AWS Elemental MediaPackage?
AWS Elemental MediaPackage is a highly scalable video origination and just-in-time packaging service that helps video providers securely, reliably and cost-efficiently package and deliver live video streams. Video providers can improve the viewing experience, easily integrate advanced, broadcast-grade capabilities, increase workflow resiliency, and better protect and monetize their multiscreen content. The service uses just-in-time packaging to cost effectively output multiple standards-based streaming protocols and DRM types in different combinations to support an array of multiscreen devices. It supports consistent quality of service by elastically scaling to meet demand and manage failover within a highly available managed service. AWS Elemental MediaPackage does not limit customers’ choices of video players, CDNs, or ad providers, and works seamlessly with other AWS services to build a solution for high-quality, resilient live streaming for 24/7 channels or live events. /mediapackage/faqs/;What is just-in-time packaging and origination?;What is just-in-time packaging and origination? In a video processing workflow, a just-in-time packaging and origination product customizes live video streams or VOD assets for delivery in a format compatible with the device making the request. An advanced origin is used to convert incoming content on-the-fly from a single format to multiple delivery formats while applying DRM standards, allowing it to serve streaming video content in response to requests from users to devices such as tablets, smartphones, connected TVs, or set-top boxes. /mediapackage/faqs/;Who can use AWS Elemental MediaPackage?;Who can use AWS Elemental MediaPackage? AWS Elemental MediaPackage is designed to meet the needs of all types of video providers. Video is increasingly important to companies from small businesses to global enterprises, as well as government agencies, nonprofit organizations, and schools, all of which can use AWS Elemental MediaPackage to improve the efficiency and effectiveness of their video operations. Companies in the media and entertainment industry, including film and TV studios, broadcast networks, pay TV channels and system operators, programming distributors, internet service providers, online video platforms, and professional sports leagues and teams, can all benefit from AWS Elemental MediaPackage as part of their broadcast, streaming, and over-the-top video offerings. /mediapackage/faqs/;How does AWS Elemental MediaPackage compare to Cloud Streaming Services?;How does AWS Elemental MediaPackage compare to Cloud Streaming Services? AWS Elemental MediaPackage is available in multiple regions, while with other cloud services, some options are only available in a subset of all the AWS regions on which their service is deployed. AWS Elemental MediaPackage can record all the renditions in an adaptive bitrate (ABR) stream and supports up to 4K resolution with high frame rate using HEVC, while other cloud services restrict recordings to only the highest bitrate. Competing services may also provide a broad set of limitations on the delivered content, including limiting resolution to 1080p and frame rate to 30 frames per second. AWS Elemental MediaPackage also supports a wide range of OTT ABR standards and provides standards-based subtitles for all OTT formats, which is broader support than other cloud services.
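To illustrate the origination and just-in-time packaging workflow described above, here is a minimal Python (boto3) sketch that creates a MediaPackage channel and an HLS origin endpoint. The channel ID, endpoint ID, segment duration, playlist window, and startover window are assumed example values, not settings taken from this FAQ.
import boto3

mediapackage = boto3.client("mediapackage", region_name="us-west-2")

# Create a channel that an encoder (for example, MediaLive) will push live HLS into.
channel = mediapackage.create_channel(Id="sports-live", Description="Example live channel")

# Add an HLS origin endpoint that packages the ingested stream just-in-time for players.
endpoint = mediapackage.create_origin_endpoint(
    ChannelId=channel["Id"],
    Id="sports-live-hls",
    HlsPackage={
        "SegmentDurationSeconds": 6,
        "PlaylistWindowSeconds": 60,
    },
    StartoverWindowSeconds=3600,   # one hour of DVR-like startover, as an example
)
print(endpoint["Url"])             # playback URL to put behind your CDN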
/mediapackage/faqs/;How does AWS Elemental MediaPackage compare to third-party packaging software running on AWS?;"How does AWS Elemental MediaPackage compare to third-party packaging software running on AWS? AWS Elemental MediaPackage packages and archives in the same workflow using the same formats that are streamed; with other providers, you need a separate workflow. AWS Elemental MediaPackage provides more control over what is exposed to subscribers, helping restrict access. In addition, AWS Elemental MediaPackage offers an embedded redundancy model, removing the complexity of implementing and managing instances from the customer, and separates ingest from egress for even greater scalability and redundancy. It is natively designed as a service, not as software running on a virtual server, and customers don't have to monitor the load on their EC2 instances or manually scale to accommodate more channels and more end-user connections." /mediapackage/faqs/;How does AWS Elemental MediaPackage compare to packaging services from Content Delivery Networks (CDNs)?;How does AWS Elemental MediaPackage compare to packaging services from Content Delivery Networks (CDNs)? AWS Elemental MediaPackage offers highly customizable packaging options and parameters, providing flexibility for streaming protocols, segment sizes, manifest manipulation, subtitles, and other metadata handling along with broad DRM support. With some CDNs, the options for packaging are limited to a given set of parameters. Unlike CDN-based packaging services that only work with their specific CDN, AWS Elemental MediaPackage is CDN agnostic, offering customers an easy way to implement a multi-CDN strategy and improve audience quality of service using third-party tools. /mediapackage/faqs/;Can I use a CDN other than Amazon CloudFront with AWS Elemental MediaPackage?;Can I use a CDN other than Amazon CloudFront with AWS Elemental MediaPackage? Yes. A customer can connect any CDN that delivers content in pull mode as a CDN output from AWS Elemental MediaPackage. Using Amazon CloudFront provides the benefit of staying in the AWS Cloud, and saves on data transfer rates compared to external 3rd-party CDNs. However, AWS Elemental MediaPackage is designed to work with Amazon CloudFront and non-Amazon CDNs, giving customers the ability to run multi-CDN or hybrid-CDN strategies. /mediapackage/faqs/;Is AWS Elemental MediaPackage a standalone service or is it dependent on other services?;Is AWS Elemental MediaPackage a standalone service or is it dependent on other services? AWS Elemental MediaPackage can function as a standalone service or within a larger video workflow that includes other AWS Media Services. AWS Media Services are a family of services that form the foundation of cloud-based video workflows, which offer customers the capabilities they need to create, package, and deliver video, all accessible through the AWS Management Console and APIs. Working together with other AWS services, AWS Media Services offer a complete solution for processing and delivery of live or on-demand video content to consumers around the world, cost efficiently and with high quality. /mediapackage/faqs/;Does AWS Elemental MediaPackage work with AWS Elemental MediaLive?;Does AWS Elemental MediaPackage work with AWS Elemental MediaLive? AWS Elemental MediaLive is deeply integrated with AWS Elemental MediaPackage so customers can easily combine live encoding with content origination, dynamic packaging, and live-to-VOD capabilities.
To configure an AWS Elemental MediaLive channel with AWS Elemental MediaPackage, simply create a channel with AWS Elemental MediaPackage to get a destination address, then select HLS WebDAV as the output for your AWS Elemental MediaLive channel profile and add the destination address. With AWS Elemental MediaPackage you can create output groups for multiple delivery protocols like HLS and DASH, add DRM and content protection, and a live archive window for DVR-like features. /mediapackage/faqs/;What is the difference between AWS Elemental MediaPackage and AWS Elemental MediaStore?;What is the difference between AWS Elemental MediaPackage and AWS Elemental MediaStore? AWS Elemental MediaPackage provides just-in-time packaging and DVR-like features as well as origination for live video streams. If a customer does not require packaging to different or multiple formats, DRM, or DVR-like features, they can use AWS Elemental MediaStore as a pass-through video origination and storage service that offers the high performance and immediate consistency required for delivering media combined with the security and durability that AWS offers across its services. /mediapackage/faqs/;Does AWS Elemental MediaPackage work with on-premises AWS Elemental Live encoders?;Does AWS Elemental MediaPackage work with on-premises AWS Elemental Live encoders? MediaPackage supports HLS as an input over HTTPS. On-premises AWS Elemental Live customers can benefit from the improved scalability and resiliency of MediaPackage compared to an on-premises origin, even if they run encoding on site. If you use Elemental Live appliances on-premises, you can use the authenticated WebDAV HLS output to feed MediaPackage. /mediapackage/faqs/;How can I connect AWS Elemental MediaPackage to a DRM Key Management System?;How can I connect AWS Elemental MediaPackage to a DRM Key Management System? AWS Elemental MediaPackage has a published DRM API based on the Content Protection Information Exchange (CPIX) standard that makes integrating with DRM Key providers easier. Many providers, like Verimatrix, Irdeto, BuyDRM, castLabs, EZDRM, and Conax, have already implemented the API, with others coming on board soon. /mediastore/faqs/;What is AWS Elemental MediaStore?;What is AWS Elemental MediaStore? AWS Elemental MediaStore is a media origin and storage service that offers the performance, predictable low latency, and consistency required for delivery and processing workloads like live streaming video. The service provides a write-behind cache, designed for performance, in front of object storage. It is an inexpensive method for pass-through and low-latency segmented content delivery, with predictable pay-as-you-go pricing. /mediastore/faqs/;Who can use AWS Elemental MediaStore?;Who can use AWS Elemental MediaStore? AWS Elemental MediaStore is designed to meet the needs of all types of content providers. Video is increasingly important to companies from small businesses to global enterprises, as well as government agencies, nonprofit organizations, and schools, all of which can use AWS Elemental MediaStore to improve the efficiency and effectiveness of their video operations.
Companies in the media and entertainment industry, including film and TV studios, broadcast networks, pay TV channels and system operators, programming distributors, internet service providers, online video platforms, and professional sports leagues and teams, can all benefit from AWS Elemental MediaStore as part of their broadcast, streaming, and over-the-top video offerings. /mediastore/faqs/;How does AWS Elemental MediaStore improve performance?;How does AWS Elemental MediaStore improve performance? When you write content to AWS Elemental MediaStore, it is automatically held in a replicated cache for the first few minutes after creation, and again after each update. This replicated cache gives performance, predictable low latency, and consistency, even with the high request loads and with the frequent updates common with files like streaming video manifests during live video streams. /mediastore/faqs/;What is the difference between AWS Elemental MediaStore and AWS Elemental MediaPackage?;What is the difference between AWS Elemental MediaStore and AWS Elemental MediaPackage? AWS Elemental MediaPackage provides just-in-time packaging and live-to-VOD features as well as origination for live streams. If multiple formats, DRM, or DVR-like features are required, you can use AWS Elemental MediaPackage. If the live streams are already in the correct formats and have any required DRM applied, you can use AWS Elemental MediaStore as a pass-through video origination and storage service that offers the performance and consistency required for delivering live streaming media combined with the security and durability AWS offers across its services. /mediastore/faqs/;Is AWS Elemental MediaStore a standalone service or is it dependent on other services?;Is AWS Elemental MediaStore a standalone service or is it dependent on other services? AWS Elemental MediaStore can function as a standalone service or within a larger video workflow that includes other AWS Media Services. AWS Media Services are a family of services that form the foundation of cloud-based video workflows, which offer customers the capabilities they need to create, package, and deliver video, all accessible through the AWS Management Console and APIs. Working together with other AWS services, AWS Media Services offer a complete solution for processing and delivery of live or on-demand video content to consumers around the world, cost efficiently and with high quality. /mediastore/faqs/;How does AWS Elemental MediaStore work with AWS Elemental MediaLive?;How does AWS Elemental MediaStore work with AWS Elemental MediaLive? AWS Elemental MediaLive is a video service that allows easy and reliable creation of live outputs for broadcast and streaming delivery at scale. An AWS Elemental MediaStore container can be selected as the output destination for an AWS Elemental MediaLive channel. /mediastore/faqs/;How is AWS Elemental MediaStore billed?;How is AWS Elemental MediaStore billed? With AWS Elemental MediaStore, you are charged a per-GB Media Ingest Optimization Fee when content enters the service and a per-GB price for content Storage (per month) for content that you keep in the service. Request costs are based on the request type and are charged on the quantity of requests. Visit the AWS Elemental MediaStore Pricing page for more information. /mediastore/faqs/;Which media workflow use cases are best suited to AWS Elemental MediaStore?;Which media workflow use cases are best suited to AWS Elemental MediaStore?
Serving live adaptive bit-rate video streams that require an HTTP origin is an ideal use case for AWS Elemental MediaStore. With predictable low latency and performance along with immediate read-after-write and read-after-update consistency, AWS Elemental MediaStore is optimized to originate fragmented video and ensures that the latest versions of manifests for live video streams, which are constantly updated yet retain the same name, are always the ones that get delivered to players. And requests for video segments are served quickly and reliably, providing a better quality of experience with less buffering. /mediastore/faqs/;Can on-premises workflows benefit from AWS Elemental MediaStore?;Can on-premises workflows benefit from AWS Elemental MediaStore? Yes, on-premises encoders such as AWS Elemental Live can write to MediaStore and get the benefits of high-performance media origination. /mediastore/faqs/;What is the durability of content written to AWS Elemental MediaStore?;What is the durability of content written to AWS Elemental MediaStore? Objects ingested into AWS Elemental MediaStore are sent to a replicated write-behind cache that transitions objects to storage backed by Amazon S3 shortly after they are written. While objects are normally transitioned quickly to Amazon S3, there is some chance that this process could be delayed or not occur at all. If your workload requires immediate durability, we recommend using Amazon S3 directly. /mediatailor/faqs/;What is AWS Elemental MediaTailor?;AWS Elemental MediaTailor is a channel assembly and personalized ad insertion service that lets video providers create live OTT (internet delivered) channels using existing video content and monetize those channels, or other live streams and VOD content, with personalized advertising. Live streams maintain a TV-like experience across multiscreen video applications. With MediaTailor, virtual live channels are created without the expense, complexity, and management of real-time live encoding. Adverts are seamlessly stitched into the content and can be tailored to individual viewers, maximizing the monetization opportunity for every ad break and mitigating ad blocking schemes. /mediatailor/faqs/;Who can use AWS Elemental MediaTailor?;AWS Elemental MediaTailor is designed to meet the needs of all types of video providers. Video is increasingly important to companies from small businesses to global enterprises, as well as government agencies, nonprofit organizations, and schools, all of which can use AWS Elemental MediaTailor to improve the efficiency and effectiveness of their video operations. Companies in the media and entertainment industry, including film and TV studios, broadcast networks, pay TV channels and system operators, programming distributors, internet service providers, online video platforms, and professional sports leagues and teams, can all benefit from AWS Elemental MediaTailor as part of their broadcast, streaming, and over-the-top video offerings. /mediatailor/faqs/;Why should I use AWS Elemental MediaTailor Channel Assembly?;With Channel Assembly, you can create linear channels that are delivered over-the-top (OTT) in a cost-efficient way, even for channels with low viewership. Virtual live streams are created with a low running cost by using existing multi-bitrate encoded and packaged VOD content. You can also easily monetize Channel Assembly linear streams by inserting ad breaks in your programs without having to condition the content with SCTE-35 markers.
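As a hedged illustration of MediaTailor ad insertion, which the following entries discuss in more detail, the sketch below registers a playback configuration with Python (boto3). The ad decision server URL, content origin, slate URL, and configuration name are placeholders rather than endpoints from this FAQ.
import boto3

mediatailor = boto3.client("mediatailor", region_name="us-east-1")

config = mediatailor.put_playback_configuration(
    Name="live-sports-ads",                                            # hypothetical name
    AdDecisionServerUrl="https://ads.example.com/vast?session=[session.id]",  # placeholder ADS URL
    VideoContentSourceUrl="https://origin.example.com/live/sports",    # placeholder content origin
    SlateAdUrl="https://origin.example.com/slate/slate.mp4",           # filler shown when no ad is returned
)
# Players request manifests through the MediaTailor playback endpoints in the response
# instead of going to the origin directly.
print(config.get("PlaybackEndpointPrefix"))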
/mediatailor/faqs/;Why should I use AWS Elemental MediaTailor ad insertion over other server-side ad insertion solutions?;Other server-side ad insertion solutions typically do not provide detailed client-side viewing metrics. Server-side solutions generally report on CDN server logs of requests to ad servers, which doesn’t offer the granularity of client-based viewing metrics that advertisers require. Other solutions may require SDK or specific player integration to handle server-side stitched manifests. In contrast, AWS Elemental MediaTailor does not require specific player or SDK integration to work. In addition, AWS Elemental MediaTailor makes callbacks to a common endpoint for both content and ads rather than known ad serving entities, bypassing ad blocking strategies. AWS Elemental MediaTailor uses client request information in real-time to communicate with ad decision servers and dynamically generates personalized manifests and ad content. And with AWS Elemental MediaTailor, there is no need for customers to scale origin infrastructure to cope with delivering personalized manifests. /mediatailor/faqs/;Why should I use AWS Elemental MediaTailor ad insertion over other client-side ad insertion solutions?;Client-side ad insertion solutions are susceptible to ad blocking and can deliver poor playback quality. As most client-side ad blockers work by blacklisting known ad serving domain names, video clients can end up skipping ad segments entirely, jeopardizing business models that rely on ad revenue from internet-delivered video offerings. AWS Elemental MediaTailor delivers ads in a way that makes ads indistinguishable from content for a seamless viewing experience. Video providers want viewers to experience the same playback quality as traditional broadcast TV. However, since ads are served by external ad decision services with their own encoders, video processing pipelines, and CDNs, it’s impossible for client-side ad insertion solutions to ensure ads match the format of the content. This results in increased rebuffer rates and discontinuous transitions from content to ads and back as clients retrieve ads on a best-effort basis. In other cases, customers wish to ensure that advertisements are compliant with the latest technology regulations such as audio loudness levels. Providers try to solve these issues by preprocessing and preloading ads while working with numerous ad decision services or directly with advertisers, resulting in scaling challenges as the number of ad decision servers and permutations of ad formats increases. AWS Elemental MediaTailor overcomes each of these challenges by pulling down mezzanine-quality assets from ad decision servers and transcoding assets on the fly to the same specifications as the primary content stream. As a result, viewers enjoy the same seamless experience from internet-delivered video as traditional broadcast TV. /mediatailor/faqs/;How does AWS Elemental MediaTailor simplify my advertising workflow?;Since the same video processing pipeline is used for content and for ads, there is no need to orchestrate complicated ad signaling between the origin server, manifest manipulation service, and ad decision server. In the past, customers had to implement client library changes across all of their supported devices as new video formats, ad insertion specs, or compliance standards emerged.
With AWS Elemental MediaTailor, customers can use the same ad insertion workflow to reach all devices, eliminating the need to make custom changes across all client applications to account for new standards. /mediatailor/faqs/;How does AWS Elemental MediaTailor enable content personalization?;Video content publishers have more information in OTT environments about their end viewers' demographic profile, viewing habits, and other relevant data than traditional broadcast TV. This allows for increased rates for advertising spots. However, traditional server-side ad insertion solutions don’t typically communicate with and use client device information when stitching together ads and content, for two reasons: one, they rely on proprietary extensions to pass through client information to ad servers, resulting in communications failures between different implementations and services, and two, ad personalization not only decreases the ability to cache the manifest file, but also greatly increases compute resources required for rewriting manifest files and transcoding unique ad content on-the-fly. Unlike those solutions, AWS Elemental MediaTailor uses the VAST standard when interfacing with external ad servers, while scaling compute resources for manifest manipulation, transcoding, and delivery occurs automatically using the elastic AWS Cloud. /mediatailor/faqs/;How does AWS Elemental MediaTailor enable accurate reporting of viewing behavior across devices?;Both advertisers and the Interactive Advertising Bureau (IAB) call for granular playback metrics that are measured from end-viewer devices. This requires clients to make numerous HTTP requests to tracking URLs managed by external ad servers as ads are played back. This beacon information allows impressions to be rewarded based on quartile (25 percent, 50 percent, 75 percent) of an ad video that has been played. By default, AWS Elemental MediaTailor reports these metrics from the server-side without additional integration efforts required. AWS Elemental MediaTailor also provides a client API endpoint to identify when ad content is playing and can be used to implement client-side ad reporting as well as advanced player features to stop scrubbing during ad break, or ad duration countdowns. /mediatailor/faqs/;How does AWS Elemental MediaTailor improve my viewers’ experience when watching my content?;AWS Elemental MediaTailor has a transcode service that works to ensure there are no jarring discontinuities in aspect ratio, resolutions, and video bitrate for transitions between ads and content during playback. AWS Elemental MediaTailor uses standard VAST and VMAP responses from ad servers to pull down a high-quality version of the ad asset and provisions real-time transcoding and packaging resources to format it to the same video and audio parameters as your content. /mediatailor/faqs/;Is AWS Elemental MediaTailor a standalone service or is it dependent on other services?;AWS Elemental MediaTailor can function as a standalone service or within a larger video workflow that includes other AWS Media Services. AWS Media Services are a family of services that form the foundation of cloud-based video workflows, which offer customers the capabilities they need to create, package, and deliver video, all accessible through the AWS Management Console. 
Working together with other AWS services, AWS Media Services offer a complete solution for processing and delivery of live or on-demand video content to consumers around the world, cost efficiently and with high quality. /mediatailor/faqs/;Can I use AWS Elemental MediaTailor with on-premises deployments?;AWS Elemental MediaTailor's manifest manipulation and other features run in AWS, but can access any origin server hosted on-premises and accessible over HTTP. AWS Elemental MediaTailor itself is an AWS Cloud service. /mediatailor/faqs/;Can I use my own CDN with AWS Elemental MediaTailor?;Yes, AWS Elemental MediaTailor is CDN agnostic. Ad insertion and Channel Assembly can work with multiple CDNs, either through direct integration or via Amazon CloudFront’s Origin Shield. /mediatailor/faqs/;Can I use my own origin server for AWS Elemental MediaTailor?;Yes, AWS Elemental MediaTailor ad insertion works with origin servers that are accessible over HTTP and can produce manifests decorated with CUE-IN and CUE-OUT ad markers. Channel Assembly can work with 3rd party source locations, as long as they are leveraging a supported authentication scheme and are generating DASH and HLS manifests that include the expected tags and markers. Visit AWS Elemental MediaTailor Documentation pages for more information. /mediatailor/faqs/;How is AWS Elemental MediaTailor billed?;Pricing for AWS Elemental MediaTailor ad insertion is based on the number of ads inserted. For example, if 1000 people are viewing a stream and there are 5 ads in an ad break, you would be charged for 5000 ad insertions. 10 free ad creative transcodes are included per 1000 ad insertions. MediaTailor Channel Assembly costs are based on the number of hours the channel is active. Visit the AWS Elemental MediaTailor Pricing page for more information. /mediatailor/faqs/;Is there a minimum or maximum number of channels required to use AWS Elemental MediaTailor?;There is no minimum or limit to the number of channels supported by AWS Elemental MediaTailor. Pricing is pay as you go, with no minimum commitment, so there is no penalty for running just one channel. Equally, if there are hundreds of channels or channels with huge expected peaks in concurrent views, AWS Elemental MediaTailor will automatically scale. /polly/faqs/;What is Amazon Polly?;Amazon Polly is a service that turns text into lifelike speech. Amazon Polly enables existing applications to speak as a first class feature and creates the opportunity for entirely new categories of speech-enabled products, from mobile apps and cars, to devices and appliances. Amazon Polly includes dozens of lifelike voices and support for multiple languages, so you can select the ideal voice and distribute your speech-enabled applications in many geographies. Amazon Polly is easy to use – you just send the text you want converted into speech to the Amazon Polly API, and Amazon Polly immediately returns the audio stream to your application so you can play it directly or store it in a standard audio file format, such as MP3. Amazon Polly supports Speech Synthesis Markup Language (SSML) tags like prosody so you can adjust the speech rate, pitch, or volume. Amazon Polly is a secure service that delivers all of these benefits at high scale and at low latency. You can cache and replay Amazon Polly’s generated speech at no additional cost. Amazon Polly lets you convert millions of characters per month for free during the first year, upon sign-up.
Amazon Polly’s pay-as-you-go pricing, low cost per request, and lack of restrictions on storage and reuse of voice output make it a cost-effective way to enable speech synthesis everywhere. /polly/faqs/;Why should I use Amazon Polly?;You can use Amazon Polly to power your application with high-quality spoken output. This cost-effective service has very low response times, and is available for virtually any use case, with no restrictions on storing and reusing generated speech. /polly/faqs/;What features are available?;You can control various aspects of speech such as pronunciation, volume, pitch, speech rate, etc. using standardized Speech Synthesis Markup Language (SSML). You can synthesize speech for certain Neural voices using the Newscaster style, to make them sound like a TV or Radio newscaster. You can detect when specific words or sentences in the text are being spoken to the user based on the metadata included in the audio stream. This allows the developer to synchronize graphical highlighting and animations, such as the lip movements of an avatar, with the synthesized speech. You can modify the pronunciation of particular words, such as company names, acronyms, foreign words and neologisms, e.g. “P!nk”, “ROTFL”, “C’est la vie” (when spoken in a non-French voice) using custom lexicons. /polly/faqs/;What are Speech Marks?;Speech Marks are designed to complement the synthesized speech that is generated from the input text. Using this metadata alongside the synthesized speech audio stream, customers can provide their application with an enhanced visual experience such as speech-synchronized animation or karaoke-style highlighting. /polly/faqs/;What are the most common use cases for this service?;With Amazon Polly, you can bring your applications to life, by adding life-like speech capabilities. For example, in E-learning and education, you can build applications leveraging Amazon Polly’s Text-to-Speech (TTS) capability to help people with reading disabilities. Amazon Polly can be used to help the blind and visually impaired consume digital content (eBooks, news etc). Amazon Polly can be used in announcement systems in public transportation and industrial control systems for notifications and emergency announcements. There are a wide range of devices such as set-top boxes, smart watches, tablets, smartphones and IoT devices, which can leverage Amazon Polly for providing audio output. Amazon Polly can be used in telephony solutions to voice Interactive Voice Response systems. Applications such as quiz games, animations, avatars or narration generation are common use-cases for cloud-based TTS solution like Amazon Polly. /polly/faqs/;How does this product work with other AWS products?;When combined with Amazon Lex, developers can create full-blown Voice User Interfaces for their applications. Within Amazon Connect, Amazon Polly speech is used to create self-service, cloud-based contact center services. On top of that, developers of mobile applications and Internet-of-Things (IoT) solutions can leverage Amazon Polly to add spoken output to their own systems. /polly/faqs/;What are the advantages of a cloud-based Text-to-Speech solution over an on-device one?;On-device text-to-speech solutions require significant computing resources, notably CPU power, RAM, and disk space to be available on the device. This can result in higher development cost and higher power consumption on devices such as tablets, smartphones, etc. 
In contrast, text-to-speech conversion done in the cloud dramatically reduces local resource requirements. This makes it possible to support all of the available languages and voices at the highest possible quality. Moreover, speech corrections and enhancements are instantly available to all end-users and do not require additional updates for all devices. Cloud-based text-to-speech (TTS) is platform independent, so it minimizes development time and effort. /polly/faqs/;How do I get started with Amazon Polly?;Simply log in to your AWS account and navigate to the Amazon Polly console (which is a part of the AWS Console). You can then use the console to type in any text and listen to generated speech or save it as an audio file. /polly/faqs/;In which regions is the service available?;Please refer to the AWS Regional Services List for all regions supporting Amazon Polly’s Standard voices. Neural voices are supported in the following subset of these regions: US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Africa (Cape Town), EU (London), EU (Frankfurt), EU (Ireland) and AWS GovCloud (US-West). /polly/faqs/;Which programming languages are supported?;Amazon Polly supports all the programming languages included in the AWS SDK (Java, Node.js, .NET, PHP, Python, Ruby, Go, and C++) and AWS Mobile SDK (iOS/Android). Amazon Polly also supports an HTTP API so you can implement your own access layer. /polly/faqs/;Which audio formats are supported?;With Amazon Polly, you can stream audio to your users in near real time. You can also choose from various sampling rates to optimize bandwidth and audio quality for your application. Amazon Polly supports MP3, Vorbis, and raw PCM audio stream formats. /polly/faqs/;What languages are supported?;Please refer to the documentation for the complete list of languages supported by Amazon Polly. /polly/faqs/;Does Amazon Polly have AWS service limits?;To help guarantee the availability of AWS resources and to minimize billing risk for new customers, AWS maintains service limits for each account. When using Amazon Polly to power your application with high-quality spoken output, there are default service limits including limitations on throttling, operations, and Speech Synthesis Markup Language (SSML) use. For details, see Limits in Amazon Polly in the Amazon Polly Developer Guide. Combining Amazon Polly with other AWS services, such as AWS Batch for efficient batch processing, can help you make the most of Amazon Polly within those service limits. /polly/faqs/;How do I get started with Amazon Polly Brand Voice?;If you are interested in building a Brand Voice using Amazon Polly, please reach out to your AWS Account Manager or contact us for more information. /polly/faqs/;What is the cost and timeline to build a Brand Voice?;Every voice is unique, so it’s important that we learn more about your goals to accurately scope a Brand Voice engagement. If you are interested in building a Brand Voice using Amazon Polly, please reach out to your AWS Account Manager or contact us for more information. /polly/faqs/;How much does Amazon Polly cost?;Please see the Amazon Polly Pricing Page for current pricing information. /polly/faqs/;Can I use the service for generating static voice prompts that will be replayed multiple times?;Yes, you can. The service does not restrict this and there are no additional costs for doing so.
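To show the basic Polly workflow described above, here is a short Python (boto3) sketch that synthesizes an SSML prompt to MP3 and then requests word-level Speech Marks for the same text. The voice, SSML content, and output file name are example choices, not values from this FAQ.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

ssml = '<speak>Your order <prosody rate="slow">number 1 2 3</prosody> has shipped.</speak>'

# Synthesize the SSML prompt to an MP3 audio stream.
audio = polly.synthesize_speech(
    Text=ssml, TextType="ssml", VoiceId="Joanna", OutputFormat="mp3", Engine="neural"
)
with open("prompt.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())

# Request word-level Speech Marks (returned as one JSON object per line) for the same text.
marks = polly.synthesize_speech(
    Text=ssml, TextType="ssml", VoiceId="Joanna",
    OutputFormat="json", SpeechMarkTypes=["word"],
)
print(marks["AudioStream"].read().decode("utf-8"))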
/polly/faqs/;Can I use the service to generate content that will be used in mass notification systems (for example on train station)?;Yes, you can. The service does not restrict this and there are no additional costs for doing so. /polly/faqs/;If I request 1,000 characters to be synthesized and request Speech Marks with the same 1,000 characters, will I be charged for 2,000 characters?;Yes. You will be charged for every request for speech or Speech Marks based on the number of characters you send to the service. /polly/faqs/;Does Amazon Polly participate in the AWS Free Tier?;Yes, as part of the AWS Free Usage Tier, you can get started with Amazon Polly for free. Upon sign-up, new Amazon Polly customers can synthesize millions of characters for free each month for the first 12 months. Please see the Amazon Polly Pricing Page for current pricing information. /polly/faqs/;Do your prices include taxes?;For details on taxes, please see Amazon Web Services Tax Help. /polly/faqs/;Are text inputs processed by Amazon Polly stored, and how are they used by AWS?;Amazon Polly may store and use text inputs processed by the service solely to provide and maintain the service and to improve and develop the quality of Amazon Polly and other Amazon machine-learning/artificial-intelligence technologies. Use of your content is important for continuous improvement of your Amazon Polly customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. You may opt out of having your content used to improve and develop the quality of Amazon Polly and other Amazon machine-learning/artificial-intelligence technologies by using an AWS Organizations opt-out policy. For information about how to opt out, see Managing AI services opt-out policy. /polly/faqs/;Who has access to my content that is processed and stored by Amazon Polly?;Only authorized employees will have access to your content that is processed by Amazon Polly. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /polly/faqs/;Do I still own my content that is processed and stored by Amazon Polly?;You always retain ownership of your content and we will only use your content with your consent. /polly/faqs/;Is the content processed by Amazon Polly moved outside the AWS region where I am using Amazon Polly?;Any content processed by Amazon Polly is encrypted and stored at rest in the AWS region where you are using Amazon Polly. 
Some portion of content processed by Amazon Polly may be stored in another AWS region solely in connection with the continuous improvement and development of your Amazon Polly customer experience and other Amazon machine-learning/artificial-intelligence technologies. If you opt out of having your content used to develop the quality of Amazon Polly and other Amazon machine-learning/artificial-intelligence technologies by contacting AWS Support, your content will not be stored in another AWS region. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /polly/faqs/;Can I use Amazon Polly in connection with websites, programs or other applications that are directed or targeted to children under age 13 and subject to the Children’s Online Privacy Protection Act (COPPA)?;Yes, subject to your compliance with the Amazon Polly Service Terms, including your obligation to provide any required notices and obtain any required verifiable parental consent under COPPA, you may use Amazon Polly in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13. /polly/faqs/;How do I determine whether my website, program, or application is subject to COPPA?;For information about the requirements of COPPA and guidance for determining whether your website, program, or other application is subject to COPPA, please refer directly to the resources provided and maintained by the United States Federal Trade Commission. This site also contains information regarding how to determine whether a service is directed or targeted, in whole or in part, to children under age 13. /polly/faqs/;Who owns Polly recordings?;As between you and AWS, your Polly output belongs to you. If you input text into Polly that belongs to a third party, we require that you have the rights to do so. For more information, please see our Customer Agreement and how it handles “Your Content” /rekognition/faqs/;What is Amazon Rekognition?;Amazon Rekognition is a service that makes it easy to add powerful visual analysis to your applications. Rekognition Image lets you easily build powerful applications to search, verify, and organize millions of images. Rekognition Video lets you extract motion-based context from stored or live stream videos and helps you analyze them. /rekognition/faqs/;What is deep learning?;Deep learning is a sub-field of Machine Learning and a significant branch of Artificial Intelligence. It aims to infer high-level abstractions from raw data by using a deep graph with multiple processing layers composed of multiple linear and non-linear transformations. Deep learning is loosely based on models of information processing and communication in the brain. Deep learning replaces handcrafted features with ones learned from very large amounts of annotated data. Learning occurs by iteratively estimating hundreds of thousands of parameters in the deep graph with efficient algorithms. /rekognition/faqs/;Do I need any deep learning expertise to use Amazon Rekognition?;No. With Amazon Rekognition, you don’t have to build, maintain or upgrade deep learning pipelines. 
/rekognition/faqs/;What are the most common use cases for Amazon Rekognition?;The most common use-cases for Rekognition Image include: /rekognition/faqs/;How do I get started with Amazon Rekognition?;"If you are not already signed up for Amazon Rekognition, you can click the ""Try Amazon Rekognition"" button on the Amazon Rekognition page and complete the sign-up process. You must have an Amazon Web Services account; if you do not already have one, you will be prompted to create one during the sign-up process. Once you are signed up, try out Amazon Rekognition with your own images and videos using the Amazon Rekognition Management Console or download the Amazon Rekognition SDKs to start creating your own applications. Please refer to our step-by-step Getting Started Guide for more information." /rekognition/faqs/;How can I get Amazon Rekognition predictions reviewed by humans?;Amazon Rekognition is directly integrated with Amazon Augmented AI (Amazon A2I) so you can easily route low confidence predictions from Amazon Rekognition Image to human reviewers. Using the Amazon Rekognition API for content moderation or the Amazon A2I console, you can specify the conditions under which Amazon A2I routes predictions to reviewers, which can be either a confidence threshold or a random sampling percentage. If you specify a confidence threshold, Amazon A2I routes only those predictions that fall below the threshold for human review. You can adjust these thresholds at any time to achieve the right balance between accuracy and cost-effectiveness. Alternatively, if you specify a sampling percentage, Amazon A2I routes a random sample of the predictions for human review. This can help you implement audits to monitor the prediction accuracy regularly. Amazon A2I also provides reviewers with a web interface consisting of all the instructions and tools they need to complete their review tasks. For more information about implementing human review with Amazon Rekognition, see the Amazon A2I webpage. /rekognition/faqs/;What is a label?;A label is an object, scene, or concept found in an image based on its contents. For example, a photo of people on a tropical beach may contain labels such as ‘Person’, ‘Water’, ‘Sand’, ‘Palm Tree’, and ‘Swimwear’ (objects), ‘Beach’ (scene), and ‘Outdoors’ (concept). /rekognition/faqs/;What is a confidence score?;A confidence score is a number between 0 and 100 that indicates the probability that a given prediction is correct. In the tropical beach example, if the object and scene detection process returns a confidence score of 99 for the label ‘Water’ and 35 for the label ‘Palm Tree’, then it is more likely that the image contains water but not a palm tree. /rekognition/faqs/;What is Object and Scene Detection?;Object and Scene Detection refers to the process of analyzing an image or video to assign labels based on its visual content. Amazon Rekognition Image does this through the DetectLabels API. This API lets you automatically identify thousands of objects, scenes, and concepts and returns a confidence score for each label. DetectLabels uses a default confidence threshold of 50. Object and Scene detection is ideal for customers who want to search and organize large image libraries, including consumer and lifestyle applications that depend on user-generated content and ad tech companies looking to improve their targeting algorithms.
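As an illustration of the DetectLabels API described above, the following is a small boto3 sketch; the bucket name, object key, and thresholds are hypothetical.

```python
# Sketch only: object and scene detection with the DetectLabels API via boto3.
# The bucket and key are hypothetical; MinConfidence overrides the default threshold of 50.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "beach.jpg"}},
    MaxLabels=10,
    MinConfidence=70,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}')
    # Bounding boxes are returned per detected instance for common objects.
    for instance in label.get("Instances", []):
        print("  box:", instance["BoundingBox"])
    # Parent labels are returned in hierarchical order (e.g. Car -> Vehicle -> Transportation).
    for parent in label.get("Parents", []):
        print("  parent:", parent["Name"])
```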
/rekognition/faqs/;Can Amazon Rekognition detect object locations and return bounding boxes?;Yes, Amazon Rekognition can detect the location of many common objects such as ‘Person’, ‘Car’, ‘Gun’, or ‘Dog’ in both images and videos. You get the coordinates of the bounding rectangle for each instance of the object found, as well as a confidence score. For more details on the API response structure for object bounding boxes, please refer to the documentation. /rekognition/faqs/;Does Amazon Rekognition provide information on the relationship between detected labels?;"Yes, for every label found, Amazon Rekognition returns its parent, alias, and category if they exist. Parents are returned in the ""parents"" field in hierarchical order. The first parent label is the immediate parent, while the following labels are parents of parents. For example, when a 'Car' is identified, Amazon Rekognition returns two parent labels 'Vehicle' (parent), and 'Transportation' (parent's parent). Aliases are labels with the same meaning as the primary labels and are returned in the ""aliases"" field. For example, since 'Cell Phone' is an alias of 'Mobile Phone’, Amazon Rekognition returns 'Cell Phone' in the ""aliases"" field of a 'Mobile Phone' label. Categories group labels based on common themes and are returned in the ""categories"" field. For example, since ’Dog’ is a label under the 'Animals and Pets' category, Amazon Rekognition returns 'Animals and Pets' in the ""categories"" field of a 'Dog' label. For more details on the full list of supported labels and their taxonomy, please visit Amazon Rekognition Label Detection documentation." /rekognition/faqs/;How is Object and Scene Detection different for video analysis?;Rekognition Video enables you to automatically identify thousands of objects - such as vehicles or pets - and activities - such as celebrating or dancing - and provides you with timestamps and a confidence score for each label. It also relies on motion and time context in the video to accurately identify complex activities, such as “blowing a candle” or “extinguishing fire”. /rekognition/faqs/;How can I request a new label to be detected by Amazon Rekognition?;Please send us your label requests through the Amazon Rekognition Console by typing the label name in the input field of the 'Search all labels' section and clicking 'Request Rekognition to detect' for the requested label. Amazon Rekognition continuously expands its catalog of labels based on customer feedback. /rekognition/faqs/;What is Image Properties?;Image Properties is a feature of Amazon Rekognition Image to detect dominant colors and image quality. Image Properties detects dominant colors of the entire image, image foreground, image background, and objects with localized bounding boxes. Image Properties also measures image quality through brightness, sharpness, and contrast scores. Image Properties can be called through the DetectLabels API using IMAGE_PROPERTIES as an input parameter, with or without the GENERAL_LABELS input parameter for label detection. Visit the Amazon Rekognition Label Detection documentation to learn more. /rekognition/faqs/;What color formats does Image Properties return?;Image Properties returns dominant colors in four formats: RGB, hexcode, CSS color, and simplified colors. Amazon Rekognition first identifies the dominant colors by pixel percentage, and then maps these colors to the 140 CSS color palette, RGB, hex code, and 12 simplified colors (i.e., 'green', 'pink', 'black', 'red', 'yellow', 'cyan', 'brown', 'orange', 'white', 'purple', 'blue', 'grey').
By default, Image Properties returns ten (10) dominant colors unless customers specify the number of colors to return. The maximum number of dominant colors the API can return is 12. /rekognition/faqs/;How does Image Properties measure image quality?;Image Properties provides a value ranging from 0 to 100 for each brightness, sharpness, and contrast score. For example, an underexposed image will return a low brightness score, while a brightly lit image will return a high brightness score. /rekognition/faqs/;How can you check if Amazon Rekognition has updated its models?;Amazon Rekognition returns a LabelModelVersion parameter that lets you know whether the model has been updated. Object and Scene detection models are updated frequently based on customer feedback. /rekognition/faqs/;Can I use Custom Labels for analyzing faces or for customized text detection?;No. Custom Labels is meant for finding objects and scenes in images. Custom Labels is not designed for analyzing faces or for customized text detection. You should use other Rekognition APIs for these tasks. Please refer to the documentation for face analysis and text detection. /rekognition/faqs/;Can I use Custom Labels for finding unsafe image content?;Yes. Custom Labels is meant for finding objects and scenes in images. When trained to detect unsafe image content specific to your use case, Custom Labels can detect that content. Please also refer to the documentation for the Moderation API to detect generic unsafe image content. /rekognition/faqs/;How many images are needed to train a custom model?;The number of images required to train a custom model depends on the variability of the custom labels you want the model to predict and the quality of the training data. For example, a distinct logo overlaid on an image can be detected with 1-2 training images, while a more subtle logo required to be detected under many variations (scale, viewpoint, deformations) may need on the order of tens to hundreds of training examples with high quality annotations. If you already have a high number of labeled images, we recommend training a model with as many images as you have available. Please refer to the documentation for limits on maximum training dataset size. /rekognition/faqs/;How many inference compute resources should I provision for my custom model?;The number of parallel inference compute resources needed depends on how many images you need to process at a given point in time. The throughput of a single resource will depend on factors including the size of the images, the complexity of those images (how many detected objects are visible), and the complexity of your custom model. We recommend that you monitor how frequently you need to provision your custom model, and the number of images that need to be processed at a single time, in order to schedule provisioning of your custom model most efficiently. If you expect to process images periodically (e.g., once a day or week, or at scheduled times during the day), you should start provisioning your custom model at a scheduled time, process all your images, and then stop provisioning, as sketched below. If you don’t stop provisioning, you will be charged even if no images are processed. /rekognition/faqs/;My training has failed. Will I be charged?;No. You will not be charged for the compute resources if your training fails.
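The start/process/stop provisioning pattern recommended above could look roughly like the following boto3 sketch; the project-version ARN, bucket, key, and confidence threshold are placeholders, and a real application would wait for the model to reach the RUNNING state before running inference.

```python
# Sketch of the start/process/stop pattern for a Custom Labels model.
# The project-version ARN, bucket, and key are placeholders.
import boto3

rekognition = boto3.client("rekognition")
MODEL_ARN = "arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/my-model/1234567890123"

# Start provisioning the model (you are billed while it is running).
rekognition.start_project_version(ProjectVersionArn=MODEL_ARN, MinInferenceUnits=1)

# ... wait until the model status is RUNNING, then run inference on your images ...
result = rekognition.detect_custom_labels(
    ProjectVersionArn=MODEL_ARN,
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "product.jpg"}},
    MinConfidence=60,
)
for custom_label in result["CustomLabels"]:
    print(custom_label["Name"], custom_label["Confidence"])

# Stop the model when you are done so you are not billed for idle capacity.
rekognition.stop_project_version(ProjectVersionArn=MODEL_ARN)
```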
/rekognition/faqs/;What is Content Moderation?;Amazon Rekognition’s Content Moderation API uses deep learning to detect explicit or suggestive adult content, violent content, weapons, visually disturbing content, drugs, alcohol, tobacco, hate symbols, gambling, and rude gestures in images and videos. Beyond flagging an image or video based on presence of inappropriate or offensive content, Amazon Rekognition also returns a hierarchical list of labels with confidence scores. These labels indicate specific sub-categories of the type of content detected, thus providing more granular control to developers to filter and manage large volumes of user generated content (UGC). This API can be used in moderation workflows for applications such as social and dating sites, photo sharing platforms, blogs and forums, apps for children, e-commerce sites, entertainment and online advertising services. /rekognition/faqs/;What types of inappropriate, offensive, and unwanted content does Amazon Rekognition detect?;You can find a full list of content categories detected by Amazon Rekognition here. /rekognition/faqs/;How can I know which model version I am currently using?;Amazon Rekognition makes regular improvements to its models. To keep track of the model version, you can use the 'ModerationModelVersion' field in the API response. /rekognition/faqs/;How can I ensure that Amazon Rekognition meets accuracy goals for my image or video moderation use case?;Amazon Rekognition’s Content Moderation models have been tuned and tested extensively, but we recommend that you measure the accuracy on your own data sets to gauge performance. /rekognition/faqs/;How can I give feedback to Rekognition to improve its Content Moderation APIs?;Please send us your requests through AWS Customer Support. Amazon Rekognition continuously expands the types of inappropriate content detected based on customer feedback. Please note that illegal content (such as child pornography) will not be accepted through this process. /rekognition/faqs/;What is Facial Analysis?;Facial analysis is the process of detecting a face within an image and extracting relevant face attributes from it. Amazon Rekognition Image returns the bounding box for each face detected in an image along with attributes such as gender, presence of sunglasses, and face landmark points. Rekognition Video will return the faces detected in a video with timestamps and, for each detected face, the position and a bounding box along with face landmark points. /rekognition/faqs/;What face attributes can I get from Amazon Rekognition?;Amazon Rekognition returns the following facial attributes for each face detected, along with a bounding box and confidence score for each attribute: /rekognition/faqs/;What is face pose?;Face pose refers to the rotation of a detected face on the pitch, roll, and yaw axes. Each of these parameters is returned as an angle between -180 and +180 degrees. Face pose can be used to find the orientation of the face bounding polygon (as opposed to a rectangular bounding box), to measure deformation, to track faces accurately, and more. /rekognition/faqs/;What is face quality?;Face quality describes the quality of the detected face image using two parameters: sharpness and brightness. Both parameters are returned as values between 0 and 1. You can apply a threshold to these parameters to filter well-lit and sharp faces. This is useful for applications that benefit from high-quality face images, such as face comparison and face recognition.
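A short boto3 sketch of facial analysis with DetectFaces, showing the pose and quality attributes mentioned above; the S3 location is hypothetical.

```python
# Sketch: facial analysis with DetectFaces, requesting all attributes.
# Bucket and key are illustrative.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "group-photo.jpg"}},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    print("box:", face["BoundingBox"])
    print("pose (pitch/roll/yaw):", face["Pose"])
    print("quality (brightness/sharpness):", face["Quality"])
    print("smile:", face["Smile"]["Value"], f'({face["Smile"]["Confidence"]:.1f})')
```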
/rekognition/faqs/;How many faces can I detect in an image?;You can detect up to 100 faces in an image using Amazon Rekognition. /rekognition/faqs/;How is Facial Analysis different for video analysis?;With Rekognition Video, you can locate faces across a video and analyze face attributes, such as whether the face is smiling, eyes are open, or showing emotions. Rekognition Video will return the detected faces with timestamps and, for each detected face, the position and a bounding box along with landmark points such as left eye, right eye, nose, left corner of the mouth, and right corner of the mouth. This position and time information can be used to easily track user sentiment over time and deliver additional functionality such as automatic face frames, highlights, or crops. /rekognition/faqs/;In addition to video resolution, what else can affect the quality of the Rekognition Video APIs?;Besides video resolution, the quality and representativeness of the faces in the face collection you search against have a major impact. Using multiple face instances per person with variations such as beard, glasses, and poses (profile and frontal) will significantly improve performance. Very fast-moving people and blurred videos may typically result in lower quality. /rekognition/faqs/;How many faces can I compare against?;You can compare one face in the source image with up to 15 detected faces in the target image. /rekognition/faqs/;What is Facial Recognition?;Facial recognition is the process of identifying or verifying a person’s identity by searching for their face in a collection of faces. Using facial recognition, you can easily build applications such as multi-factor authentication for bank payments, automated building entry for employees, and more. /rekognition/faqs/;How is Facial Recognition different for video analysis?;Rekognition Video allows you to perform real time face searches against collections with tens of millions of faces. First, you create a face collection, where you can store faces, which are vector representations of facial features. Rekognition then searches the face collection for visually similar faces throughout your video. Rekognition will return a confidence score for each of the faces in your video, so you can display likely matches in your application. /rekognition/faqs/;In addition to video resolution, what else can affect the quality of the Video APIs?;Besides video resolution, the quality and representativeness of the faces in the face collection you search against have a major impact. Using multiple face instances per person with variations such as beard, glasses, and poses (profile and frontal) will significantly improve performance. Very fast-moving people may typically experience low recall, and blurred videos may also result in lower quality. /rekognition/faqs/;What is Celebrity Recognition?;Amazon Rekognition’s Celebrity Recognition is a deep learning based easy-to-use API for detection and recognition of individuals who are famous, noteworthy, or prominent in their field. The RecognizeCelebrities API has been built to operate at scale and recognize celebrities across a number of categories, such as politics, sports, business, entertainment, and media. Our Celebrity Recognition feature is ideal for customers who need to index and search their digital image libraries for celebrities based on their particular interest.
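A hedged boto3 sketch of the collection-based facial recognition flow described above (create a collection, index faces, then search by image); the collection ID, bucket, and keys are placeholders.

```python
# Sketch: face recognition against a collection.
# The collection ID, bucket, and keys are placeholders.
import boto3

rekognition = boto3.client("rekognition")

# Create a collection and index reference faces into it (their feature vectors are stored).
rekognition.create_collection(CollectionId="employees")
rekognition.index_faces(
    CollectionId="employees",
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "badge-photos/jane.jpg"}},
    ExternalImageId="jane",
)

# Search the collection for the largest face found in a new image.
matches = rekognition.search_faces_by_image(
    CollectionId="employees",
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "entrance/frame-001.jpg"}},
    FaceMatchThreshold=90,
    MaxFaces=5,
)
for match in matches["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```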
/rekognition/faqs/;Who can be identified by the Celebrity Recognition API?;Amazon Rekognition can only identify celebrities that the deep learning models have been trained to recognize. Please note that the RecognizeCelebrities API is not an authority on, and in no way purports to be, an exhaustive list of celebrities. The feature has been designed to include as many celebrities as possible, based on the needs and feedback of our customers. We are constantly adding new names, but the fact that Celebrity Recognition does not recognize individuals that may be deemed prominent by any other groups or by our customers is not a reflection of our opinion of their celebrity status. If you would like to see additional celebrities identified by Celebrity Recognition, please submit feedback. /rekognition/faqs/;Can a celebrity identified through the Amazon Rekognition API request to be removed from the feature?;Yes. If a celebrity wishes to be removed from the feature, he or she can send an email to AWS Customer Support and we will process the removal request. /rekognition/faqs/;What sources are supported to provide additional information about a celebrity?;The API supports an optional list of sources to provide additional information about the celebrity as a part of the API response. We currently provide the IMDb URL, when it is available. We may add other sources at a later date. /rekognition/faqs/;How can I give feedback to Rekognition to improve its text recognition?;Please send us your requests through AWS Customer Support. Amazon Rekognition continuously expands the types of text content recognized based on customer feedback. /rekognition/faqs/;What personal protective equipment (PPE) can Amazon Rekognition detect?;Amazon Rekognition “DetectProtectiveEquipment” can detect common types of face covers, hand covers, and head covers. To learn more, please refer to the feature documentation. You can also use Amazon Rekognition Custom Labels to detect PPE such as high-visibility vests, safety goggles, and other PPE unique to your business. To learn about how you can use Amazon Rekognition Custom Labels for custom PPE detection, visit this GitHub repo. /rekognition/faqs/;Can Amazon Rekognition detect protective equipment locations and return bounding boxes?;Yes, the Amazon Rekognition “DetectProtectiveEquipment” API can detect the location of protective equipment such as face covers, hand covers, and head covers on persons in images. You get the coordinates of the bounding box rectangle for each item of protective equipment detected, as well as a confidence score. For more details on the API response, please refer to the documentation. /rekognition/faqs/;Can the service detect if the mask is worn properly?;The Amazon Rekognition “DetectProtectiveEquipment” API output provides a “CoversBodyPart” value (true/false) and a confidence score for that value for each detected item of protective equipment. This indicates whether the protective equipment is on the corresponding body part of the person. The prediction about the presence of the protective equipment on the corresponding body part helps filter out cases where the PPE is in the image but not actually on the person. It does not, however, indicate or imply that the person is adequately protected by the protective equipment or that the protective equipment itself is properly worn.
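The CoversBodyPart behavior described above can be seen in a small boto3 sketch of the DetectProtectiveEquipment API; the S3 location and summarization settings are illustrative.

```python
# Sketch: PPE detection with the DetectProtectiveEquipment API.
# The summarization settings and S3 location are illustrative.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_protective_equipment(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "worksite/cam1.jpg"}},
    SummarizationAttributes={
        "MinConfidence": 80,
        "RequiredEquipmentTypes": ["FACE_COVER", "HAND_COVER", "HEAD_COVER"],
    },
)

for person in response["Persons"]:
    for body_part in person["BodyParts"]:
        for item in body_part["EquipmentDetections"]:
            covers = item["CoversBodyPart"]
            print(body_part["Name"], item["Type"],
                  "covers body part:", covers["Value"], f'({covers["Confidence"]:.1f})')
```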
/rekognition/faqs/;Can Amazon Rekognition PPE detection identify detected persons?;No, Amazon Rekognition PPE detection does not perform facial recognition or facial comparison and cannot identify the detected persons. /rekognition/faqs/;Where can I find more information about API limits and latency?;Please refer to Amazon Rekognition PPE detection documentation to get the latest details on API limits and latency. /rekognition/faqs/;How do I send images from my workplace cameras to Amazon Rekognition?;You have multiple options to sample images from your workplace cameras. Please refer to the Amazon Rekognition PPE detection blog to learn more. /rekognition/faqs/;How is PPE detection priced?;Amazon Rekognition PPE detection is priced similarly to other Amazon Rekognition Image APIs on a per image basis. To learn more, visit the Amazon Rekognition pricing page. /rekognition/faqs/;What are Amazon Rekognition Streaming Video Events?;Amazon Rekognition Streaming Video Events detects objects of interest (such as people, pets, and packages) in live video streams from connected cameras and sends you notifications. You can use these notifications to send Smart Alerts to your end users such as “a package was detected at the front door,” provide home automation capabilities such as turning on the garage light when a person is detected, integrate with smart assistants such as Echo devices to provide Alexa announcements when an object is detected, and provide Smart Search capabilities such as searching for all video clips where a package was detected. /rekognition/faqs/;How does Amazon Rekognition Streaming Video Events work?;When motion is detected on a connected camera, you send a notification to Rekognition to start processing the video stream. Rekognition processes the corresponding Kinesis Video Stream, post motion detection, to look for the desired objects specified by you. As soon as a desired object is detected, Amazon Rekognition will send you a notification. This notification includes the object detected, bounding box, zoomed in image of the object, and the time stamp. /rekognition/faqs/;What labels can Amazon Rekognition Streaming Video Events support?;Amazon Rekognition Streaming Video Events supports the detection of people, pets, and packages. /rekognition/faqs/;What pets and package types can Amazon Rekognition Streaming Video APIs detect?;Amazon Rekognition Streaming Video Event APIs support dogs and cats for pet detection. The API can detect medium and large cardboard boxes with high accuracy. The API also detects smaller boxes, bubble mailer envelopes, and folders but may miss some of these objects occasionally. /rekognition/faqs/;Will I be charged separately for each label detected? Can I choose which labels to opt into?;No, you will not be charged separately for each label. You will be charged for the duration of streaming video processed by Rekognition. You can either opt into specific labels (pet, package) or choose to opt in to all three labels (people, pet, package) while configuring your stream processing settings. /rekognition/faqs/;Do I need to stream video continuously to Amazon Rekognition?;No, you do not have to stream video continuously to Amazon Rekognition. /rekognition/faqs/;Should I create new Kinesis Video Streams (KVS) to use Streaming Video Events?;Amazon Rekognition Streaming Video Events works with both new and existing Kinesis Video Streams. Simply integrate the relevant KVS streams with the Amazon Rekognition Streaming Video Events API to get started with video analysis on KVS streams. /rekognition/faqs/;When will Amazon Rekognition send me a notification?;Amazon Rekognition starts processing the video stream post motion detection. You can configure the duration for processing this video stream (up to 120 seconds per event).
As soon as Amazon Rekognition detects the object of interest in the video stream, Rekognition will send you a notification. This notification includes the type of object detected, the bounding box, a zoomed in image of the object detected, and a time stamp. /rekognition/faqs/;What resolution and fps are supported for label detection?;In order to keep costs and latency low, Amazon Rekognition Streaming Video Events supports 1080p or lower resolution video streams. Rekognition processes the video stream at 5 fps. /rekognition/faqs/;What codecs and file formats are supported for streaming video?;Amazon Rekognition Video supports H.264 files in MPEG-4 (.mp4) or MOV format. /rekognition/faqs/;What is the maximum duration of the video processed per event?;You can process up to 120 seconds of video per event. /rekognition/faqs/;Can I choose a particular area of the frame to be processed for my video stream?;Yes, as a part of configuring your StreamProcessor you can choose the region of interest that you want to process on your frame. Amazon Rekognition will only process that particular area of the frame. /rekognition/faqs/;How many concurrent streaming sessions are supported?;Amazon Rekognition Streaming Video Events can support 600 concurrent sessions per AWS customer. Please reach out to your account manager if you need to increase this limit. /rekognition/faqs/;What types of media analysis segments can Amazon Rekognition Video detect?;Amazon Rekognition Video can detect the following types of segments or entities for media analysis: /rekognition/faqs/;How do I get started with media analysis using Amazon Rekognition Video?;Media analysis features are available through the Amazon Rekognition Video segment detection API. This is an asynchronous API composed of two operations: StartSegmentDetection to start the analysis, and GetSegmentDetection to get the analysis results. To get started, please refer to the documentation. /rekognition/faqs/;What is a frame accurate timecode?;Frame accurate timecodes provide the exact frame number for a relevant segment of video or entity. Media companies commonly process timecodes using the SMPTE (Society of Motion Picture and Television Engineers) format hours:minutes:seconds:frame number, for example, 00:24:53:22. /rekognition/faqs/;Is Amazon Rekognition Video segment detection frame accurate?;Yes, the Amazon Rekognition Video segment detection API provides frame accurate SMPTE timecodes, as well as millisecond timestamps for the start and end of each detection. /rekognition/faqs/;What types of frame rate formats can Amazon Rekognition Video segment detection handle?;Amazon Rekognition Video segment detection automatically handles integer, fractional and drop frame standards for frame rates between 15 and 60 fps. For example, common frame rates such as 23.976 fps, 25 fps, 29.97 fps and 30 fps are supported by segment detection. Frame rate information is utilized to provide frame accurate timecodes in each case. /rekognition/faqs/;What filtering options can I apply?;You can specify the minimum confidence for each segment type while making the API request. For example, you can filter out any segment below a 70% confidence score. For black frame detection, you can also control the maximum pixel luminance that you consider to be a black pixel, for example, a value of 40 for a color range of 0 to 255.
Further, you can also control what percentage of pixels in a frame need to meet this black pixel luminance criterion for the frame to be classified as a black frame, for example, 99%. These filters allow you to account for varied video quality and formats when detecting black frames. For example, videos reclaimed from tape archives might be noisy and have a different black level compared to a modern digital video. For more details, please refer to this page. /rekognition/faqs/;How does Amazon Rekognition count the number of images processed?;For APIs that accept images as inputs, Amazon Rekognition counts the actual number of images analyzed as the number of images processed. DetectLabels, DetectModerationLabels, DetectFaces, IndexFaces, RecognizeCelebrities, SearchFaceByImage, and Image Properties belong to this category. For the CompareFaces API, where two images are passed as input, only the source image is counted as a unit of images processed. /rekognition/faqs/;How does Amazon Rekognition count the number of minutes of videos processed?;For archived videos, Amazon Rekognition counts the minutes of video that is successfully processed by the API and meters them for billing. For live stream videos, you are charged in chunks of five seconds of video that we successfully process. /rekognition/faqs/;Which APIs does Amazon Rekognition charge for?;Amazon Rekognition Image charges for the following APIs: DetectLabels, DetectModerationLabels, DetectText, DetectFaces, IndexFaces, RecognizeCelebrities, SearchFaceByImage, CompareFaces, SearchFaces, and Image Properties. Amazon Rekognition Video charges are based on the duration of video in minutes successfully processed by the StartLabelDetection, StartFaceDetection, StartTextDetection, StartContentModeration, StartPersonTracking, StartCelebrityRecognition, StartFaceSearch and StartStreamProcessor APIs. /rekognition/faqs/;How much does Amazon Rekognition cost?;Please see the Amazon Rekognition Pricing Page for current pricing information. /rekognition/faqs/;Will I be charged for the feature vectors I store in my face collections?;Yes. Amazon Rekognition charges $0.01 per 1,000 face vectors per month. For details, please see the pricing page. /rekognition/faqs/;Does Amazon Rekognition participate in the AWS Free Tier?;Yes. As part of the AWS Free Usage Tier, you can get started with Amazon Rekognition for free. Upon sign-up, new Amazon Rekognition customers can analyze up to 5,000 images for free each month for the first 12 months. You can use all Amazon Rekognition APIs, except for Image Properties, with this free tier, and also store up to 1,000 faces without any charge. In addition, Amazon Rekognition Video customers can analyze 1,000 minutes of Video free, per month, for the first year. /rekognition/faqs/;Do your prices include taxes?;For details on taxes, please see Amazon Web Services Tax Help. /rekognition/faqs/;Does Amazon Rekognition work with images stored on Amazon S3?;Yes. You can start analyzing images stored in Amazon S3 by simply pointing the Amazon Rekognition API to your S3 bucket. You don’t need to move your data. For more details of how to use S3 objects with Amazon Rekognition API calls, please see our Detect Labels exercise. /rekognition/faqs/;Can I use Amazon Rekognition with images stored in an Amazon S3 bucket in another region?;No. Please ensure that the Amazon S3 bucket you want to use is in the same region as your Amazon Rekognition API endpoint.
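For the segment detection and black frame filtering discussed above, a rough boto3 sketch follows; the bucket, key, and filter values are illustrative, and note that the API expresses the black pixel threshold as a fraction of maximum luminance rather than a 0 to 255 value.

```python
# Sketch: media analysis with the asynchronous segment detection API.
# Bucket, key, and filter values are illustrative.
import boto3

rekognition = boto3.client("rekognition")

job = rekognition.start_segment_detection(
    Video={"S3Object": {"Bucket": "my-example-bucket", "Name": "episode-001.mp4"}},
    SegmentTypes=["TECHNICAL_CUE", "SHOT"],
    Filters={
        "TechnicalCueFilter": {
            "MinSegmentConfidence": 70.0,
            # MaxPixelThreshold is a fraction of maximum luminance (0.0 to 1.0);
            # MinCoveragePercentage is the share of pixels that must qualify as black.
            "BlackFrame": {"MaxPixelThreshold": 0.2, "MinCoveragePercentage": 99.0},
        },
        "ShotFilter": {"MinSegmentConfidence": 70.0},
    },
)

# ... poll (or subscribe to an SNS topic) until the job completes ...
results = rekognition.get_segment_detection(JobId=job["JobId"])
for segment in results["Segments"]:
    print(segment["Type"], segment["StartTimecodeSMPTE"], "->", segment["EndTimecodeSMPTE"])
```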
/rekognition/faqs/;How do I process multiple image files in a batch using Amazon Rekognition?;You can process your Amazon S3 images in bulk using the steps described in our Amazon Rekognition Batch Processing example on GitHub. /rekognition/faqs/;How can I use AWS Lambda with Amazon Rekognition?;Amazon Rekognition provides seamless access to AWS Lambda and allows you to bring trigger-based image analysis to your AWS data stores such as Amazon S3 and Amazon DynamoDB. To use Amazon Rekognition with AWS Lambda, please follow the steps outlined here and select the Amazon Rekognition blueprint. /rekognition/faqs/;Does Amazon Rekognition work with AWS CloudTrail?;Yes. Amazon Rekognition supports logging the following actions as events in CloudTrail log files: CreateCollection, DeleteCollection, CreateStreamProcessor, DeleteStreamProcessor, DescribeStreamProcessor, ListStreamProcessors, and ListCollections. For more details on the Amazon Rekognition API calls that are integrated with AWS CloudTrail, see Logging Amazon Rekognition API Calls with AWS CloudTrail. /rekognition/faqs/;Are image and video inputs processed by Amazon Rekognition stored, and how are they used by AWS?;Amazon Rekognition may store and use image and video inputs processed by the service solely to provide and maintain the service and, unless you opt out as provided below, to improve and develop the quality of Amazon Rekognition and other Amazon machine-learning/artificial-intelligence technologies. Use of your content is important for continuous improvement of your Amazon Rekognition customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. You may opt out of having your image and video inputs used to improve or develop the quality of Amazon Rekognition and other Amazon machine-learning/artificial-intelligence technologies by using an AWS Organizations opt-out policy. For information about how to opt out, see Managing AI services opt-out policy. /rekognition/faqs/;Can I delete image and video inputs stored by Amazon Rekognition?;Yes. You can request deletion of image and video inputs associated with your account by contacting AWS Support. Deleting image and video inputs may degrade your Amazon Rekognition experience. /rekognition/faqs/;Who has access to my content that is processed and stored by Amazon Rekognition?;Only authorized employees will have access to your content that is processed by Amazon Rekognition. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information.
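As an example of the trigger-based pattern mentioned above, here is a minimal sketch of an AWS Lambda handler (hypothetical function, not an official blueprint) that runs DetectLabels on images uploaded to S3.

```python
# Sketch of a trigger-based flow: a Lambda function invoked by an S3 upload event
# that sends the new image to Amazon Rekognition. Threshold and return shape are illustrative.
import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    # S3 put events carry the bucket name and object key of the uploaded image.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    labels = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,
    )
    # Return the detected label names, e.g. for storage in DynamoDB by a downstream step.
    return [label["Name"] for label in labels["Labels"]]
```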
/rekognition/faqs/;Do I still own my content that is processed and stored by Amazon Rekognition?;You always retain ownership of your content and we will only use your content with your consent. /rekognition/faqs/;Is the content processed by Amazon Rekognition moved outside the AWS region where I am using Amazon Rekognition?;Any content processed by Amazon Rekognition is encrypted and stored at rest in the AWS region where you are using Amazon Rekognition. Unless you opt out as provided below, some portion of content processed by Amazon Rekognition may be stored in another AWS region solely in connection with the continuous improvement and development of your Amazon Rekognition customer experience and other Amazon machine-learning/artificial-intelligence technologies. You can request deletion of image and video inputs associated with your account by contacting AWS Support. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. Your content will not be stored in another AWS region if you opt out of having your content used to improve and develop the quality of Amazon Rekognition and other Amazon machine-learning/artificial-intelligence technologies. For information about how to opt out, see Managing AI services opt-out policy. /rekognition/faqs/;Can I use Amazon Rekognition in connection with websites, programs or other applications that are directed or targeted to children under age 13 and subject to the Children’s Online Privacy Protection Act (COPPA)?;Yes, subject to your compliance with the Amazon Rekognition Service Terms, including your obligation to provide any required notices and obtain any required verifiable parental consent under COPPA, you may use Amazon Rekognition in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13. /rekognition/faqs/;How do I determine whether my website, program, or application is subject to COPPA?;For information about the requirements of COPPA and guidance for determining whether your website, program, or other application is subject to COPPA, please refer directly to the resources provided and maintained by the United States Federal Trade Commission. This site also contains information regarding how to determine whether a service is directed or targeted, in whole or in part, to children under age 13. /rekognition/faqs/;Is Amazon Rekognition a HIPAA Eligible Service?;Amazon Rekognition is a HIPAA Eligible Service covered under the AWS Business Associate Addendum (AWS BAA). If you have an AWS BAA in place, Amazon Rekognition will use, disclose, and maintain your Protected Health Information (PHI) only as permitted by the terms of your AWS BAA. /rekognition/faqs/;How do I control user access for Amazon Rekognition?;Amazon Rekognition is integrated with AWS Identity and Access Management (IAM). AWS IAM policies can be used to ensure that only authorized users have access to Amazon Rekognition APIs. For more details, please see the Amazon Rekognition Authentication and Access Control page. 
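A small sketch of scoping Amazon Rekognition access with IAM, applied here as an inline user policy via boto3; the user name, policy name, and action list are examples only.

```python
# Sketch: restricting Amazon Rekognition access with an IAM policy.
# User name, policy name, and the action list are illustrative placeholders.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Allow only a narrow subset of Rekognition read-style operations.
            "Action": ["rekognition:DetectLabels", "rekognition:DetectFaces"],
            "Resource": "*",
        }
    ],
}

iam.put_user_policy(
    UserName="rekognition-app-user",
    PolicyName="rekognition-read-only-example",
    PolicyDocument=json.dumps(policy),
)
```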
/rekognition/faqs/;How can I report potential Amazon Rekognition abuse?;If you suspect that Amazon Rekognition is being used in a manner that is abusive or illegal, or infringes on your rights or the rights of other people, please report this use and AWS will investigate the issue. /ground-station/faqs/;Where are Ground Station antennas located?;AWS Ground Station continues to expand the service to AWS Regions and locations around the world. To see a list of supported regions, see the Global Infrastructure Region Table. /ground-station/faqs/;What transmit and receive operational frequencies does Ground Station support?;Existing Ground Station antenna systems are capable of supporting the following frequencies: /kendra/faqs/;What is Amazon Kendra?;Amazon Kendra is a highly accurate and easy-to-use enterprise search service that’s powered by machine learning (ML). It allows developers to add search capabilities to their applications so their end users can discover information stored within the vast amount of content spread across their company. This includes data from manuals, research reports, FAQs, and other documents spread across various systems. For example, you can ask a question such as “How much is the cash reward on the corporate credit card?” and Amazon Kendra will map to the relevant documents and return a specific answer (such as “2%”). Kendra provides sample code so you can get started quickly and easily integrate highly accurate search into your new or existing applications. /kendra/faqs/;How does Amazon Kendra work with other AWS services?;Amazon Kendra provides ML-powered search capabilities for all unstructured data that you store in AWS. Amazon Kendra offers easy-to-use native connectors to popular AWS repository types such as Amazon S3 and Amazon RDS databases. Other AI services such as Amazon Comprehend, Amazon Transcribe, and Amazon Comprehend Medical can be used to pre-process documents, generate searchable text, extract entities, and enrich metadata for more-specialized search experiences. /kendra/faqs/;What types of questions can I ask Amazon Kendra?;Amazon Kendra supports the following common types of questions: /kendra/faqs/;What if my data doesn’t contain the precise answer Amazon Kendra is looking for?;When your data doesn’t contain a precise answer to a question, Amazon Kendra returns a list of the most-relevant documents ranked by its deep learning models. /kendra/faqs/;What types of questions will Amazon Kendra be unable to answer?;Amazon Kendra does not yet support questions where the answers require cross-document passage aggregation or calculations. /kendra/faqs/;How do I get up and running with Amazon Kendra?;The Amazon Kendra console provides the easiest way to get started. You can point Amazon Kendra at unstructured and semi-structured documents such as FAQs stored in Amazon S3. After ingestion, you can start testing Kendra by typing queries directly in the “search” section of the console. You can then deploy Amazon Kendra search in two easy ways: (1) use the visual UI editor in our Experience Builder (no code required), or (2) implement the Amazon Kendra API using a few lines of code for more-precise control. Code samples are also provided in the console to speed up API implementation. /kendra/faqs/;How can I customize Amazon Kendra to better fit my company’s domain or business specialty?;Amazon Kendra offers domain-specific expertise for IT, pharma, insurance, energy, industrial, financial services, legal, media and entertainment, travel and hospitality, health, human resources, news, telecommunications, and automotive.
You can further fine-tune and extend Kendra's domain-specific understanding by providing your own synonym lists. Simply upload a file with your specific terminology, and Amazon Kendra will use these synonyms to enrich user searches. /kendra/faqs/;What file types does Amazon Kendra support?;Amazon Kendra supports unstructured and semi-structured data in .html, MS Office (.doc, .ppt), PDF, and text formats. With the MediaSearch solution, you can also use Amazon Kendra to search audio and video files. /kendra/faqs/;How does Amazon Kendra handle incremental data updates?;Amazon Kendra provides two methods of keeping your index up to date. First, connectors provide scheduling to automatically sync your data sources on a regular basis. Second, the Amazon Kendra API allows you to build your own connector to send data directly to Amazon Kendra from your data source via your existing ETL jobs or applications. /kendra/faqs/;What languages does Amazon Kendra support?;For information on language support, refer to this documentation page. /kendra/faqs/;What code changes do I need to make to use Amazon Kendra?;Ingesting content does not require coding when using the native connectors. You can also write your own custom connectors to integrate with other data sources, using the Amazon Kendra SDK. You can deploy Amazon Kendra search in two easy ways: (1) use the visual UI editor in our Experience Builder (no code required), or (2) implement the Kendra API using a few lines of code for more flexibility. Code samples are also provided in the console to speed up API implementation. The SDK provides full control and flexibility of the end-user experience. /kendra/faqs/;In what regions is Amazon Kendra available?;See the AWS Regional Services page for more details. /kendra/faqs/;Can I add custom connectors?;You can write your own connectors using the Amazon Kendra Custom Data Source API. In addition, Amazon Kendra has a search-expert partner ecosystem that can help build connectors currently not available from AWS. Please contact us for more details on our partner network. /kendra/faqs/;How does Amazon Kendra handle security?;Amazon Kendra encrypts your data in transit and at rest. You have three choices for encryption keys for data at rest: AWS-owned KMS key, AWS-managed KMS key in your account, or a customer-managed KMS key. For data in transit, Amazon Kendra uses the HTTPS protocol to communicate with your client application. API calls to access Amazon Kendra through the network use Transport Layer Security (TLS) that must be supported by the client. /personalize/faqs/;Why should I use Amazon Personalize?;Amazon Personalize has helped numerous customers create personalized experiences for their users and has helped customers drive material improvements to business outcomes. When using Personalize, customers are able to deploy their models in days not months. See our customer references for examples. /personalize/faqs/;What are the key use cases supported by Amazon Personalize?;Amazon Personalize supports the following key use cases: /personalize/faqs/;What are some of the common business applications for Amazon Personalize?;"Amazon Personalize can be used to personalize the end-user experience over any digital channel. Examples include product recommendations for e-commerce, news articles and content recommendation for publishing, media and social networks, hotel recommendations for travel websites, credit card recommendations for banks, and match recommendations for dating sites.
These recommendations and personalized experiences can be delivered over websites, mobile apps, or email/messaging. Amazon Personalize can also be used to customize the user experience when user interaction is over a physical channel; e.g., a meal delivery company could personalize weekly meals to users in a subscription plan." /personalize/faqs/;How do I get started with Amazon Personalize?;Developers get started by creating an account and accessing the Amazon Personalize developer console, which walks them through an intuitive set-up wizard. Developers have the option of using a JavaScript API and Server-Side SDKs to send real-time activity stream data to Amazon Personalize or bootstrapping the service using a historical log of user events. Developers can also import their catalog (item dataset) and user data via Amazon Simple Storage Service (S3). Then, with only a few API calls, developers can train a personalization model, either by letting the service choose the right algorithm for their dataset with AutoML or manually choosing one of the several algorithm options available. Once trained, the models can be deployed with a single API call and can then be used by production applications. When deployed, developers call the service from their production services to get real-time recommendations, and Amazon Personalize will automatically scale to meet demand. /personalize/faqs/;What data do I have to provide to Amazon Personalize?;Developers should provide the following data to Amazon Personalize: /personalize/faqs/;How do I apply/export Amazon Personalize recommendations to my business workflows or applications?;Amazon Personalize provides customers two inference APIs: getRecommendations and getPersonalizedRanking. These APIs return a list of recommended itemIDs for a user, a list of similar items for an item or a reranked list of items for a user. The itemID can be a product identifier, videoID, etc. The customers are then expected to use these itemIDs to generate the end user experience through steps, such as fetching image and description, and then rendering a display. In some cases, customers might integrate with AWS or third party email delivery services, or notification services etc. to generate the end user experience. /personalize/faqs/;Will my data be secure and private?;"All models are unique to the customers’ data set, and are not shared across other AWS accounts. No data is used to train or propagate models for other customers; the customer’s model inputs and outputs are entirely owned by the account. Every interaction customers have with Amazon Personalize is protected by encryption. Any data processed by Amazon Personalize can be further encrypted with customer keys through AWS Key Management Service, and encrypted at rest in the AWS Region where the customer is using the service. Administrators can also control access to Amazon Personalize through an AWS Identity and Access Management (IAM) permissions policy, ensuring that sensitive information is kept secure and confidential." /personalize/faqs/;What does Amazon Personalize cost?;Refer to the Amazon Personalize pricing page to learn more. /textract/faqs/;What is Amazon Textract?;Amazon Textract is a document analysis service that detects and extracts printed text, handwriting, structured data (such as fields of interest and their values) and tables from images and scans of documents.
Amazon Textract's machine learning models have been trained on millions of documents so that virtually any document type you upload is automatically recognized and processed for text extraction. When information is extracted from documents, the service returns a confidence score for each element it identifies so that you can make informed decisions about how you want to use the results. For instance, if you are extracting information from tax documents you can set custom rules to flag any extracted information with a confidence score lower than 95%. Also, all extracted data are returned with bounding box coordinates, which is a rectangular frame that fully encompasses each piece of data identified, so that you can quickly identify where a word or number appears on a document. You can access these features with the Amazon Textract API, in the AWS Management Console, or using the AWS command-line interface (CLI). /textract/faqs/;What are the most common use cases for Amazon Textract?;The most common use cases for Amazon Textract include: /textract/faqs/;What type of text can Amazon Textract detect and extract?;Amazon Textract can detect printed text and handwriting from the Standard English alphabet and ASCII symbols. Amazon Textract can extract printed text, forms and tables in English, German, French, Spanish, Italian and Portuguese. Amazon Textract also extracts explicitly labeled data, implied data, and line items from an itemized list of goods or services from almost any invoice or receipt in English without any templates or configuration. Amazon Textract can also extract specific or implied data such as names and addresses from identity documents in English such as U.S. passports and driver’s licenses without the need for templates or configuration. Finally, Amazon Textract can extract any specific data from documents without worrying about the structure or variations of the data in the document using Queries in English. /textract/faqs/;What document formats does Amazon Textract support?;Amazon Textract currently supports PNG, JPEG, TIFF, and PDF formats. For synchronous APIs, you can submit images either as an S3 object or as a byte array. For asynchronous APIs, you can submit S3 objects. If your document is already in one of the file formats that Amazon Textract supports (PDF, TIFF, JPG, PNG), don't convert or downsample it before uploading it to Amazon Textract. /textract/faqs/;How do I get started with Amazon Textract?;"To get started with Amazon Textract, you can click the “Get Started with Amazon Textract” button on the Amazon Textract page. You must have an Amazon Web Services account; if you do not already have one, you will be prompted to create one during the process. Once you are signed in to your AWS account, try out Amazon Textract with your own images or PDF documents using the Amazon Textract Management Console. You can also download the Amazon Textract SDKs to start creating your own applications. Please refer to our step-by-step Getting Started Guide for more information." /textract/faqs/;What APIs does Amazon Textract offer?;Amazon Textract offers APIs that detect and extract printed text and handwriting from scanned images of documents, extract structured data such as tables, perform key-value pairing on extracted text, and separate APIs focused on extracting data from invoices, receipts, and identity documents.
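To illustrate the Textract APIs listed above, here is a hedged boto3 sketch of AnalyzeDocument with the Forms, Tables, and Queries features; the S3 location and query text are placeholders.

```python
# Sketch: extracting forms, tables, and query answers with the AnalyzeDocument API.
# The bucket, key, and query text are illustrative.
import boto3

textract = boto3.client("textract")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-example-bucket", "Name": "invoices/inv-001.png"}},
    FeatureTypes=["FORMS", "TABLES", "QUERIES"],
    QueriesConfig={"Queries": [{"Text": "What is the customer name?"}]},
)

# Every detected element comes back as a Block with a confidence score and bounding box geometry.
for block in response["Blocks"]:
    if block["BlockType"] == "QUERY_RESULT":
        print("query answer:", block["Text"], f'({block["Confidence"]:.1f})')
```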
/textract/faqs/;What features does the Analyze Document API have?;The Analyze Document API has three features – Forms, Tables, and Queries. With the Queries feature, you can ask a question in natural language (e.g., “What is the customer name?”) and receive the answer (e.g., “Jane Doe”) as part of the response. /textract/faqs/;How should customers construct queries?;We have published detailed guidance on best practices for crafting Queries as part of our API Documentation on the Textract Resources page. In general, customers should try to ask a natural language question utilizing words from the document to construct a query. /textract/faqs/;Are there any limits to the number of Queries I can ask per document?;Queries are processed on a per-page basis and information can be extracted using Queries via both synchronous and asynchronous operations. For synchronous operations, a maximum of 15 Queries per page is supported. For asynchronous operations, a maximum of 30 Queries per page is supported. /textract/faqs/;How can I get the best results from Amazon Textract?;Amazon Textract uses machine learning to read virtually any type of document in order to extract printed text, handwriting, and structured information. Keep the following tips in mind to get the best results: /textract/faqs/;How do I use the confidence score Amazon Textract provides?;A confidence score is a number between 0 and 100 that indicates the probability that a given prediction is correct. With Amazon Textract, all extracted printed text, handwriting, and structured data are returned with bounding box coordinates, which is a rectangular frame that fully encompasses each piece of data identified. This allows you to identify the score for each extracted entity so that you can make informed decisions on how you want to use the results. /textract/faqs/;How can I get Amazon Textract predictions reviewed by humans?;Amazon Textract is directly integrated with Amazon Augmented AI (A2I) so you can easily get low confidence predictions from Amazon Textract reviewed by humans. Using Amazon Textract’s API for form data extraction and the Amazon A2I console, you can specify the conditions under which Amazon A2I routes predictions to reviewers, which can be either a confidence threshold or a random sampling percentage. If you specify a confidence threshold, Amazon A2I routes only those predictions that fall below the threshold for human review. You can adjust these thresholds at any time to achieve the right balance between accuracy and cost-effectiveness. Alternatively, if you specify a sampling percentage, Amazon A2I routes a random sample of the predictions for human review. This can help you implement audits to monitor the prediction accuracy regularly. Amazon A2I also provides reviewers with a web interface consisting of all the instructions and tools they need to complete their review tasks. For more information about implementing human review with Amazon Textract, see the Amazon A2I website. /textract/faqs/;In which AWS regions is Amazon Textract available?;Amazon Textract is currently available in the US East (Northern Virginia), US East (Ohio), US West (Oregon), US West (N. California), AWS GovCloud (US-West), AWS GovCloud (US-East), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), EU (Paris), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai) Regions. /textract/faqs/;Does Amazon Textract work with AWS CloudTrail?;Yes. 
Amazon Textract supports logging of the following actions as CloudTrail events - DetectDocumentText, AnalyzeDocument, StartDocumentTextDetection, StartDocumentAnalysis, GetDocumentTextDetection, and GetDocumentAnalysis. For more details, please see Logging Amazon Textract API Calls with AWS CloudTrail. /textract/faqs/;How does Amazon Textract count the number of pages processed?;An image (PNG, TIFF, or JPEG) counts as a single page. For PDFs, each page in the document is counted as a page processed. /textract/faqs/;Which APIs am I charged for with Amazon Textract?;Refer to the Amazon Textract pricing page to learn more about pricing. /textract/faqs/;How much does Amazon Textract cost?;Amazon Textract charges you based on the number of pages and images processed. For more information, visit the pricing page. /textract/faqs/;Does Amazon Textract participate in the AWS Free Tier?;Yes. As part of the AWS Free Tier, you can get started with Amazon Textract for free. The Free Tier lasts for three months, and new AWS customers can analyze up to: Detect Document Text API: 1,000 pages per month Analyze Document API: /textract/faqs/;Do your prices include taxes?;For details on taxes, please see Amazon Web Services Tax Help. /textract/faqs/;Are document and image inputs processed by Amazon Textract stored, and how are they used by AWS?;Amazon Textract may store and use document and image inputs processed by the service solely to provide and maintain the service and to improve and develop the quality of Amazon Textract and other Amazon machine-learning/artificial-intelligence technologies. Use of your content is necessary for continuous improvement of your Amazon Textract customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. You may opt out of having your document and image inputs used to improve or develop the quality of Amazon Textract and other Amazon machine-learning/artificial-intelligence technologies using an AWS Organizations opt-out policy. For information about how to opt out, see Managing AI services opt-out policy. /textract/faqs/;Is the content processed by Amazon Textract moved outside the AWS region where I am using Amazon Textract?;Any content processed by Amazon Textract is encrypted and stored at rest in the AWS region where you are using Amazon Textract. Unless you opt out as provided below, some portion of content processed by Amazon Textract may be stored in another AWS region solely in connection with the continuous improvement and development of your Amazon Textract customer experience and other Amazon machine-learning/artificial-intelligence technologies. You can request deletion of image and video inputs associated with your account by contacting AWS Support. 
Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. Your content will not be stored in another AWS region if you opt out of having your content used to improve and develop the quality of Amazon Textract and other Amazon machine-learning/artificial-intelligence technologies. For information about how to opt out, see Managing AI services opt-out policy. /textract/faqs/;Can I delete images and documents stored by Amazon Textract?;Yes. You can request deletion of document and image inputs associated with your account by contacting AWS Support. Deleting image and document inputs may degrade your Amazon Textract experience. /textract/faqs/;Who has access to my content that is processed and stored by Amazon Textract?;Only authorized employees will have access to your content that is processed by Amazon Textract. Your trust, privacy, and the security of your content are our highest priority, and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /textract/faqs/;Is Amazon Textract HIPAA eligible?;Yes, AWS has expanded its HIPAA compliance program to include Amazon Textract as a HIPAA eligible service. If you have an executed Business Associate Agreement (BAA) with AWS, you can use Amazon Textract to extract text including protected health information (PHI) from images. /textract/faqs/;What Compliance Programs are in scope for Amazon Textract?;Textract is HIPAA eligible, and compliant with PCI, ISO, and SOC. For more information please visit AWS Artifact in the AWS Management Console, or visit https://aws.amazon.com/compliance/services-in-scope/. Textract also supports Amazon Virtual Private Cloud (Amazon VPC) endpoints via AWS PrivateLink, enabling customers to securely initiate API calls to Amazon Textract from within their VPC and avoid using the public internet. /comprehend/faqs/;What is Natural Language Processing?;Natural Language Processing (NLP) is a way for computers to analyze, understand, and derive meaning from textual information in a smart and useful way. By utilizing NLP, you can extract important phrases, sentiment, syntax, key entities such as brand, date, location, person, etc., and the language of the text. /comprehend/faqs/;What is Amazon Comprehend?;Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text. /comprehend/faqs/;What can I do with Amazon Comprehend?;You can use Amazon Comprehend to identify the language of the text, extract key phrases, places, people, brands, or events, understand sentiment about products or services, and identify the main topics from a library of documents. The source of this text could be web pages, social media feeds, emails, or articles. You can also feed Amazon Comprehend a set of text documents, and it will identify topics (or group of words) that best represent the information in the collection. 
The output from Amazon Comprehend can be used to understand customer feedback, provide a better search experience through search filters, and use topics to categorize documents. /comprehend/faqs/;How do I get started with Amazon Comprehend?;You can get started with Amazon Comprehend from the AWS console. Your free tier for 12 months starts from the time you submit your first request. See product documentation on how to use Amazon Comprehend APIs in your application. /comprehend/faqs/;What are the most common use cases of Amazon Comprehend?;The most common use cases include: /comprehend/faqs/;Do I have to be a natural language processing expert to use Amazon Comprehend?;No, you don’t need NLP expertise to use Amazon Comprehend. You only need to call Amazon Comprehend’s API, and the service will handle the machine learning required to extract the relevant data from the text. /comprehend/faqs/;Is Amazon Comprehend a managed service?;Amazon Comprehend is a fully managed and continuously trained service, so you don’t have to manage the scaling of resources, the maintenance of code, or the training data. /comprehend/faqs/;Does Amazon Comprehend learn over time?;Yes, Amazon Comprehend uses machine learning and is continuously trained to make it better for your use cases. /comprehend/faqs/;In which AWS regions is Amazon Comprehend available?;For a list of the supported Amazon Comprehend AWS regions, please visit the AWS Region Table for all AWS global infrastructure. Also for more information, see Regions and Endpoints in the AWS General Reference. /comprehend/faqs/;What security measures does Amazon Comprehend have?;Requests to the Amazon Comprehend API and console are made over a secure (SSL) connection. You can use AWS Identity and Access Management (AWS IAM) to control which IAM users have access to specific Amazon Comprehend actions and resources. /comprehend/faqs/;Where do I store my data?;You can use Amazon Comprehend to read your data from Amazon S3. You can also write the results from Amazon Comprehend to a storage service, database, or data warehouse. /comprehend/faqs/;How do I know if the service can process my data?;For text analysis APIs, you will receive an HTTP status code of 200 indicating successful processing. If your data can't be processed or exceeds service limits, you will get an appropriate HTTP error code. /comprehend/faqs/;How do I know if Amazon Comprehend is giving accurate results?;The service will return a confidence score for each result. A low confidence score means the service is less certain that the result is correct. Conversely, if the service is highly confident, the score will be closer to 1. /comprehend/faqs/;Can I import or use my own NLP model with Amazon Comprehend?;No. At the current time, Comprehend does not support custom models. /comprehend/faqs/;How is Amazon Comprehend priced?;Refer to the Amazon Comprehend pricing page to learn more about pricing tiers and discounts. /comprehend/faqs/;Who has access to my content that is processed and stored by Amazon Comprehend?;Only authorized employees will have access to your content that is processed by Amazon Comprehend. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. 
Please see the AWS data privacy FAQs for more information. /comprehend/faqs/;Do I still own my content that is processed and stored by Amazon Comprehend?;You always retain ownership of your content and we will only use your content with your consent. /comprehend/faqs/;Is the content processed by Amazon Comprehend moved outside the AWS region where I am using Amazon Comprehend?;Any content processed by Amazon Comprehend is encrypted and stored at rest in the AWS region where you are using Amazon Comprehend. Some portion of content processed by Amazon Comprehend may be stored in another AWS region solely in connection with the continuous improvement and development of your Amazon Comprehend customer experience and other Amazon machine-learning/artificial-intelligence technologies. This does not apply to Amazon Comprehend Medical. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /comprehend/faqs/;Can I use Amazon Comprehend in connection with websites, programs or other applications that are directed or targeted to children under age 13 and subject to the Children’s Online Privacy Protection Act (COPPA)?;Yes, subject to your compliance with the Amazon Comprehend Service Terms, including your obligation to provide any required notices and obtain any required verifiable parental consent under COPPA, you may use Amazon Comprehend in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13. /comprehend/faqs/;How do I determine whether my website, program, or application is subject to COPPA?;For information about the requirements of COPPA and guidance for determining whether your website, program, or other application is subject to COPPA, please refer directly to the resources provided and maintained by the United States Federal Trade Commission. This site also contains information regarding how to determine whether a service is directed or targeted, in whole or in part, to children under age 13. /deeplens/faqs/;What happens to my AWS DeepLens resources after the end of life (EOL) date?;After January 31, 2024, all references to AWS DeepLens models, projects, and device information are deleted from the AWS DeepLens service. You can no longer discover or access the AWS DeepLens service from your AWS console, and applications that call the AWS DeepLens API no longer work. /deeplens/faqs/;Will I be billed for AWS DeepLens resources remaining in my account after the EOL date?;Resources created by AWS DeepLens, such as Amazon S3 buckets, AWS Lambda functions, AWS IoT things, and AWS Identity and Access Management (IAM) roles, continue to exist on their respective services after January 31, 2024. To avoid being billed after AWS DeepLens is no longer supported, follow all of these steps to delete these resources. /deeplens/faqs/;Can I deploy my AWS DeepLens projects after the end of life (EOL) date?;You can deploy AWS DeepLens projects until January 31, 2024. After that date, you do not have access to the AWS DeepLens console or API and any application that calls on the AWS DeepLens API does not work. 
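As a minimal illustration of the Amazon Comprehend APIs described in the questions above (no NLP expertise is required, only an API call), the following boto3 sketch detects the dominant language, entities, and sentiment of a short piece of text; the sample text is invented.
import boto3

comprehend = boto3.client("comprehend")
text = "The new checkout flow is fantastic, but delivery to Seattle took two weeks."

# Detect the dominant language, then named entities and overall sentiment.
print(comprehend.detect_dominant_language(Text=text)["Languages"])
print(comprehend.detect_entities(Text=text, LanguageCode="en")["Entities"])

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])  # e.g. MIXED plus per-class confidence scores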
/deeplens/faqs/;Will my AWS DeepLens device continue to receive security updates?;AWS DeepLens is not updated after January 31, 2024. While some applications deployed on AWS DeepLens devices might continue to run after the EOL date, AWS does not provide remedies related to and is not responsible for any issue arising from AWS DeepLens software or hardware. /deeplens/faqs/;How can I continue to get hands-on experience with AWS AI/ML?;We suggest you try our other hands-on machine learning tools. With AWS DeepRacer, use a cloud-based 3D racing simulator to create reinforcement learning models for an autonomous 1/18th scale race car. Learn and experiment in a no-setup, free development environment with Amazon SageMaker Studio Lab. Automate your image and video analysis with Amazon Rekognition, or use AWS Panorama to improve your operations with computer vision at the edge. /deeplens/faqs/;What should I do with my AWS DeepLens device?;We encourage you to recycle your AWS DeepLens device through the Amazon Recycling Program. Amazon covers the costs associated with shipping and recycling. /deeplens/faqs/;What is AWS DeepLens?;AWS DeepLens is the world’s first deep learning-enabled video camera for developers of all skill levels to grow their machine learning skills through hands-on computer vision tutorials, example code, and pre-built models. /deeplens/faqs/;What sample projects are available?;There are 7 sample projects available: /deeplens/faqs/;Does AWS DeepLens include Alexa?;No, AWS DeepLens does not have Alexa or any far-field audio capabilities. However, AWS DeepLens has a 2D microphone array that is capable of running custom audio models, with additional programming required. /deeplens/faqs/;What are the product specifications of the device?;Intel Atom® processor, Gen9 graphics, Ubuntu OS 16.04 LTS, 100 GFLOPS performance, dual-band Wi-Fi, 8GB RAM, 16GB storage (expandable via microSD card), 4MP camera with MJPEG, H.264 encoding at 1080p resolution, 2 USB ports, micro HDMI, and audio out. /deeplens/faqs/;"Why do I have ""v1.1"" marked on the bottom of my device?";AWS DeepLens (2019 Edition) is marked with “v1.1” on the bottom of the device. We have made significant improvements to the user experience, including onboarding, tutorials, and additional sensor compatibility such as the Intel RealSense depth sensor. /deeplens/faqs/;What deep learning frameworks can I run on the device?;AWS DeepLens (2019 Edition) is optimized for Apache MXNet, TensorFlow, and Caffe. /deeplens/faqs/;What MXNet network architecture layers does AWS DeepLens support?;AWS DeepLens offers support for 20 different network architecture layers. The layers supported are: /deeplens/faqs/;What comes in the box and how do I get started?;Inside the box, developers will find a Getting Started guide, the AWS DeepLens device, a region-specific power cord and adapter, a USB cable, and a 32GB microSD card. Setup and configuration of the DeepLens device can be done in minutes using the AWS DeepLens console, and by configuring the device through a browser on your laptop or PC. /deeplens/faqs/;Why is a USB port marked as registration?;On AWS DeepLens (2019 Edition), the USB port marked as registration is used during the onboarding process to register your AWS DeepLens to your AWS account. /deeplens/faqs/;Can I train my models on the device?;No, AWS DeepLens is capable of running inference or predictions using trained models. You can train your models in Amazon SageMaker, a machine learning platform to train and host your models. 
AWS DeepLens offers a simple 1-click deploy feature to publish trained models from Amazon SageMaker. /deeplens/faqs/;What AWS services are integrated with AWS DeepLens?;DeepLens is pre-configured for integration with AWS Greengrass, Amazon SageMaker, and Amazon Kinesis Video Streams. You can also integrate AWS DeepLens with many other AWS services, such as Amazon S3, AWS Lambda, Amazon DynamoDB, and Amazon Rekognition. /deeplens/faqs/;Can I SSH into AWS DeepLens?;Yes, we have designed AWS DeepLens to be easy to use, yet accessible for advanced developers. You can SSH into the device using the command: ssh aws_cam@<device-ip-address> /deeplens/faqs/;What programming languages are supported by AWS DeepLens?;You can define and run models on the camera data stream locally in Python 2.7. /deeplens/faqs/;Do I need to be connected to the internet to run the models?;No. You can run the models that you have deployed to AWS DeepLens without being connected to the internet. However, you need internet to deploy the model from the cloud to the device initially. After transferring your model, AWS DeepLens can perform inference on the device locally without requiring cloud connectivity. However, if you have components in your project that require interaction with the cloud, you will need internet connectivity for those components. /deeplens/faqs/;Can I run my own custom models on AWS DeepLens?;Yes. You can also create your own project from scratch, using Amazon SageMaker to prepare data and train a model using a hosted notebook, and then publish the trained model to your AWS DeepLens for testing and refinement. You can also import an externally trained model into AWS DeepLens by specifying the S3 location for the model architecture and network weights files. /lex/faqs/;What is Amazon Lex?;Amazon Lex is a service for building conversational interfaces using voice and text. Powered by the same conversational engine as Alexa, Amazon Lex provides high-quality speech recognition and language understanding capabilities, enabling you to add sophisticated, natural language ‘chatbots’ to new and existing applications. Amazon Lex reduces multi-platform development effort, allowing you to easily publish your speech or text chatbots to mobile devices and multiple chat services, like Facebook Messenger, Slack, Kik, or Twilio SMS. Native interoperability with AWS Lambda and Amazon CloudWatch, and easy integration with many other services on the AWS platform, including Amazon Cognito and Amazon DynamoDB, make bot development effortless. /lex/faqs/;How can I get started with Amazon Lex?;To start using Amazon Lex, sign in to the AWS Management Console and navigate to “Lex” under the “Artificial Intelligence” category. You must have an Amazon Web Services account to start using Amazon Lex. If you do not already have one, you will be prompted to create one during the sign-up process. Please refer to the Amazon Lex V2 Getting Started Guide for more information. /lex/faqs/;What are the most common use cases for Amazon Lex?;The most common use cases include: /lex/faqs/;How does Amazon Lex work with other AWS services?;Amazon Lex leverages AWS Lambda for intent fulfillment, Amazon Cognito for user authentication, and Amazon Polly for text-to-speech. /lex/faqs/;Do I have to be a machine learning expert to use Amazon Lex?;No machine learning expertise is necessary to use Amazon Lex. Developers can declaratively specify the conversation flow and Amazon Lex will take care of the speech recognition and natural language understanding functionality. 
Developers provide some sample utterances in plain English and the different parameters (slots) that they would like to collect from their user with the corresponding prompts. The language model gets built automatically. /lex/faqs/;In which AWS regions is Amazon Lex available?;For a list of the supported Amazon Lex AWS regions, please visit the AWS Region Table for all AWS global infrastructure. Also for more information, see Regions and Endpoints in the AWS General Reference. /lex/faqs/;Is Amazon Lex a managed service?;Amazon Lex is a completely managed service so you don’t have to manage scaling of resources or maintenance of code. Your interaction schema and language models are automatically backed up. We also provide comprehensive versioning capability for easy rollback. Amazon Lex architecture does not require storage or backups of end user data. /lex/faqs/;When do I use Amazon Polly vs. Amazon Lex?;Amazon Polly converts text inputs to speech. Amazon Lex is a service for building conversational interfaces using voice and text. /lex/faqs/;Does Amazon Lex get more intelligent over time?;Yes. Amazon Lex uses deep learning to improve over time. /lex/faqs/;How do I use the automated chatbot designer?;The automated chatbot designer helps you create a bot design in just a few clicks. You first provide a link to the S3 location that contains your conversation transcripts via the Lex Console (or the SDK). The automated chatbot designer will then process these transcripts to surface a chatbot design that includes user intents, sample phrases associated with those intents, and a list of all the information required to fulfill them. You can then review the results provided by the automated chatbot designer and add the intents and slot types that are best suited to your bot. /lex/faqs/;What transcript formats are supported by the automated chatbot designer?;The transcripts must contain conversations between a caller and an agent in standardized JSON format. You can find a sample transcript in this format in the Amazon Lex documentation. Amazon Connect customers using Contact Lens can directly use conversation transcripts in their original format. Conversation transcripts from other transcription services may require a simple conversion. You can find details about the conversion process here. /lex/faqs/;Which languages are supported by the automated chatbot designer?;All English locales (US, UK, AU, IN, SA) supported by Amazon Lex are supported by the automated chatbot designer. At preview, the automated chatbot designer supports US English. /lex/faqs/;What are the usability improvements offered in the V2 enhanced console and APIs?;Lex V2 console and APIs use an updated information architecture (IA) to deliver simplified versioning, support for multiple languages in a bot, and streaming capabilities. Additional improvements include the saving of partially completed bot configurations, renaming of resources, simplified navigation, bulk upload of utterances, and granular debugging. /lex/faqs/;How can I use the streaming capability?;You can use the streaming API to conduct a continually streaming conversation with a Lex bot. With streaming conversation, the bot continuously listens and can be designed to respond proactively to user interruptions and pauses. For example, you can configure the bot to keep a conversation going when a user needs more time to respond by sending periodic messages such as “Take your time. 
Let me know once you are ready.” /lex/faqs/;What are the pricing details for the V2 APIs?;Amazon Lex bots are designed for a request and response interaction or a continuous streaming conversation. With the request and response interaction, each user input (voice or text) is processed as a separate API call. In a streaming conversation, all user inputs across multiple turns are processed in one streaming API call. Please refer to the Amazon Lex pricing page for more details. /lex/faqs/;Can I integrate bots created using V2 APIs with Amazon Connect contact flows?;Yes, Amazon Connect contact flows work with both Lex V2 and V1 APIs. You can use the Lex V2 console to create and integrate bots with Amazon Connect. /lex/faqs/;Can I take advantage of V2 API features for my existing bots?;No. If you want to take advantage of V2 features, you will need to recreate your bot with V2 APIs. The Lex V1 APIs are not compatible because V2 APIs use an updated information architecture to enable simplified resource versioning and support for multiple languages within a bot. Converting to V2 APIs is easy, so get started with this step-by-step migration guide. /lex/faqs/;Which regions and languages do the V2 APIs support?;The Amazon Lex V2 APIs and enhanced console experience are available in all existing 8 regions and languages, including US English, Spanish, French, German, Italian, Japanese, Australian English, British English, Canadian French, Latin American Spanish, and US Spanish. For a list of the supported Amazon Lex AWS regions, please visit the AWS Region Table. /lex/faqs/;Will the support for new features such as simplified versioning and multiple languages in a bot be available in the existing APIs?;No. These features are only available in the V2 APIs. If you want to take advantage of these features, you can migrate to V2 APIs by following this migration guide. /lex/faqs/;Will I be able to access the V1 console?;Yes, you can access the V1 console in the AWS Management Console. Once in the Lex console, you can navigate between the V1 and V2 consoles. The bots created in the V1 console will only be visible within the V1 console. You will not be able to access your V1 bots in the V2 console until you recreate them in the V2 console. Migrating your bots to V2 is easy; here is a step-by-step migration guide. /lex/faqs/;How do I access the V2 console?;You can click on the link in the left navigation bar to choose V1 or V2 as your console. /lex/faqs/;Can I still use Lex V1 APIs?;Yes. The existing Lex V1 APIs are still supported. You can continue to use them to build and conduct your bot conversations. /lex/faqs/;How is this different from Alexa Skills Kit?;Alexa Skills Kit (ASK) is used to build skills for use in the Alexa ecosystem and devices and lets developers take advantage of all Alexa capabilities such as the Smart Home and Flash Briefing APIs, streaming audio, and rich GUI experiences. Amazon Lex bots support both voice and text and can be deployed across mobile and messaging platforms. /lex/faqs/;Do I need a wake word to invoke an Amazon Lex intent?;Amazon Lex does not support wake word functionality. The app that integrates with Amazon Lex will be responsible for triggering the microphone, i.e., push-to-talk. /lex/faqs/;Can an Amazon Lex bot respond using Alexa’s voice?;Currently we do not support the Alexa voice for Amazon Lex responses. However, there are 7 other voices from which to choose. 
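To illustrate the request-and-response interaction described above, here is a minimal boto3 sketch that sends one user utterance to a Lex V2 bot with the RecognizeText runtime API; the bot ID, alias ID, session ID, and utterance are placeholder values.
import boto3

lex_runtime = boto3.client("lexv2-runtime")

# Each user input (one turn of the conversation) is processed as one API call.
response = lex_runtime.recognize_text(
    botId="ABCDE12345",        # placeholder bot ID
    botAliasId="TSTALIASID",   # placeholder alias ID
    localeId="en_US",
    sessionId="user-42-session",
    text="I'd like to book a hotel in Chicago for two nights",
)

# The bot's reply messages and the current intent/slot state come back in the response.
for message in response.get("messages", []):
    print(message["content"])
print(response["sessionState"].get("intent", {}).get("name"))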
/lex/faqs/;Can I create an Alexa Skill from an Amazon Lex bot?;"Amazon Lex provides the ability for you to export your Amazon Lex bot schema into a JSON file that is compatible with Amazon Alexa. Once downloaded as JSON, you need to log in to the Alexa developer portal, navigate to the ‘Interaction Model’ tab, launch the Alexa Skill Builder, and paste the bot schema into the Code Editor of your Alexa Skill. More details and steps can be found in the Amazon Lex documentation." /lex/faqs/;When exporting my Amazon Lex bot schema to use in an Alexa skill, are my AWS Lambda functions exported and included in the bot schema?;No. Only the bot definition will be downloaded. /lex/faqs/;I have created an Alexa Skill from an Amazon Lex bot using the schema export feature. Which Alexa platforms support the Amazon Lex bot schema?;All Alexa platforms that support Alexa skills can be used: the Amazon Echo, Amazon Echo Dot, Amazon Look, Amazon Tap, Amazon Echo Show, and any third-party Alexa-enabled devices. /lex/faqs/;Are voice and text inputs processed by Amazon Lex stored, and how are they used by AWS?;Amazon Lex may store and use voice and text inputs processed by the service solely to provide and maintain the service and to improve and develop the quality of Amazon Lex and other Amazon machine-learning/artificial-intelligence technologies. Use of your content is necessary for continuous improvement of your Amazon Lex customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. You may opt out of having your content used to improve and develop the quality of Amazon Lex and other Amazon machine-learning/artificial-intelligence technologies by using an AWS Organizations opt-out policy. For information about how to opt out, see Managing AI services opt-out policy. /lex/faqs/;Can I delete voice and text inputs stored by Amazon Lex?;Yes. You can request deletion of voice and text inputs associated with your account by contacting AWS Support. Deleting voice and text inputs may degrade your Amazon Lex experience. For information about how to opt out, see Managing AI services opt-out policy. /lex/faqs/;Who has access to my content that is processed and stored by Amazon Lex?;Only authorized employees will have access to your content that is processed by Amazon Lex. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /lex/faqs/;Do I still own my content that is processed and stored by Amazon Lex?;You always retain ownership of your content and we will only use your content with your consent. 
/lex/faqs/;Can I use Amazon Lex in connection with websites, programs or other applications that are directed or targeted to children under age 13 and subject to the Children’s Online Privacy Protection Act (COPPA)?;Yes, subject to your compliance with the Amazon Lex Service Terms, including your obligation to provide any required notices and obtain any required verifiable parental consent under COPPA, you may use Amazon Lex in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13. Amazon Lex does not store or retain voice or text utterance information from websites, programs, or applications that are identified by customers in accordance with the Amazon Lex Service Terms as being directed or targeted, in whole or in part, to children under age 13 and subject to COPPA. /lex/faqs/;How do I determine whether my website, program, or application is subject to COPPA?;For information about the requirements of COPPA and guidance for determining whether your website, program, or other application is subject to COPPA, please refer directly to the resources provided and maintained by the United States Federal Trade Commission. This site also contains information regarding how to determine whether a service is directed or targeted, in whole or in part, to children under age 13. /lex/faqs/;What SDKs are supported for Amazon Lex?;Amazon Lex currently supports SDKs for runtime services. iOS and Android SDKs, as well as Java, JavaScript, Python, CLI, .NET, Ruby, PHP, Go, and C++, support both text and speech input. /lex/faqs/;Can I use SDKs to build bots?;You can build bots using SDKs: Java, JavaScript, Python, CLI, .NET, Ruby, PHP, Go, and C++. /lex/faqs/;How does Amazon Lex count the number of requests?;Every input to an Amazon Lex bot is counted as a request. For example, if an end user provides 5 inputs to the bot as part of a conversation, these are billed as 5 requests. Usage is metered and billed per request. /sagemaker/faqs/;What is Amazon SageMaker?;Amazon SageMaker is a fully managed service to prepare data and build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. /sagemaker/faqs/;In which Regions is Amazon SageMaker available?;For a list of the supported Amazon SageMaker AWS Regions, please visit the AWS Regional Services page. Also, for more information, see Regional endpoints in the AWS general reference guide. /sagemaker/faqs/;What is the service availability of Amazon SageMaker?;Amazon SageMaker is designed for high availability. There are no maintenance windows or scheduled downtimes. SageMaker APIs run in Amazon’s proven, high-availability data centers, with service stack replication configured across three facilities in each AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage. /sagemaker/faqs/;How does Amazon SageMaker secure my code?;Amazon SageMaker stores code in ML storage volumes, secured by security groups and optionally encrypted at rest. /sagemaker/faqs/;What security measures does Amazon SageMaker have?;Amazon SageMaker ensures that ML model artifacts and other system artifacts are encrypted in transit and at rest. Requests to the SageMaker API and console are made over a secure (SSL) connection. 
You pass AWS Identity and Access Management roles to SageMaker to provide permissions to access resources on your behalf for training and deployment. You can use encrypted Amazon Simple Storage Service (Amazon S3) buckets for model artifacts and data, as well as pass an AWS Key Management Service (KMS) key to SageMaker notebooks, training jobs, and endpoints, to encrypt the attached ML storage volume. Amazon SageMaker also supports Amazon Virtual Private Cloud (VPC) and AWS PrivateLink support. /sagemaker/faqs/;Does Amazon SageMaker use or share models, training data, or algorithms?;Amazon SageMaker does not use or share customer models, training data, or algorithms. We know that customers care deeply about privacy and data security. That's why AWS gives you ownership and control over your content through simple, powerful tools that allow you to determine where your content will be stored, secure your content in transit and at rest, and manage your access to AWS services and resources for your users. We also implement responsible and sophisticated technical and physical controls that are designed to prevent unauthorized access to or disclosure of your content. As a customer, you maintain ownership of your content, and you select which AWS services can process, store, and host your content. We do not access your content for any purpose without your consent. /sagemaker/faqs/;How am I charged for Amazon SageMaker?;"You pay for ML compute, storage, and data processing resources you use for hosting the notebook, training the model, performing predictions, and logging the outputs. Amazon SageMaker allows you to select the number and type of instance used for the hosted notebook, training, and model hosting. You pay only for what you use, as you use it; there are no minimum fees and no upfront commitments. See the Amazon SageMaker pricing page and the Amazon SageMaker Pricing calculator for details." /sagemaker/faqs/;How can I optimize my Amazon SageMaker costs, such as detecting and stopping idle resources in order to avoid unnecessary charges?;"There are several best practices you can adopt to optimize your Amazon SageMaker resource utilization. Some approaches involve configuration optimizations; others involve programmatic solutions. A full guide on this concept, complete with visual tutorials and code samples, can be found in this blog post." /sagemaker/faqs/;What if I have my own notebook, training, or hosting environment?;Amazon SageMaker provides a full end-to-end workflow, but you can continue to use your existing tools with SageMaker. You can easily transfer the results of each stage in and out of SageMaker as your business requirements dictate. /sagemaker/faqs/;Is R supported with Amazon SageMaker?;Yes, R is supported with Amazon SageMaker. You can use R within SageMaker notebook instances, which include a preinstalled R kernel and the reticulate library. Reticulate offers an R interface for the Amazon SageMaker Python SDK, enabling ML practitioners to build, train, tune, and deploy R models. /sagemaker/faqs/;How can I check for imbalances in my model?;Amazon SageMaker Clarify helps improve model transparency by detecting statistical bias across the entire ML workflow. SageMaker Clarify checks for imbalances during data preparation, after training, and ongoing over time, and also includes tools to help explain ML models and their predictions. Findings can be shared through explainability reports. 
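As a minimal sketch of the security controls described above (an execution role, KMS keys for the attached ML storage volume and for output artifacts, and VPC settings for a training job), the following uses the SageMaker Python SDK; all ARNs, the container image URI, and the S3 paths are placeholder values, not a prescribed configuration.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_kms_key="arn:aws:kms:us-east-1:123456789012:key/volume-key-id",   # encrypt the attached ML storage volume
    output_kms_key="arn:aws:kms:us-east-1:123456789012:key/output-key-id",   # encrypt model artifacts written to S3
    output_path="s3://my-encrypted-bucket/models/",
    subnets=["subnet-0abc123"],                    # run the training job inside your VPC
    security_group_ids=["sg-0def456"],
    encrypt_inter_container_traffic=True,          # encrypt traffic between training containers
)

estimator.fit({"train": "s3://my-encrypted-bucket/train/"})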
/sagemaker/faqs/;What is Amazon SageMaker Studio?;Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps. SageMaker Studio gives you complete access, control, and visibility into each step required to prepare data and build, train, and deploy models. You can quickly upload data, create new notebooks, train and tune models, move back and forth between steps to adjust experiments, compare results, and deploy models to production all in one place, making you much more productive. All ML development activities including notebooks, experiment management, automatic model creation, debugging and profiling, and model drift detection can be performed within the unified SageMaker Studio visual interface. /sagemaker/faqs/;What is RStudio on Amazon SageMaker?;RStudio on Amazon SageMaker is the first fully managed RStudio Workbench in the cloud. You can quickly launch the familiar RStudio integrated development environment (IDE) and dial up and down the underlying compute resources without interrupting your work, making it easy to build machine learning (ML) and analytics solutions in R at scale. You can seamlessly switch between the RStudio IDE and Amazon SageMaker Studio notebooks for R and Python development. All your work, including code, datasets, repositories, and other artifacts, is automatically synchronized between the two environments to reduce context switch and boost productivity. /sagemaker/faqs/;How does Amazon SageMaker Studio pricing work?;There is no additional charge for using Amazon SageMaker Studio. You pay only for the underlying compute and storage charges on the services you use within Amazon SageMaker Studio. /sagemaker/faqs/;In which Regions is Amazon SageMaker Studio supported?;You can find the Regions where Amazon SageMaker Studio is supported in the documentation here. /sagemaker/faqs/;What ML governance tools does Amazon SageMaker provide?;Amazon SageMaker provides purpose-built ML governance tools across the ML lifecycle. With SageMaker Role Manager, administrators can define minimum permissions in minutes. SageMaker Model Cards makes it easier to capture, retrieve, and share essential model information from conception to deployment, and SageMaker Model Dashboard keeps you informed on production model behavior, all in one place. View more details. /sagemaker/faqs/;What does Amazon SageMaker Role Manager do?;You can define minimum permissions in minutes with Amazon SageMaker Role Manager. SageMaker Role Manager provides a baseline set of permissions for ML activities and personas with a catalog of pre-built IAM policies. You can keep the baseline permissions, or customize them further based on your specific needs. With a few self-guided prompts, you can quickly input common governance constructs such as network access boundaries and encryption keys. SageMaker Role Manager will then generate the IAM policy automatically. You can discover the generated role and associated policies through the AWS IAM console. To further tailor the permissions to your use case, attach your managed IAM policies to the IAM role that you create with SageMaker Role Manager. You can also add tags to help identify the role and organize across AWS services. /sagemaker/faqs/;What does Amazon SageMaker Model Cards do?;Amazon SageMaker Model Cards helps you centralize and standardize model documentation throughout the ML lifecycle by creating a single source of truth for model information. 
SageMaker Model Cards auto-populates training details to accelerate the documentation process. You can also add details such as the purpose of the model and the performance goals. You can attach model evaluation results to your model card and provide visualizations to gain key insights into model performance. SageMaker Model Cards can easily be shared with others by exporting to a pdf format. /sagemaker/faqs/;What does Amazon SageMaker Model Dashboard do?;Amazon SageMaker Model Dashboard gives you a comprehensive overview of deployed models and endpoints, letting you track resources and model behavior violations through one pane. It allows you to monitor model behavior in four dimensions, including data and model quality, and bias and feature attribution drift through its integration with Amazon SageMaker Model Monitor and Amazon SageMaker Clarify. SageMaker Model Dashboard also provides an integrated experience to set up and receive alerts for missing and inactive model monitoring jobs, and deviations in model behavior for model quality, data quality, bias drift, and feature attribution drift. You can further inspect individual models and analyze factors impacting model performance over time. Then, you can follow up with ML practitioners to take corrective measures. /sagemaker/faqs/;What is Amazon SageMaker Autopilot?;Amazon SageMaker Autopilot is the industry’s first automated machine learning capability that gives you complete control and visibility into your ML models. SageMaker Autopilot automatically inspects raw data, applies feature processors, picks the best set of algorithms, trains and tunes multiple models, tracks their performance, and then ranks the models based on performance, all with just a few clicks. The result is the best-performing model that you can deploy at a fraction of the time normally required to train the model. You get full visibility into how the model was created and what’s in it, and SageMaker Autopilot integrates with Amazon SageMaker Studio. You can explore up to 50 different models generated by SageMaker Autopilot inside SageMaker Studio so it’s easy to pick the best model for your use case. SageMaker Autopilot can be used by people without ML experience to easily produce a model, or it can be used by experienced developers to quickly develop a baseline model on which teams can further iterate. /sagemaker/faqs/;What built-in algorithms are supported in Amazon SageMaker Autopilot?;Amazon SageMaker Autopilot supports 2 built-in algorithms: XGBoost and Linear Learner. /sagemaker/faqs/;Can I stop an Amazon SageMaker Autopilot job manually?;Yes. You can stop a job at any time. When an Amazon SageMaker Autopilot job is stopped, all ongoing trials will be stopped and no new trial will be started. /sagemaker/faqs/;What solutions come pre-built with Amazon SageMaker JumpStart?;SageMaker JumpStart includes solutions that are preconfigured with all necessary AWS services to launch a solution into production. Solutions are fully customizable so you can easily modify them to fit your specific use case and dataset. You can use solutions for over 15 use cases including demand forecasting, fraud detection, and predictive maintenance, and readily deploy solutions with just a few clicks. For more information about all solutions available, visit the SageMaker getting started page. 
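For illustration, here is a minimal sketch of launching the Amazon SageMaker Autopilot job described above from the SageMaker Python SDK; the role ARN, S3 paths, target column, and job name are placeholder values.
from sagemaker.automl.automl import AutoML

automl = AutoML(
    role="arn:aws:iam::123456789012:role/MySageMakerExecutionRole",
    target_attribute_name="churned",              # the column Autopilot should learn to predict
    output_path="s3://my-bucket/autopilot-output/",
    max_candidates=50,                            # explore up to 50 candidate models
)

# Point Autopilot at a CSV in S3; it inspects the data, engineers features,
# and trains and tunes multiple candidate models automatically.
automl.fit(inputs="s3://my-bucket/training/customers.csv", job_name="churn-autopilot")

# Inspect the best-performing candidate (it can then be deployed to an endpoint).
print(automl.best_candidate()["CandidateName"])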
/sagemaker/faqs/;How can I share ML artifacts with others within my organization?;With Amazon SageMaker JumpStart, data scientists and ML developers can easily share ML artifacts, including notebooks and models, within their organization. Administrators can set up a repository that is accessible by a defined set of users. All users with permission to access the repository can browse, search, and use models and notebooks as well as the public content inside of SageMaker JumpStart. Users can select artifacts to train models, deploy endpoints, and execute notebooks in SageMaker JumpStart. /sagemaker/faqs/;Why should I use Amazon SageMaker JumpStart to share ML artifacts with others within my organization?;With Amazon SageMaker JumpStart, you can accelerate time-to-market when building ML applications. Models and notebooks built by one team inside of your organization can be easily shared with other teams within your organization with just a few clicks. Internal knowledge sharing and asset reuse can significantly increase the productivity of your organization. /sagemaker/faqs/;How does Amazon SageMaker JumpStart pricing work?;You are charged for the AWS services launched from Amazon SageMaker JumpStart, such as training jobs and endpoints, based on SageMaker pricing. There is no additional charge for using SageMaker JumpStart. /sagemaker/faqs/;What is Amazon SageMaker Canvas?;Amazon SageMaker Canvas is a no-code service with an intuitive, point-and-click interface that lets you create highly accurate ML-based predictions from your data. SageMaker Canvas lets you access and combine data from a variety of sources using a drag-and-drop user interface, automatically cleaning and preparing data to minimize manual cleanup. SageMaker Canvas applies a variety of state-of-the-art ML algorithms to find highly accurate predictive models and provides an intuitive interface to make predictions. You can use SageMaker Canvas to make much more precise predictions in a variety of business applications and easily collaborate with data scientists and analysts in your enterprise by sharing your models, data, and reports. To learn more about SageMaker Canvas, please visit the SageMaker Canvas FAQ page. /sagemaker/faqs/;How does Amazon SageMaker Canvas pricing work?;With Amazon SageMaker Canvas, you pay based on usage. SageMaker Canvas lets you interactively ingest, explore, and prepare your data from multiple sources, train highly accurate ML models with your data, and generate predictions. There are two components that determine your bill: session charges based on the number of hours for which SageMaker Canvas is used or logged into, and charges for training the model based on the size of the dataset used to build the model. For more information see the SageMaker Canvas pricing page. /sagemaker/faqs/;How can I build a continuous integration and delivery (CI/CD) pipeline with Amazon SageMaker?;Amazon SageMaker Pipelines helps you create fully automated ML workflows from data preparation through model deployment so you can scale to thousands of ML models in production. SageMaker Pipelines comes with a Python SDK that connects to Amazon SageMaker Studio so you can take advantage of a visual interface to build each step of the workflow. Then using a single API, you can connect each step to create an end-to-end workflow. SageMaker Pipelines takes care of managing data between steps, packaging the code recipes, and orchestrating their execution, reducing months of coding to a few hours. 
Every time a workflow executes, a complete record of the data processed and actions taken is kept so data scientists and ML developers can quickly debug problems. There is no additional charge for Amazon SageMaker Pipelines. You pay only for the underlying compute or any separate AWS services you use within SageMaker Pipelines. /sagemaker/faqs/;How do I view all my trained models to choose the best model to move to production?; There is no additional charge for using Amazon SageMaker Components for Kubeflow Pipelines. /sagemaker/faqs/;What components of Amazon SageMaker can be added to Amazon SageMaker Pipelines?; There is no additional charge for using Amazon SageMaker Components for Kubeflow Pipelines. /sagemaker/faqs/;How do I track my model components across the entire ML workflow?; There is no additional charge for using Amazon SageMaker Components for Kubeflow Pipelines. /sagemaker/faqs/;How can I train machine learning models with data prepared in Amazon SageMaker Data Wrangler?;Amazon SageMaker Data Wrangler provides a unified experience enabling you to prepare data and seamlessly train a machine learning model in Amazon SageMaker Autopilot. SageMaker Autopilot automatically builds, trains, and tunes the best ML models based on your data. With SageMaker Autopilot, you still maintain full control and visibility of your data and model. You can also use features prepared in SageMaker Data Wrangler with your existing models. You can configure Amazon SageMaker Data Wrangler processing jobs to run as part of your SageMaker training pipeline either by configuring the job in the user interface (UI) or exporting a notebook with the orchestration code. /sagemaker/faqs/;How does Amazon SageMaker Data Wrangler handle new data when I have prepared my features on historical data?;You can configure and launch Amazon SageMaker processing jobs directly from the SageMaker Data Wrangler UI, including scheduling your data processing job and parametrizing your data sources to easily transform new batches of data at scale. /sagemaker/faqs/;How does Amazon SageMaker Data Wrangler work with my CI/CD processes?;Once you have prepared your data, Amazon SageMaker Data Wrangler provides different options for promoting your SageMaker Data Wrangler flow to production and integrates seamlessly with MLOps and CI/CD capabilities. You can configure and launch SageMaker processing jobs directly from the SageMaker Data Wrangler UI, including scheduling your data processing job and parametrizing your data sources to easily transform new batches of data at scale. Alternatively, SageMaker Data Wrangler integrates seamlessly with SageMaker processing and the SageMaker Spark container, allowing you to easily use SageMaker SDKs to integrate SageMaker Data Wrangler into your production workflow. /sagemaker/faqs/;What model does Amazon SageMaker Data Wrangler Quick Model use?;In a few clicks of a button, Amazon SageMaker Data Wrangler splits and trains an XGBoost model with default hyperparameters. Based on the problem type, SageMaker Data Wrangler provides a model summary, feature summary, and confusion matrix to quickly give you insight so you can iterate on your data preparation flows. /sagemaker/faqs/;What size data does Amazon SageMaker Data Wrangler support?;Amazon SageMaker Data Wrangler supports various sampling techniques–such as top-K, random, and stratified sampling for importing data—so that you can quickly transform your data using SageMaker Data Wrangler’s UI. 
If you are using large or wide datasets, you can increase the SageMaker Data Wrangler instance size to improve performance. Once you have created your flow, you can process your full dataset using SageMaker Data Wrangler processing jobs. /sagemaker/faqs/;Does Amazon SageMaker Data Wrangler work with Amazon SageMaker Feature Store?;You can configure Amazon SageMaker Feature Store as a destination for your features prepared in Amazon SageMaker Data Wrangler. This can be done directly in the UI or you can export a notebook generated specifically for processing data with SageMaker Feature Store as the destination. /sagemaker/faqs/;How do I store features for my ML models?;Online features are used in applications required to make real-time predictions. Online features are served from a high-throughput repository with single-digit millisecond latency for fast predictions. /sagemaker/faqs/;How can I reproduce a feature from a given moment in time?;Online features are used in applications required to make real-time predictions. Online features are served from a high-throughput repository with single-digit millisecond latency for fast predictions. /sagemaker/faqs/;What are offline features?;Online features are used in applications required to make real-time predictions. Online features are served from a high-throughput repository with single-digit millisecond latency for fast predictions. /sagemaker/faqs/;What are online features?;Online features are used in applications required to make real-time predictions. Online features are served from a high-throughput repository with single-digit millisecond latency for fast predictions. /sagemaker/faqs/;How does pricing work for Amazon SageMaker Feature Store?;You can get started with Amazon SageMaker Feature Store for free, as part of the AWS Free Tier. With SageMaker Feature Store, you pay for writing into the feature store, and reading and storage from the online feature store. For pricing details, see the SageMaker Pricing Page. /sagemaker/faqs/;What does Amazon SageMaker offer for data labeling?;Amazon SageMaker provides two data labeling offerings, Amazon SageMaker Ground Truth Plus and Amazon SageMaker Ground Truth. Both options allow you to identify raw data, such as images, text files, and videos, and add informative labels to create high-quality training datasets for your ML models. To learn more, visit the SageMaker Data Labeling webpage. /sagemaker/faqs/;What is geospatial data?;Geospatial data represents features or objects on the Earth’s surface. The first type of geospatial data is vector data which uses two-dimensional geometries such as, points, lines, or polygons to represent objects like roads and land boundaries. The second type of geospatial data is raster data such as imagery captured by satellite, aerial platforms, or remote sensing data. This data type uses a matrix of pixels to define where features are located. You can use raster formats for storing data that varies. A third type of geospatial data is geo-tagged location data. It includes points of interest—for example, the Eiffel Tower—location tagged social media posts, latitude and longitude coordinates, or different styles and formats of street addresses. /sagemaker/faqs/;What are Amazon SageMaker geospatial capabilities?;Amazon SageMaker geospatial capabilities make it easier for data scientists and machine learning (ML) engineers to build, train, and deploy ML models for making predictions using geospatial data. 
You can bring your own data, for example, Planet Labs satellite data from Amazon S3, or acquire data from Open Data on AWS, Amazon Location Service, and other Amazon SageMaker geospatial data sources. /sagemaker/faqs/;Why should I use geospatial ML on Amazon SageMaker?;You can use Amazon SageMaker geospatial capabilities to make predictions on geospatial data faster than do-it-yourself solutions. Amazon SageMaker geospatial capabilities make it easier to access geospatial data from your existing customer data lakes, open-source datasets, and other Amazon SageMaker geospatial data sources. Amazon SageMaker geospatial capabilities minimize the need for building custom infrastructure and data pre-processing functions by offering purpose-built algorithms for efficient data preparation, model training, and inference. You can also create and share custom visualizations and data with your organization from Amazon SageMaker Studio. Amazon SageMaker geospatial capabilities include pre-trained models for common uses in agriculture, real estate, insurance, and financial services. /sagemaker/faqs/;What are Amazon SageMaker Studio notebooks?;Amazon SageMaker Studio notebooks are quick-start, collaborative, managed Jupyter notebooks. Amazon SageMaker Studio notebooks integrate with purpose-built ML tools in SageMaker and other AWS services for end-to-end ML development in Amazon SageMaker Studio, the fully integrated development environment (IDE) for ML. /sagemaker/faqs/;How are Amazon SageMaker Studio notebooks different from the instance-based notebooks offering?;SageMaker Studio notebooks offer a few important features that differentiate them from the instance-based notebooks. With Studio notebooks, you can quickly launch notebooks without needing to manually provision an instance and wait for it to be operational. The startup time of launching the UI to read and execute a notebook is also faster than with the instance-based notebooks. /sagemaker/faqs/;How do Amazon SageMaker Studio notebooks work?;Amazon SageMaker Studio notebooks are one-click Jupyter notebooks that can be spun up quickly. The underlying compute resources are fully elastic, so you can easily dial the available resources up or down, and the changes take place automatically in the background without interrupting your work. SageMaker also enables one-click sharing of notebooks. You can easily share notebooks with others, and they’ll get the exact same notebook, saved in the same place. /sagemaker/faqs/;What are the shared spaces in Amazon SageMaker?;Machine learning practitioners can create a shared workspace where teammates can read and edit Amazon SageMaker Studio notebooks together. By using shared spaces, teammates can co-edit the same notebook file, run notebook code simultaneously, and review the results together to eliminate back-and-forth and streamline collaboration. In shared spaces, ML teams have built-in support for services like Bitbucket and AWS CodeCommit, so they can easily manage different versions of their notebooks and compare changes over time. Any resources created from within the notebooks, such as experiments and ML models, are automatically saved and associated with the specific workspace where they were created, so teams can more easily stay organized and accelerate ML model development.
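Relating to the shared spaces entry above, the following is a minimal, hedged sketch of creating a shared space programmatically. It assumes the boto3 SageMaker create_space API; the domain ID and space name are hypothetical placeholders, not values from this FAQ.

```python
import boto3

# Minimal sketch: create a shared space in an existing SageMaker Studio domain.
# DomainId and SpaceName below are hypothetical placeholders.
sm = boto3.client("sagemaker")

response = sm.create_space(
    DomainId="d-example1234",          # hypothetical Studio domain ID
    SpaceName="team-fraud-detection",  # hypothetical shared space name
)
print(response["SpaceArn"])

# Teammates can then launch notebook apps into this space from Studio;
# experiments and models created there remain associated with the space.
```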
/sagemaker/faqs/;How do Amazon SageMaker Studio notebooks work with other AWS services?;Amazon SageMaker Studio notebooks give you access to all SageMaker features, such as distributed training, batch transform, hosting, and experiment management. You can access other services such as datasets in Amazon S3, Amazon Redshift, AWS Glue, Amazon EMR, or AWS Lake Formation from SageMaker notebooks. /sagemaker/faqs/;How does Amazon SageMaker Studio notebooks pricing work?;You pay for both compute and storage when you use SageMaker Studio notebooks. See Amazon SageMaker Pricing for charges by compute instance type. Your notebooks and associated artifacts such as data files and scripts are persisted on Amazon EFS. See Amazon EFS Pricing for storage charges. As part of the AWS Free Tier, you can get started with Amazon SageMaker Studio notebooks for free. /sagemaker/faqs/;Do I get charged separately for each notebook created and run in SageMaker Studio?;No. You can create and run multiple notebooks on the same compute instance. You pay only for the compute that you use, not for individual items. You can read more about this in our metering guide. /sagemaker/faqs/;How do I monitor and shut down the resources used by my notebooks?;You can monitor and shut down the resources used by your SageMaker Studio notebooks through both SageMaker Studio visual interface and the AWS Management Console. See the documentation for more details. /sagemaker/faqs/;I’m running a SageMaker Studio notebook. Will I still be charged if I close my browser, close the notebook tab, or just leave the browser open?;Yes, you will continue to be charged for the compute. This is similar to starting Amazon EC2 instances in the AWS Management Console and then closing the browser. The Amazon EC2 instances are still running and you still incur charges unless you explicitly shut down the instance. /sagemaker/faqs/;Do I get charged for creating and setting up an Amazon SageMaker Studio domain?;No, you don’t get charged for creating or configuring an Amazon SageMaker Studio domain, including adding, updating, and deleting user profiles. /sagemaker/faqs/;How do I see the itemized charges for Amazon SageMaker Studio notebooks or other Amazon SageMaker services?;"As an admin, you can view the list of itemized charges for Amazon SageMaker, including SageMaker Studio, in the AWS Billing console. From the AWS Management Console for SageMaker, choose Services on the top menu, type ""billing"" in the search box and select Billing from the dropdown, then select Bills on the left panel. In the Details section, you can click on SageMaker to expand the list of Regions and drill down to the itemized charges." /sagemaker/faqs/;What is Amazon SageMaker Studio Lab?;"Amazon SageMaker Studio Lab is a free ML development environment that provides the compute, storage (up to 15 GB), and security—all at no cost—for anyone to learn and experiment with ML. All you need to get started is a valid email ID; you don’t need to configure infrastructure or manage identity and access or even sign up for an AWS account. SageMaker Studio Lab accelerates model building through GitHub integration, and it comes preconfigured with the most popular ML tools, frameworks, and libraries to get you started immediately. SageMaker Studio Lab automatically saves your work so you don’t need to restart between sessions. It’s as easy as closing your laptop and coming back later." 
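Relating to the earlier entry on monitoring and shutting down the resources used by Studio notebooks, the following is a hedged sketch using the boto3 list_apps and delete_app calls. The domain ID and user profile name are hypothetical placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# List the running Studio apps for one user profile in a domain, then shut
# down any KernelGateway apps that are still "InService" to stop compute
# charges. The domain ID and user profile name are hypothetical placeholders.
apps = sm.list_apps(
    DomainIdEquals="d-example1234",
    UserProfileNameEquals="data-scientist-1",
)["Apps"]

for app in apps:
    if app["AppType"] == "KernelGateway" and app["Status"] == "InService":
        sm.delete_app(
            DomainId=app["DomainId"],
            UserProfileName=app["UserProfileName"],
            AppType=app["AppType"],
            AppName=app["AppName"],
        )
        print(f"Deleting {app['AppName']}")
```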
/sagemaker/faqs/;Why should I use Amazon SageMaker Studio Lab?;Amazon SageMaker Studio Lab is for students, researchers, and data scientists who need a free notebook development environment with no setup required for their ML classes and experiments. SageMaker Studio Lab is ideal for users who do not need a production environment but still want a subset of the SageMaker functionality to improve their ML skills. SageMaker sessions are automatically saved, enabling users to pick up where they left off for each user session. /sagemaker/faqs/;How does Amazon SageMaker Studio Lab work with other AWS services?;Amazon SageMaker Studio Lab is a service built on AWS and uses many of the same core services as Amazon SageMaker Studio, such as Amazon S3 and Amazon EC2. Unlike the other services, customers will not need an AWS account. Instead, they will create an Amazon SageMaker Studio Lab specific account with an email address. This will give the user access to a limited environment (15 GB of storage, and 12 hour sessions) for them to run ML notebooks. /sagemaker/faqs/;What is Amazon SageMaker Canvas?;Amazon SageMaker Canvas is a visual drag-and-drop service that allows business analysts to build ML models and generate accurate predictions without writing any code or requiring ML expertise. SageMaker Canvas makes it easy to access and combine data from a variety of sources, automatically clean data and apply a variety of data adjustments, and build ML models to generate accurate predictions with a single click. You can also easily publish results, explain and interpret models, and share models with others within your organization to review. /sagemaker/faqs/;What data sources does Amazon SageMaker Canvas support?;Amazon SageMaker Canvas enables you to seamlessly discover AWS data sources that your account has access to, including Amazon S3 and Amazon Redshift. You can browse and import data using the SageMaker Canvas visual drag-and-drop interface. Additionally, you can drag and drop files from your local disk, and use pre-built connectors to import data from third-party sources such as Snowflake. /sagemaker/faqs/;How do I build an ML model to generate accurate predictions in Amazon SageMaker Canvas?;Once you have connected sources, selected a dataset, and prepared your data, you can select the target column that you want to predict to initiate a model creation job. Amazon SageMaker Canvas will automatically identify the problem type, generate new relevant features, test a comprehensive set of prediction models using ML techniques such as linear regression, logistic regression, deep learning, time-series forecasting, and gradient boosting, and build the model that makes accurate predictions based on your dataset. /sagemaker/faqs/;How long does it take to build a model in Amazon SageMaker Canvas? How can I monitor progress during model creation?;The time it takes to build a model depends on the size of your dataset. Small datasets can take less than 30 minutes, and large datasets can take a few hours. As the model creation job progresses, Amazon SageMaker Canvas provides detailed visual updates, including percent job complete and the amount of time left for job completion. /sagemaker/faqs/;What is Amazon SageMaker Experiments?;"Amazon SageMaker Experiments helps you organize and track iterations to ML models. SageMaker Experiments helps you manage iterations by automatically capturing the input parameters, configurations, and results, and storing them as ""experiments"". 
You can work within the visual interface of Amazon SageMaker Studio, where you can browse active experiments, search for previous experiments by their characteristics, review previous experiments with their results, and compare experiment results visually." /sagemaker/faqs/;What is Amazon SageMaker Debugger?;Amazon SageMaker Debugger automatically captures real-time metrics during training, such as confusion matrices and learning gradients, to help improve model accuracy. The metrics from SageMaker Debugger can be visualized in Amazon SageMaker Studio for easy understanding. SageMaker Debugger can also generate warnings and remediation advice when common training problems are detected. SageMaker Debugger also automatically monitors and profiles system resources such as CPUs, GPUs, network, and memory in real time, and provides recommendations on re-allocation of these resources. This enables you to use your resources efficiently during training and helps reduce costs and resources. /sagemaker/faqs/;What is Amazon SageMaker Training Compiler?;Amazon SageMaker Training Compiler is a deep learning (DL) compiler that accelerates DL model training by up to 50 percent through graph- and kernel-level optimizations to use GPUs more efficiently. SageMaker Training Compiler is integrated with versions of TensorFlow and PyTorch in SageMaker, so you can speed up training in these popular frameworks with minimal code changes. /sagemaker/faqs/;How does Amazon SageMaker Training Compiler work?;Amazon SageMaker Training Compiler accelerates training jobs by converting DL models from their high-level language representation to hardware-optimized instructions that train faster than jobs with the native frameworks. More specifically, SageMaker Training Compiler uses graph-level optimization (operator fusion, memory planning, and algebraic simplification), data flow-level optimizations (layout transformation, common sub-expression elimination), and backend optimizations (memory latency hiding, loop oriented optimizations) to produce an optimized model training job that more efficiently uses hardware resources and, as a result, trains faster. /sagemaker/faqs/;What is Managed Spot Training?;Managed Spot Training with Amazon SageMaker lets you train your ML models using Amazon EC2 Spot instances, while reducing the cost of training your models by up to 90%. /sagemaker/faqs/;How do I use Managed Spot Training?;You enable the Managed Spot Training option when submitting your training jobs and you also specify how long you want to wait for Spot capacity. Amazon SageMaker will then use Amazon EC2 Spot instances to run your job and manages the Spot capacity. You have full visibility into the status of your training jobs, both while they are running and while they are waiting for capacity. /sagemaker/faqs/;When should I use Managed Spot Training?;Managed Spot Training is ideal when you have flexibility with your training runs and when you want to minimize the cost of your training jobs. With Managed Spot Training, you can reduce the cost of training your ML models by up to 90%. /sagemaker/faqs/;How does Managed Spot Training work?;Managed Spot Training uses Amazon EC2 Spot instances for training, and these instances can be pre-empted when AWS needs capacity. As a result, Managed Spot Training jobs can run in small increments as and when capacity becomes available. 
The training jobs need not be restarted from scratch when there is an interruption, as Amazon SageMaker can resume the training jobs using the latest model checkpoint. The built-in frameworks and the built-in computer vision algorithms with SageMaker enable periodic checkpoints, and you can enable checkpoints with custom models. /sagemaker/faqs/;Do I need to periodically checkpoint with Managed Spot Training?;We recommend periodic checkpoints as a general best practice for long-running training jobs. This prevents your Managed Spot Training jobs from restarting if capacity is pre-empted. When you enable checkpoints, Amazon SageMaker resumes your Managed Spot Training jobs from the last checkpoint. /sagemaker/faqs/;How do you calculate the cost savings with Managed Spot Training jobs?;Once a Managed Spot Training job is completed, you can see the savings in the AWS Management Console and also calculate the cost savings as the percentage difference between the duration for which the training job ran and the duration for which you were billed. /sagemaker/faqs/;Which instances can I use with Managed Spot Training?;Managed Spot Training can be used with all instances supported in Amazon SageMaker. /sagemaker/faqs/;Which AWS Regions are supported with Managed Spot Training?;Managed Spot Training is supported in all AWS Regions where Amazon SageMaker is currently available. /sagemaker/faqs/;Are there limits to the size of the dataset I can use for training?;There are no fixed limits to the size of the dataset you can use for training models with Amazon SageMaker. /sagemaker/faqs/;What algorithms does Amazon SageMaker use to generate models?;Amazon SageMaker includes built-in algorithms for linear regression, logistic regression, k-means clustering, principal component analysis, factorization machines, neural topic modeling, latent dirichlet allocation, gradient boosted trees, sequence2sequence, time-series forecasting, word2vec, and image classification. SageMaker also provides optimized Apache MXNet, Tensorflow, Chainer, PyTorch, Gluon, Keras, Horovod, Scikit-learn, and Deep Graph Library containers. In addition, Amazon SageMaker supports your custom training algorithms provided through a Docker image adhering to the documented specification. /sagemaker/faqs/;What is Automatic Model Tuning?;Most ML algorithms expose a variety of parameters that control how the underlying algorithm operates. Those parameters are generally referred to as hyperparameters and their values affect the quality of the trained models. Automatic model tuning is the process of finding a set of hyperparameters for an algorithm that can yield an optimal model. /sagemaker/faqs/;What models can be tuned with Automatic Model Tuning?;You can run automatic model tuning in Amazon SageMaker on top of any algorithm as long as it’s scientifically feasible, including built-in SageMaker algorithms, deep neural networks, or arbitrary algorithms you bring to SageMaker in the form of Docker images. /sagemaker/faqs/;Can I use Automatic Model Tuning outside of Amazon SageMaker?;Not at this time. The best model tuning performance and experience is within Amazon SageMaker. /sagemaker/faqs/;What is the underlying tuning algorithm for Automatic Model Tuning?;Currently, the algorithm for tuning hyperparameters is a customized implementation of Bayesian Optimization. It aims to optimize a customer-specified objective metric throughout the tuning process. 
Specifically, it checks the objective metric of completed training jobs and uses that knowledge to infer the hyperparameter combination for the next training job. /sagemaker/faqs/;Does Automatic Model Tuning recommend specific hyperparameters for tuning?;No. How certain hyperparameters impact the model performance depends on various factors, and it is hard to definitively say one hyperparameter is more important than the others and thus needs to be tuned. For built-in algorithms within Amazon SageMaker, we do call out whether or not a hyperparameter is tunable. /sagemaker/faqs/;How long does a hyperparameter tuning job take?;The length of time for a hyperparameter tuning job depends on multiple factors, including the size of the data, the underlying algorithm, and the values of the hyperparameters. Additionally, customers can choose the number of simultaneous training jobs and the total number of training jobs. All these choices affect how long a hyperparameter tuning job can last. /sagemaker/faqs/;Can I optimize multiple objectives simultaneously, such as optimizing a model to be both fast and accurate?;Not at this time. Currently, you need to specify a single objective metric to optimize, or change your algorithm code to emit a new metric that is a weighted average of two or more useful metrics and have the tuning process optimize towards that objective metric. /sagemaker/faqs/;How much does Automatic Model Tuning cost?;There is no charge for a hyperparameter tuning job itself. You will be charged for the training jobs that are launched by the hyperparameter tuning job, based on model training pricing. /sagemaker/faqs/;How do I decide to use Amazon SageMaker Autopilot or Automatic Model Tuning?;Amazon SageMaker Autopilot automates everything in a typical ML workflow, including feature preprocessing, algorithm selection, and hyperparameter tuning, while specifically focusing on classification and regression use cases. Automatic Model Tuning, on the other hand, is designed to tune any model, no matter whether it is based on built-in algorithms, deep learning frameworks, or custom containers. In exchange for the flexibility, you have to manually pick the specific algorithm, the hyperparameters to tune, and the corresponding search ranges. /sagemaker/faqs/;What is reinforcement learning?;Reinforcement learning is an ML technique that enables an agent to learn in an interactive environment by trial and error using feedback from its own actions and experiences. /sagemaker/faqs/;Can I train reinforcement learning models in Amazon SageMaker?;Yes, you can train reinforcement learning models in Amazon SageMaker in addition to supervised and unsupervised learning models. /sagemaker/faqs/;How is reinforcement learning different from supervised learning?;Though both supervised and reinforcement learning use a mapping between inputs and outputs, unlike supervised learning, where the feedback provided to the agent is the correct set of actions for performing a task, reinforcement learning uses delayed feedback, where reward signals are optimized to ensure a long-term goal through a sequence of actions. /sagemaker/faqs/;When should I use reinforcement learning?;While the goal of supervised learning techniques is to find the right answer based on the patterns in the training data, the goal of unsupervised learning techniques is to find similarities and differences between data points.
In contrast, the goal of reinforcement learning (RL) techniques is to learn how to achieve a desired outcome even when it is not clear how to accomplish that outcome. As a result, RL is better suited to enabling intelligent applications in which an agent can make autonomous decisions, such as robotics, autonomous vehicles, HVAC, industrial control, and more. /sagemaker/faqs/;What type of environments can I use for training RL models?;Amazon SageMaker RL supports a number of different environments for training RL models. You can use AWS services such as AWS RoboMaker, open-source environments or custom environments developed using OpenAI Gym interfaces, or commercial simulation environments such as MATLAB and Simulink. /sagemaker/faqs/;Do I need to write my own RL agent algorithms to train RL models?;No, Amazon SageMaker RL includes RL toolkits such as Coach and Ray RLlib that offer implementations of RL agent algorithms such as DQN, PPO, A3C, and many more. /sagemaker/faqs/;Can I bring my own RL libraries and algorithm implementation and run them in Amazon SageMaker RL?;Yes, you can bring your own RL libraries and algorithm implementations in Docker containers and run those in Amazon SageMaker RL. /sagemaker/faqs/;Can I do distributed rollouts using Amazon SageMaker RL?;Yes. You can even select a heterogeneous cluster where the training can run on a GPU instance and the simulations can run on multiple CPU instances. /sagemaker/faqs/;What is Amazon SageMaker Asynchronous Inference?;Amazon SageMaker Asynchronous Inference queues incoming requests and processes them asynchronously. This option is ideal for requests with large payload sizes and/or long processing times that need to be processed as they arrive. Optionally, you can configure auto-scaling settings to scale down the instance count to zero when not actively processing requests to save on costs. /sagemaker/faqs/;How do I configure auto-scaling settings to scale down the instance count to zero when not actively processing requests?;"You can scale down the Amazon SageMaker Asynchronous Inference endpoint instance count to zero in order to save on costs when you are not actively processing requests. You need to define a scaling policy that scales on the ""ApproximateBacklogPerInstance"" custom metric and set the ""MinCapacity"" value to zero. For step-by-step instructions, please visit the autoscale an asynchronous endpoint section of the developer guide." /sagemaker/faqs/;What is Amazon SageMaker Serverless Inference?;Amazon SageMaker Serverless Inference is a purpose-built serverless model serving option that makes it easy to deploy and scale ML models. SageMaker Serverless Inference endpoints automatically start the compute resources and scale them in and out depending on traffic, eliminating the need for you to choose an instance type, run provisioned capacity, or manage scaling. You can optionally specify the memory requirements for your serverless inference endpoint. You pay only for the duration of running the inference code and the amount of data processed, not for idle periods. /sagemaker/faqs/;Why should I use Amazon SageMaker Serverless Inference?;Amazon SageMaker Serverless Inference simplifies the developer experience by eliminating the need to provision capacity up front and manage scaling policies. SageMaker Serverless Inference can scale instantly from tens to thousands of inferences within seconds based on the usage patterns, making it ideal for ML applications with intermittent or unpredictable traffic.
For example, a chatbot service used by a payroll processing company experiences an increase in inquiries at the end of the month while for rest of the month traffic is intermittent. Provisioning instances for the entire month in such scenarios is not cost-effective, as you end up paying for idle periods. SageMaker Serverless Inference helps address these types of use cases by providing you automatic and fast scaling out of the box without the need for you to forecast traffic up front or manage scaling policies. Additionally, you pay only for the compute time to run your inference code (billed in milliseconds) and for data processing, making it a cost-effective option for workloads with intermittent traffic. /sagemaker/faqs/;What is Amazon SageMaker Inference Recommender?;Amazon SageMaker Inference Recommender is a new capability of Amazon SageMaker that reduces the time required to get ML models in production by automating performance benchmarking and tuning model performance across SageMaker ML instances. You can now use SageMaker Inference Recommender to deploy your model to an endpoint that delivers the best performance and minimizes cost. You can get started with SageMaker Inference Recommender in minutes while selecting an instance type and get recommendations for optimal endpoint configurations within hours, eliminating weeks of manual testing and tuning time. With SageMaker Inference Recommender, you pay only for the SageMaker ML instances used during load testing, and there are no additional charges. /sagemaker/faqs/;Why should I use Amazon SageMaker Inference Recommender?;"You should use SageMaker Inference Recommender if you need recommendations for the right endpoint configuration to improve performance and reduce costs. Previously, data scientists who wanted to deploy their models had to run manual benchmarks to select the right endpoint configuration. They had to first select the right ML instance type out of the 70+ available instance types based on the resource requirements of their models and sample payloads, and then optimize the model to account for differing hardware. Then, they had to conduct extensive load tests to validate that latency and throughput requirements are met and that the costs are low. SageMaker Inference Recommender eliminates this complexity by making it easy for you to: 1) get started in minutes with an instance recommendation; 2) conduct load tests across instance types to get recommendations on your endpoint configuration within hours; and 3) automatically tune container and model server parameters as well as perform model optimizations for a given instance type. " /sagemaker/faqs/;How does Amazon SageMaker Inference Recommender work with other AWS services?;Data scientists can access Amazon SageMaker Inference Recommender from SageMaker Studio, AWS SDK for Python (Boto3), or AWS CLI. They can get deployment recommendations within SageMaker Studio in the SageMaker model registry for registered model versions. Data scientists can search and filter the recommendations through SageMaker Studio, AWS SDK, or AWS CLI. /sagemaker/faqs/;Can Amazon SageMaker Inference Recommender support multi-model endpoints or multi-container endpoints?;No, we currently support only a single model per endpoint. /sagemaker/faqs/;What type of endpoints does SageMaker Inference Recommender support?;Currently we support only real-time endpoints. 
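Relating to the Serverless Inference entries above, the following is a minimal sketch of deploying a serverless endpoint with the SageMaker Python SDK, assuming the ServerlessInferenceConfig class. The container image, model artifact, IAM role, and endpoint name are hypothetical placeholders.

```python
# Minimal sketch: deploy a serverless inference endpoint with the SageMaker
# Python SDK. Image URI, model artifact, role, and endpoint name are
# hypothetical placeholders, not values taken from this FAQ.
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

model = Model(
    image_uri="<inference-container-image-uri>",
    model_data="s3://my-bucket/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
)

# Memory and concurrency are the only capacity settings you choose; SageMaker
# starts and scales the underlying compute based on traffic.
model.deploy(
    endpoint_name="my-serverless-endpoint",
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,
        max_concurrency=10,
    ),
)
```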
/sagemaker/faqs/;Can I use SageMaker Inference Recommender in one Region and benchmark in different Regions?;At launch, we will support all Regions supported by Amazon SageMaker, except the AWS China Regions. /sagemaker/faqs/;Does Amazon SageMaker Inference Recommender support Amazon EC2 Inf1 instances?;Yes, we support all types of containers. Amazon EC2 Inf1, based on the AWS Inferentia chip, requires a compiled model artifact using either the Neuron compiler or Amazon SageMaker Neo. Once you have a compiled model for an Inferentia target and the associated container image URI, you can use Amazon SageMaker Inference Recommender to benchmark different Inferentia instance types. /sagemaker/faqs/;What is Amazon SageMaker Model Monitor?;Amazon SageMaker Model Monitor allows developers to detect and remediate concept drift. SageMaker Model Monitor automatically detects concept drift in deployed models and provides detailed alerts that help identify the source of the problem. All models trained in SageMaker automatically emit key metrics that can be collected and viewed in Amazon SageMaker Studio. From inside SageMaker Studio, you can configure data to be collected, how to view it, and when to receive alerts. /sagemaker/faqs/;Can I access the infrastructure that Amazon SageMaker runs on?;No. Amazon SageMaker operates the compute infrastructure on your behalf, allowing it to perform health checks, apply security patches, and do other routine maintenance. You can also deploy the model artifacts from training with custom inference code in your own hosting environment. /sagemaker/faqs/;How do I scale the size and performance of an Amazon SageMaker model once in production?;Amazon SageMaker hosting automatically scales to the performance needed for your application using Application Auto Scaling. In addition, you can manually change the instance number and type without incurring downtime by modifying the endpoint configuration. /sagemaker/faqs/;How do I monitor my Amazon SageMaker production environment?;Amazon SageMaker emits performance metrics to Amazon CloudWatch Metrics so you can track metrics, set alarms, and automatically react to changes in production traffic. In addition, Amazon SageMaker writes logs to Amazon CloudWatch Logs to let you monitor and troubleshoot your production environment. /sagemaker/faqs/;What kinds of models can be hosted with Amazon SageMaker?;Amazon SageMaker can host any model that adheres to the documented specification for inference Docker images. This includes models created from Amazon SageMaker model artifacts and inference code. /sagemaker/faqs/;How many concurrent real-time API requests does Amazon SageMaker support?;Amazon SageMaker is designed to scale to a large number of transactions per second. The precise number varies based on the deployed model and the number and type of instances to which the model is deployed. /sagemaker/faqs/;What is Batch Transform?;Batch Transform enables you to run predictions on large or small batch data. There is no need to break down the dataset into multiple chunks or manage real-time endpoints. With a simple API, you can request predictions for a large number of data records and transform the data quickly and easily. /sagemaker/faqs/;How do I get started with Amazon SageMaker Edge Manager?;To get started, you compile and package your trained ML models in the cloud, register your devices, and prepare your devices with the SageMaker Edge Manager SDK so the on-device agent can load and run your model packages. /sagemaker/faqs/;What devices are supported by Amazon SageMaker Edge Manager?;Amazon SageMaker Edge Manager supports common CPU (ARM, x86) and GPU (ARM, Nvidia) based devices with Linux and Windows operating systems.
Over time, SageMaker Edge Manager will expand to support more embedded processors and mobile platforms that are also supported by SageMaker Neo. /sagemaker/faqs/;Do I need to use Amazon SageMaker to train my model in order to use Amazon SageMaker Edge Manager?;No, you do not. You can train your models elsewhere or use a pre-trained model from open source or from your model vendor. /sagemaker/faqs/;Do I need to use Amazon SageMaker Neo to compile my model in order to use Amazon SageMaker Edge Manager?;Yes, you do. Amazon SageMaker Neo converts and compiles your models into an executable that you can then package and deploy on your edge devices. Once the model package is deployed, the Amazon SageMaker Edge Manager agent will unpack the model package and run the model on the device. /sagemaker/faqs/;How do I deploy models to the edge devices?;Amazon SageMaker Edge Manager stores the model package in your specified Amazon S3 bucket. You can use the over-the-air (OTA) deployment feature provided by AWS IoT Greengrass or any other deployment mechanism of your choice to deploy the model package from your S3 bucket to the devices. /sagemaker/faqs/;How is Amazon SageMaker Edge Manager SDK different from the SageMaker Neo runtime (dlr)?;Neo dlr is an open-source runtime that only runs models compiled by the Amazon SageMaker Neo service. Compared to the open-source dlr, the SageMaker Edge Manager SDK includes an enterprise-grade on-device agent with additional security, model management, and model serving features. The SageMaker Edge Manager SDK is suitable for production deployment at scale. /sagemaker/faqs/;How is Amazon SageMaker Edge Manager related to AWS IoT Greengrass?;Amazon SageMaker Edge Manager and AWS IoT Greengrass can work together in your IoT solution. Once your ML model is packaged with SageMaker Edge Manager, you can use AWS IoT Greengrass’s OTA update feature to deploy the model package to your device. AWS IoT Greengrass allows you to monitor your IoT devices remotely, while SageMaker Edge Manager helps you monitor and maintain the ML models on the devices. /sagemaker/faqs/;How is Amazon SageMaker Edge Manager related to AWS Panorama? When should I use Amazon SageMaker Edge Manager versus AWS Panorama?;AWS offers the most breadth and depth of capabilities for running models on edge devices. We have services to support a wide range of use cases, including computer vision, voice recognition, and predictive maintenance. For companies looking to run computer vision on edge devices such as cameras and appliances, you can use AWS Panorama. Panorama offers ready-to-deploy computer vision applications for edge devices. It’s easy to get started with AWS Panorama by logging into the cloud console, specifying the model you would like to use in Amazon S3 or in SageMaker, and then writing business logic as a Python script. AWS Panorama compiles the model for the target device and creates an application package so it can be deployed to your devices with just a few clicks. In addition, independent software vendors who want to build their own custom applications can use the AWS Panorama SDK, and device manufacturers can use the Device SDK to certify their devices for AWS Panorama. /sagemaker/faqs/;What is Amazon SageMaker Neo?;Amazon SageMaker Neo enables ML models to train once and run anywhere in the cloud and at the edge.
SageMaker Neo automatically optimizes models built with popular deep learning frameworks that can be used to deploy on multiple hardware platforms. Optimized models run up to 25 times faster and consume less than a tenth of the resources of typical ML models. /sagemaker/faqs/;How do I get started with Amazon SageMaker Neo?;To get started with Amazon SageMaker Neo, log into the Amazon SageMaker console, choose a trained model, follow the example to compile models, and deploy the resulting model onto your target hardware platform. /sagemaker/faqs/;What are the major components of Amazon SageMaker Neo?;Amazon SageMaker Neo contains two major components: a compiler and a runtime. First, the Neo compiler reads models exported by different frameworks. It then converts the framework-specific functions and operations into a framework-agnostic intermediate representation. Next, it performs a series of optimizations. Then, the compiler generates binary code for the optimized operations and writes them to a shared object library. The compiler also saves the model definition and parameters into separate files. During execution, the Neo runtime loads the artifacts generated by the compiler—model definition, parameters, and the shared object library to run the model. /sagemaker/faqs/;Do I need to use Amazon SageMaker to train my model in order to use Amazon SageMaker Neo to convert the model?;No. You can train models elsewhere and use Neo to optimize them for Amazon SageMaker ML instances or AWS IoT Greengrass supported devices. /sagemaker/faqs/;Which models does Amazon SageMaker Neo support?;Currently, Amazon SageMaker Neo supports the most popular deep learning models that power computer vision applications and the most popular decision tree models used in Amazon SageMaker today. Neo optimizes the performance of AlexNet, ResNet, VGG, Inception, MobileNet, SqueezeNet, and DenseNet models trained in MXNet and TensorFlow, and classification and random cut forest models trained in XGBoost. /sagemaker/faqs/;In which AWS Regions is Amazon SageMaker Neo available?;To see a list of supported Regions, view the AWS Regional Services list. /sagemaker/faqs/;What are Amazon SageMaker Savings Plans?;Amazon SageMaker Savings Plans offer a flexible usage-based pricing model for Amazon SageMaker in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a one- or three-year term. Amazon SageMaker Savings Plans provide the most flexibility and help to reduce your costs by up to 64%. These plans automatically apply to eligible SageMaker ML instance usages, including SageMaker Studio notebooks, SageMaker On-Demand notebooks, SageMaker Processing, SageMaker Data Wrangler, SageMaker Training, SageMaker Real-Time Inference, and SageMaker Batch Transform regardless of instance family, size, or Region. For example, you can change usage from a CPU instance ml.c5.xlarge running in US East (Ohio) to an ml.Inf1 instance in US West (Oregon) for inference workloads at any time and automatically continue to pay the Savings Plans price. /sagemaker/faqs/;Why should I use Amazon SageMaker Savings Plans?;If you have a consistent amount of Amazon SageMaker instance usage (measured in $/hour) and use multiple SageMaker components or expect your technology configuration (such as instance family, or Region) to change over time, SageMaker Savings Plans make it simpler to maximize your savings while providing flexibility to change the underlying technology configuration based on application needs or new innovation. 
The Savings Plans rate applies automatically to all eligible ML instance usage with no manual modifications required. /sagemaker/faqs/;How can I get started with Amazon SageMaker Savings Plans?;You can get started with Savings Plans from AWS Cost Explorer in the AWS Management Console or by using the API/CLI. You can easily make a commitment to Savings Plans by using the recommendations provided in AWS Cost Explorer to realize the biggest savings. The recommended hourly commitment is based on your historical On-Demand usage and your choice of plan type, term length, and payment option. Once you sign up for a Savings Plan, your compute usage will automatically be charged at the discounted Savings Plans prices and any usage beyond your commitment will be charged at regular On-Demand rates. /sagemaker/faqs/;How do Savings Plans work with AWS Organizations/Consolidated Billing?;Savings Plans can be purchased in any account within an AWS Organization/Consolidated Billing family. By default, the benefit provided by Savings Plans is applicable to usage across all accounts within an AWS Organization/Consolidated Billing family. However, you can also choose to restrict the benefit of Savings Plans to only the account that purchased them. /transcribe/faqs/;What is Amazon Transcribe?;Amazon Transcribe is an AWS Artificial Intelligence (AI) service that makes it easy for you to convert speech to text. Using Automatic Speech Recognition (ASR) technology, you can use Amazon Transcribe for a variety of business applications, including transcription of voice-based customer service calls, generation of subtitles on audio/video content, and conduct (text-based) content analysis on audio/video content. /transcribe/faqs/;How does Amazon Transcribe interact with other AWS products?;Amazon Transcribe converts audio input into text, which opens the door for various text analytics applications on voice input. For instance, by using Amazon Comprehend on the converted text data from Amazon Transcribe, you can perform sentiment analysis or extract entities and key phrases. Similarly, by integrating with Amazon Translate and Amazon Polly, you can accept voice input in one language, translate it into another, and generate voice output, effectively enabling multilingual conversations. It is also possible to integrate Amazon Transcribe with Amazon Kendra or Amazon OpenSearch to index and perform text-based search across an audio/video library. To learn more, check out the Live Call Analytics and Agent Assist, Post Call Analytics, MediaSearch, or Content Analysis solution. /transcribe/faqs/;What else should I know before using Amazon Transcribe?;Amazon Transcribe is designed to handle a wide range of speech and acoustic characteristics, including variations in volume, pitch, and speaking rate. The quality and content of the audio signal (including but not limited to factors such as background noise, overlapping speakers, accented speech, or switches between languages within a single audio file) may affect the accuracy of service output. We are constantly updating the service to improve its ability to accommodate additional acoustic variation and content types. /transcribe/faqs/;How will developers access Amazon Transcribe?;The easiest way to get started is to submit a job using the console to transcribe an audio file. You can also call the service directly from the AWS Command Line Interface, or use one of the supported SDKs of your choice to integrate with your applications. 
Either way, you can start using Amazon Transcribe to generate automated transcripts for your audio files with just a few lines of code. /transcribe/faqs/;Does Amazon Transcribe support real-time transcriptions?;Yes. Amazon Transcribe allows you to open a bidirectional stream over HTTP2. You can send an audio stream to the service while receiving a text stream in return in real time. Please refer to the documentation page for more details. /transcribe/faqs/;What encoding does real-time transcription support?;Supported media types differ between batch transcriptions and streaming transcriptions, though lossless formats are recommended for both. Please refer to the documentation page for more details. /transcribe/faqs/;What languages does Amazon Transcribe support?;For information on language support, please refer to this documentation page. /transcribe/faqs/;What devices does Amazon Transcribe work with?;Amazon Transcribe for the most part is device agnostic. In general, it works with any device that includes an on-device microphone such as phones, PCs, tablets, and IoT devices (such as car audio systems). Amazon Transcribe API will be able to detect the quality of the audio stream being input at the device (8kHz VS 16kHz) and will appropriately select the acoustic models for converting speech to text. Furthermore, developers can call Amazon Transcribe API through their applications to access speech-to-text conversion capability. /transcribe/faqs/;Are there size restrictions on the audio content that Amazon Transcribe can process?;Amazon Transcribe service calls are limited to four hours (or 2 GB) per API call for our batch service. The streaming service can accommodate open connections up to four hours long. /transcribe/faqs/;What programming languages does Amazon Transcribe support?;Amazon Transcribe batch service supports .NET, Go, Java, JavaScript, PHP, Python, and Ruby. Amazon Transcribe real-time service supports Java SDK, Ruby SDK, and C++ SDK. Additional SDK support is coming. For more details, visit the Resources and documentation page. /transcribe/faqs/;Why do I see too many custom words in my output?;"Custom vocabularies are optimized for a small list of targeted words; larger vocabularies may lead to over-generation of custom words, especially when they contain words that are pronounced in a similar way. If you have a large list, please try reducing it to rare words and words that are actually expected to occur in your audio files. If you have a large vocabulary covering multiple use cases, split it into separate lists for different use cases. The words that are short and sound similar to many other words may lead to over-generation (too many custom words appearing in the output). It is preferable to combine these words with surrounding words and list them as hyphen-separated phrases. For example, the custom word “A.D.” could be included as part of a phrase such as “A.D.-converter.”" /transcribe/faqs/;There are two ways of giving pronunciations, IPA or SoundsLike fields in the custom vocabulary table. Which one is better?;IPA allows for more precise pronunciations. You should provide IPA pronunciations if you are able to generate IPA (such as from a lexicon that has IPA pronunciations or an online converter tool). /transcribe/faqs/;I'd like to use IPA but I'm not a linguistic expert. Is there an online tool I can use?;"Several standard dictionaries, such as the Oxford English Dictionary or the Cambridge Dictionary (including their online versions), provide pronunciations in IPA. 
There are also online converters (for example, easypronunciation.com or tophonetics.com for English); however, note that in most cases these tools are based on underlying dictionaries and may not generate correct IPA for some words, such as proper names. Amazon Transcribe does not endorse any third-party tools." /transcribe/faqs/;Do I need to use different IPA standards that are specific to a different accent of the same language (for example, US English versus British English)?;"You should use the IPA standard that is appropriate for the audio files you will be processing. For example, if you are expecting to process audio from British English speakers, use the British English pronunciation standard. The set of allowed IPA symbols may differ for the different languages and dialects supported by Amazon Transcribe; please make sure that your pronunciations contain only the allowed characters. Details on the IPA character sets can be found in the documentation: Custom Vocabularies" /transcribe/faqs/;How can I provide the pronunciation using SoundsLike field in the custom vocabulary table?;You can break a word or phrase down into smaller pieces and provide a pronunciation for each piece using the standard orthography of the language to mimic the way that the word sounds. For example, in English you can provide pronunciation hints for the phrase Los-Angeles like this: loss-ann-gel-es. The hint for the word Etienne would look like this: eh-tee-en. You separate each part of the hint with a hyphen (-). You can use any of the allowed characters for the input language. For more information, visit the Custom Vocabularies page. /transcribe/faqs/;How do two different ways of providing acronyms (with periods and without periods but with pronunciations) work?;If you use an acronym containing periods, the spelling pronunciation will be generated internally. If you do not use periods, please provide the pronunciation in the pronunciation field. For some acronyms, it is not obvious whether they have a spelling pronunciation or a word-like pronunciation. For example, NATO is often pronounced ‘n eɪ t oʊ’ (nay-toh) rather than ‘ɛn eɪ ti oʊ’ (NA. T. O.). For more information, visit the Custom Vocabularies page. /transcribe/faqs/;Where can I find examples of how to use custom pronunciations?;You can find sample input formats and examples in the documentation here. /transcribe/faqs/;What happens if I use the wrong IPA? If I am uncertain, am I better off not inputting any IPA?;"The system will use the pronunciation you provide; this should increase the likelihood of the word being recognized correctly if the pronunciation is correct and matches what was spoken. If you are not certain you are generating correct IPA, please run a comparison by processing your audio files with a vocabulary that contains your IPA pronunciations, and with a vocabulary that only contains the words (and, optionally, display-as forms). If you do not provide any pronunciations, the service will use an approximation, which may or may not work better than your input." /transcribe/faqs/;When using DisplayAs forms, can I display character sets unrelated to the original language being transcribed (for example, output “Street” as “街道“)?;Yes. While phrases may only use a restricted set of characters for the specific language, UTF-8 characters apart from \t (TAB) are permitted in the DisplayAs column. 
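Relating to the custom vocabulary entries above, the following is a hedged sketch that creates a custom vocabulary from a table stored in Amazon S3 and then references it in a batch transcription job, using the boto3 create_vocabulary and start_transcription_job calls. The bucket, file, vocabulary, and job names are hypothetical placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")

# Create a custom vocabulary from a tab-separated table (Phrase / IPA /
# SoundsLike / DisplayAs columns) stored in S3. Names below are hypothetical.
transcribe.create_vocabulary(
    VocabularyName="contact-center-terms",
    LanguageCode="en-US",
    VocabularyFileUri="s3://my-bucket/vocabularies/contact-center-terms.txt",
)

# (In practice, wait for the vocabulary to reach the READY state before use.)
# Then reference the vocabulary in a batch transcription job.
transcribe.start_transcription_job(
    TranscriptionJobName="call-2024-01-15-001",
    LanguageCode="en-US",
    Media={"MediaFileUri": "s3://my-bucket/audio/call-2024-01-15-001.wav"},
    Settings={"VocabularyName": "contact-center-terms"},
)
```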
/transcribe/faqs/;Is automatic content redaction or personally identifiable information (PII) redaction available with both batch and streaming APIs for Transcribe?;Yes, Amazon Transcribe supports automatic content redaction or PII redaction for both batch and streaming APIs. /transcribe/faqs/;What languages are supported for automatic content redaction / PII identification and redaction?;Please refer to the Amazon Transcribe documentation for information on the language availability of automatic content redaction / PII redaction. /transcribe/faqs/;Does Automatic content redaction also redact sensitive personal information from the source audio?;No, this feature does not remove sensitive personal information from the source audio. However, Amazon Transcribe Call Analytics removes sensitive personal information from both the transcripts and the source audio. Visit this link for more details on how call analytics can redact audio. You can also redact personal information from the source audio yourself using the start and end timestamps that are provided in the redacted transcripts for each instance of an identified PII utterance. Please refer to this audio redaction solution for standard Transcribe APIs. /transcribe/faqs/;Can I use automatic content redaction for redacting personal information from the existing text transcripts?;No, automatic content redaction only works on audio as an input. /transcribe/faqs/;What else should I know before using automatic content redaction?;Automatic content redaction is designed to identify and remove personally identifiable information (PII), but due to the predictive nature of machine learning, it may not identify and remove all instances of PII in a transcript generated by the service. You should review any output provided by Automatic content redaction to ensure it meets your needs. /transcribe/faqs/;Are there any differences between automatic content redaction for streaming and batch APIs?;Yes, there are two additional capabilities supported by automatic content redaction for the streaming API that are not supported by the batch API. You can decide to only identify PII and not redact when using content redaction with streaming API. Also you have the ability to identify or redact specific PII types with streaming API. For example, you can redact just the social security number and credit card information and keep other PII like names and email addresses. /transcribe/faqs/;Which APIs support automatic language identification?;Automatic language identification is currently supported for batch and streaming APIs. /transcribe/faqs/;What languages can Amazon Transcribe automatically identify?;Amazon Transcribe can identify any of the languages supported by the batch and streaming APIs. Go here for details on supported languages and language-specific features. /transcribe/faqs/;Does Amazon Transcribe identify multiple languages in the same audio file?;Amazon Transcribe supports multi-language ID for batch. See this link for more details. /transcribe/faqs/;Is there any way to restrict the list of languages to choose from for automatic language identification?;Yes, you can specify a list of languages that might be present in your media library. When you provide a list of languages, the identified language will be chosen from that list. If no languages are specified, the system will process the audio file against all the languages supported by Amazon Transcribe and select the most probable one. 
The accuracy of language identification is better when a select list of languages is provided. See this link for more details. /transcribe/faqs/;What does it cost?;Refer to the Amazon Transcribe Pricing page to learn more. /transcribe/faqs/;In what AWS Regions is Amazon Transcribe available?;Please refer to the AWS Global Infrastructure Region Table. Go here for additional details on Amazon Transcribe endpoints and quotas. /transcribe/faqs/;Are voice inputs processed by Amazon Transcribe stored, and how are they used by AWS?;Amazon Transcribe may store and use voice inputs processed by the service solely to provide and maintain the service and to improve and develop the quality of Amazon Transcribe and other Amazon machine-learning/artificial-intelligence technologies. Use of your content is important for continuous improvement of your Amazon Transcribe customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services, or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority, and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. You may opt out of having your content used to improve and develop the quality of Amazon Transcribe and other Amazon machine-learning/artificial-intelligence technologies by using an AWS Organizations opt-out policy. For information about how to opt out, see AI services opt-out policy. /transcribe/faqs/;Can I delete data and artifacts associated with transcription jobs stored by Amazon Transcribe?;Yes. You can use available Delete APIs to delete data and other artifacts associated with transcription jobs. If you have issues doing so, contact AWS support. /transcribe/faqs/;Who has access to my content that is processed and stored by Amazon Transcribe?;Only authorized employees will have access to your content that is processed by Amazon Transcribe. Your trust, privacy, and the security of your content are our highest priority, and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /transcribe/faqs/;Do I still own my content that is processed and stored by Amazon Transcribe?;You always retain ownership of your content, and we will only use your content with your consent. /transcribe/faqs/;What happens to my data used in training custom language models? Will I still own it?;When submitting text data that is used to train a dedicated model, you have ownership of the original text data and the generated custom model. The text data will neither be stored nor used to improve our general speech recognition engine. Models produced by using CLM are self-contained and accessible only to you.
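Relating to the entry above on restricting the candidate languages for automatic language identification, the following is a hedged boto3 sketch that starts a batch job with IdentifyLanguage enabled and a short LanguageOptions list. The bucket and job names are hypothetical placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")

# Start a batch job with automatic language identification, restricting the
# candidate languages to a short list to improve identification accuracy.
# Bucket and job names are hypothetical placeholders.
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0042",
    Media={"MediaFileUri": "s3://my-bucket/audio/support-call-0042.mp3"},
    IdentifyLanguage=True,
    LanguageOptions=["en-US", "es-US", "fr-CA"],
)

job = transcribe.get_transcription_job(TranscriptionJobName="support-call-0042")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```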
/transcribe/faqs/;Since the service will not be retaining my training data, are there any drawbacks or degradation to the transcription quality or overall service experience?;There will be no transcription quality degradation resulting from our service not storing your training data. Once the training data is used to actually produce a custom language model, the model itself becomes available for repeated use at your discretion. The original training set you uploaded is expunged from our systems. The only drawback is if you require technical support. Because we do not retain your original training data, we would not have convenient access to those assets or related intermediate artifacts, should you require the support team to investigate potential service issues. Support would still be available, but not as expedient, because we may need to ask for additional information from you. /transcribe/faqs/;How can I reuse the data for future model updates or improvements?;Since training data is not stored, the same data set and any additional data will have to be uploaded again to train new models. When there is an update to the base model provided by Amazon Transcribe, you will be notified. To take advantage of the latest base model, you should submit your data to train a new model. You will then have both the original custom model that you previously generated and also the new version to use. /transcribe/faqs/;How do I delete a model?;You can delete any custom language model that you generated, at your discretion. /transcribe/faqs/;Is the content processed by Amazon Transcribe moved outside the AWS region where I am using Amazon Transcribe?;Any content processed by Amazon Transcribe is encrypted and stored at rest in the AWS region where you are using Amazon Transcribe. Some portion of content processed by Amazon Transcribe may be stored in another AWS region solely in connection with the continuous improvement and development of your Amazon Transcribe customer experience and other Amazon machine-learning/artificial-intelligence technologies. If you opt out of having your content used to develop the quality of Amazon Transcribe and other Amazon machine-learning/artificial-intelligence technologies by contacting AWS Support, your content will not be stored in another AWS region. You can request deletion of voice inputs associated with your account by contacting AWS Support. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /transcribe/faqs/;Can I use Amazon Transcribe in connection with websites, programs or other applications that are directed or targeted to children under age 13 and subject to the Children’s Online Privacy Protection Act (COPPA)?;Yes, subject to your compliance with the Amazon Transcribe Service Terms, including your obligation to provide any required notices and obtain any required verifiable parental consent under COPPA, you may use Amazon Transcribe in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13.
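Relating to the entry above on deleting a model, the following is a hedged boto3 sketch that lists custom language models and deletes one, assuming the list_language_models and delete_language_model calls. The model name is a hypothetical placeholder.

```python
import boto3

transcribe = boto3.client("transcribe")

# List your custom language models, then delete one you no longer need.
# The model name below is a hypothetical placeholder.
for model in transcribe.list_language_models()["Models"]:
    print(model["ModelName"], model["ModelStatus"])

transcribe.delete_language_model(ModelName="contact-center-clm-v1")
```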
/transcribe/faqs/;How do I determine whether my website, program, or application is subject to COPPA?;For information about the requirements of COPPA and guidance for determining whether your website, program, or other application is subject to COPPA, please refer directly to the resources provided and maintained by the United States Federal Trade Commission. This site also contains information regarding how to determine whether a service is directed or targeted, in whole or in part, to children under age 13. /transcribe/faqs/;What is Amazon Transcribe Call Analytics?;Amazon Transcribe Call Analytics is a machine-learning-powered API that generates call transcripts and conversation insights from customer conversations by combining automatic speech recognition with natural language processing. /transcribe/faqs/;What can I do with Amazon Transcribe Call Analytics?;You can use Amazon Transcribe Call Analytics to transcribe customer calls and extract conversation insights, such as sentiment and call characteristics, to help improve customer experience and agent productivity. /transcribe/faqs/;Which languages does Amazon Transcribe Call Analytics support?;Please refer to the Amazon Transcribe documentation for the list of languages supported by Amazon Transcribe Call Analytics. /transcribe/faqs/;In which AWS Regions is Amazon Transcribe Call Analytics available?;Please refer to the AWS regional services documentation for information on AWS Region coverage for Amazon Transcribe Call Analytics. /transcribe/faqs/;What is Amazon Transcribe Medical?;Amazon Transcribe Medical is an automatic speech recognition (ASR) service that makes it easy for developers to add medical speech-to-text capabilities to their applications. Using Amazon Transcribe Medical, you can quickly and accurately transcribe medical dictation and conversational speech into text for a variety of purposes, such as recording physician notes or processing in downstream text analytics to extract meaningful insights. /transcribe/faqs/;What can I do with Amazon Transcribe Medical?;Amazon Transcribe Medical uses advanced machine learning models to accurately transcribe medical speech into text. Transcribe Medical can generate text transcripts that can be used to support a variety of use cases, ranging from clinical documentation workflows and drug safety monitoring (pharmacovigilance) to subtitling for telemedicine and even contact center analytics in the healthcare and life sciences domains. /transcribe/faqs/;Do I need to be an expert in automatic speech recognition (ASR) to use Amazon Transcribe Medical?;No, you don’t need any ASR or machine learning expertise to use Amazon Transcribe Medical. You only need to call Transcribe Medical’s API, and the service will handle the required machine learning in the backend to transcribe medical speech to text. /transcribe/faqs/;How do I get started with Amazon Transcribe Medical?;You can get started with Amazon Transcribe Medical from the AWS Management Console or by using the SDK. Please refer to this technical documentation page for details. /transcribe/faqs/;Which languages does Amazon Transcribe Medical support?;Amazon Transcribe Medical currently supports medical transcription in US English. /transcribe/faqs/;Which medical specialties does Amazon Transcribe Medical support?;Amazon Transcribe Medical supports transcription for an expanding list of primary care and specialty care areas. Visit our documentation for a full list of supported medical specialties. 
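For example, a minimal boto3 call to the Transcribe Medical API mentioned above might look like the following (the job, input, and output names are hypothetical placeholders):

    import boto3

    transcribe = boto3.client("transcribe")

    # Transcribe a primary-care dictation; the transcript is written to an
    # output bucket that you own.
    transcribe.start_medical_transcription_job(
        MedicalTranscriptionJobName="example-dictation-job",
        LanguageCode="en-US",                                   # US English is supported
        Media={"MediaFileUri": "s3://example-bucket/audio/dictation.wav"},
        OutputBucketName="example-medical-transcripts",
        Specialty="PRIMARYCARE",
        Type="DICTATION",                                       # or "CONVERSATION"
    )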
/transcribe/faqs/;In which AWS regions is Amazon Transcribe Medical available?;Please refer to the AWS regional services documentation for information on AWS Region coverage for Amazon Transcribe Medical. /transcribe/faqs/;Is Amazon Transcribe Medical HIPAA eligible?;Yes. /transcribe/faqs/;Is the content processed by Amazon Transcribe Medical used for any purpose other than to provide the service?;Amazon Transcribe Medical does not use content processed by the service for any reason other than to provide and maintain the service. Content processed by the service is not used to develop or improve the quality of Amazon Transcribe Medical or any other Amazon machine-learning/artificial-intelligence technologies. /transcribe/faqs/;Does Amazon Transcribe Medical learn over time?;Yes, Amazon Transcribe Medical uses machine learning and is continuously being trained to make it better for customer use cases. Amazon Transcribe Medical does not store or use customer data used with the service to train the models. /transcribe/faqs/;What else should I know before using the Amazon Transcribe Medical service?;Amazon Transcribe Medical is not a substitute for professional medical advice, diagnosis, or treatment. You and your end users are responsible for exercising your and their own discretion, experience, and judgment in determining the correctness, completeness, timeliness, and suitability of any information provided by Amazon Transcribe Medical. You and your end users are solely responsible for any decisions, advice, actions, and/or inactions based on the use of Amazon Transcribe Medical. /transcribe/faqs/;What functionality do custom language models provide today?;You can use custom language models (CLM) to train and develop language models that are domain-specific. CLM currently supports Australian English, British English, Hindi, US English, and US Spanish for batch transcriptions and US English for streaming transcriptions. CLM supports the simultaneous use of custom vocabulary for batch transcriptions. /transcribe/faqs/;How much and what type of training data do I need? How do I obtain the data? Does the data need to have a specific format?;"The text data should be relevant to the audio that will be transcribed using the custom model; it should contain as many of the domain-specific words, phrases, and word combinations as possible. We recommend using at least 100k and at most 10M words of running text. Text data resources can be obtained from any in-house or public sources (e.g., using text from customers’ websites). We recommend that each plain text file contain 200,000 words or more, but not exceed 1 GB in overall file size. The text should be in UTF-8, and use one sentence per line. Each sentence should contain punctuation. Users are responsible for spell-checking, removing formatting characters, and validating the encoding." /transcribe/faqs/;How do I use custom language models (CLM)?;To train a custom language model, customers simply supply the text data in an Amazon S3 bucket. Users can then use the Amazon Transcribe service console to load and process the data to train a custom language model. Training is fully automated and requires minimal intervention from the user. When the final custom model is ready, it is made available in the customer’s AWS account for transcribing domain-specific audio files. Moreover, customers can train multiple custom models to use for a variety of different use cases. /transcribe/faqs/;Are improvements guaranteed? 
Is it worth expending the effort of collecting text data?;Improvements are not guaranteed – the change in performance will depend on how closely the text data matches the audio, and on the amount of data provided. More data is generally better, but most importantly, the data should cover words and word sequences that are expected to occur in the audio files you intend to transcribe. Improvements to transcription accuracy will depend on the quality of the training data as well as the use case. In some scenarios, general benchmarking indicates as much as 10% to 15% relative accuracy improvement. /transcribe/faqs/;How long does model training take? When will I be able to use it?;Model training usually takes between 6 and 10 hours. The length of training time depends on how large the data set is. The custom model will be available directly after training has been completed. /transcribe/faqs/;How will I be able to use the model? How will I know whether it works better than the generic model provided by Amazon Transcribe?;The model will be made available in your account under a model ID assigned by you prior to the training process. In order to use the model, a flag with the model ID needs to be added to the transcription request. You should test the model on your audio files and compare the output against results obtained from the generic engine. /transcribe/faqs/;How many custom language models can I train? Can I have multiple models enabled concurrently for my account?;You may concurrently train up to 5 different models at any given time per AWS account. For each account, you can store a maximum of 10 models by default. If more are required, service limit increases can be made here. /transcribe/faqs/;Are custom acoustic models supported?;No. Custom acoustic models are not supported. Custom language models are built off of text data that is relevant to your use case or domain. /translate/faqs/;What is Amazon Translate?;Amazon Translate is a Neural Machine Translation (MT) service for translating text between supported languages. Powered by deep learning methods, the service provides high-quality, affordable, and customizable language translation, enabling developers to translate company and user-authored content, or build applications requiring support across multiple languages. The service can be used via an API, enabling either real-time or batch translation of text from the source language to the target language. /translate/faqs/;What are the most common use cases for Amazon Translate?;Amazon Translate is a great solution in cases where the volume of content is high, speed is critical, and a certain level of translation imperfection (usually minor) is acceptable. For example, if you need to extract insights from large volumes of text in many languages, enable customers to search your application in their language of choice, make user-authored content such as forums and support content accessible in languages other than the source, get the gist out of responses to questionnaires and surveys, or publish a first draft – you can use Amazon Translate’s raw output. /translate/faqs/;How can I use the service?;The easiest way to get started with Amazon Translate is to use the console to translate some text. You can also call the service directly from the AWS Command Line Interface, or use one of the SDKs in the programming language of your choice to integrate with your applications. 
Either way, you can start using Amazon Translate for multilingual text capabilities to translate text with just a few lines of code. /translate/faqs/;Does the service provide automatic source language detection?;Amazon Translate takes plain text input and language flags to indicate the language of the source text and desired target. If the source language is unknown, Amazon Translate will identify the source language using Amazon Comprehend behind the scenes, and report that language back along with the translation to the target language. /translate/faqs/;What kind of inputs does the service support?;Amazon Translate supports plain text input in UTF-8 format. /translate/faqs/;What does it cost?;Refer to the Amazon Translate pricing page to learn more. /translate/faqs/;What AWS regions are available for Amazon Translate?;Please refer to the AWS Global Infrastructure Region Table. /translate/faqs/;Are requests in which no translation occurs charged for?;Requests where the source language equals the target language (whether user designated or automatically identified), and when an error occurs and no translation is returned, are not charged for. Requests where the content is non-translatable (e.g., “&*^%((**&(^”) are charged for. /translate/faqs/;Are text inputs processed by Amazon Translate stored, and how are they used by AWS?;Amazon Translate may store and use text inputs processed by the service solely to provide and maintain the service and to improve and develop the quality of Amazon Translate and other Amazon machine-learning/artificial-intelligence technologies. Use of your content is important for continuous improvement of your Amazon Translate customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority, and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. You may opt out of having your content used to improve and develop the quality of Amazon Translate and other Amazon machine-learning/artificial-intelligence technologies by using an AWS Organizations opt-out policy. For information about how to opt out, see Managing AI services opt-out policy. /translate/faqs/;Who has access to my content that is processed and stored by Amazon Translate?;Only authorized employees will have access to your content that is processed by Amazon Translate. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /translate/faqs/;Is the content processed by Amazon Translate moved outside the AWS region where I am using Amazon Translate?;Any content processed by Amazon Translate is encrypted and stored at rest in the AWS region where you are using Amazon Translate. 
Some portion of content processed by Amazon Translate may be stored in another AWS region solely in connection with the continuous improvement and development of your Amazon Translate customer experience and other Amazon machine-learning/artificial-intelligence technologies. Your trust, privacy, and the security of your content are our highest priority, and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. Please see https://aws.amazon.com/compliance/data-privacy-faq/ for more information. /translate/faqs/;Can I use Amazon Translate in connection with websites, programs or other applications that are directed or targeted to children under age 13 and subject to the Children’s Online Privacy Protection Act (COPPA)?;Yes, subject to your compliance with the AWS Service Terms, including your obligation to provide any required notices and obtain any required verifiable parental consent under COPPA, you may use Amazon Translate in connection with websites, programs, or other applications that are directed or targeted, in whole or in part, to children under age 13. /translate/faqs/;How do I determine whether my website, program, or application is subject to COPPA?;For information about the requirements of COPPA and guidance for determining whether your website, program, or other application is subject to COPPA, please refer directly to the resources provided and maintained by the United States Federal Trade Commission. This site also contains information regarding how to determine whether a service is directed or targeted, in whole or in part, to children under age 13. /athena/faqs/;What is Amazon Athena?;"Athena is an interactive analytics service that makes it easier to analyze data in Amazon Simple Storage Service (S3) using Python or standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. You don’t even need to load your data into Athena; it works directly with data stored in Amazon S3. Amazon Athena for SQL uses Trino and Presto with full standard SQL support and works with various standard data formats, including CSV, JSON, Apache ORC, Apache Parquet, and Apache Avro. Athena for Apache Spark supports Python and allows you to use Apache Spark, an open-source, distributed processing system used for big data workloads. To get started, log in to the Athena Management Console and start interacting with your data using the query editor or notebooks." /athena/faqs/;What can I do with Athena?;With Athena, you can analyze data stored in S3 and 25-plus data sources, including on-premises data sources or other cloud systems. You can use Athena to run interactive analytics using ANSI SQL or Python without the need to aggregate or load the data into Athena. Athena can process unstructured, semi-structured, and structured datasets. Examples include CSV, JSON, Avro, or columnar data formats such as Parquet and ORC. Amazon Athena for SQL integrates with Amazon QuickSight for visualizing your data or creating dashboards. You can also use Athena to generate reports or explore data with business intelligence tools or SQL clients, connected with an ODBC or JDBC driver. 
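As an illustration of querying S3 data with Athena's SQL engine (the database, table, and result-bucket names below are hypothetical):

    import boto3

    athena = boto3.client("athena")

    # Run an ad-hoc ANSI SQL query directly against data in S3; results are
    # written to the query result location.
    response = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) AS requests FROM web_logs GROUP BY status",
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(response["QueryExecutionId"])   # poll get_query_execution with this ID for status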
/athena/faqs/;How do I get started with Athena?;To get started with Athena, log in to the AWS Management Console for Athena and create your schema by writing Data Definition Language (DDL) statements on the console or by using a create table wizard. You can then start querying data using a built-in query editor. Athena queries data directly from S3, so there’s no loading required. /athena/faqs/;How do you access Athena?;Amazon Athena for SQL can be accessed through the AWS Management Console, an API, or an ODBC or JDBC driver. You can programmatically run queries, add tables, or partitions using the ODBC or JDBC driver. /athena/faqs/;How does Athena for SQL store table definitions and schema?;Athena for SQL uses a managed AWS Glue Data Catalog to store information and schemas about the databases and tables that you create for your data stored in S3. In Regions where AWS Glue is available, you can upgrade to using the Data Catalog with Athena. In Regions where AWS Glue is not available, Athena uses an internal catalog. /athena/faqs/;Why should I upgrade to Data Catalog?;AWS Glue is a fully managed extract, transform, and load (ETL) service. AWS Glue has three main components: 1) a crawler that automatically scans your data sources, identifies data formats, and infers schemas, 2) a fully managed ETL service that allows you to transform and move data to various destinations, and 3) a Data Catalog that stores metadata information about databases and tables either stored in S3 or an ODBC- or JDBC-compliant data store. To use the benefits of AWS Glue, you must upgrade from using Athena’s internal Data Catalog to the Glue Data Catalog. /athena/faqs/;Is there a step-by-step process to upgrade to the Data Catalog?;Yes. For a step-by-step process, review the Amazon Athena User Guide: Integration with AWS Glue. /athena/faqs/;In which Regions is Athena available?;For details of Athena service availability by Region, review the AWS Regional Services List. /athena/faqs/;How do I create tables and schemas for my data on S3?;Athena uses Apache Hive DDL to define tables. You can run DDL statements using the Athena console, with an ODBC or JDBC driver, through the API, or using the Athena create table wizard. If you use the Data Catalog with Athena, you can also use AWS Glue crawlers to automatically infer schemas and partitions. An AWS Glue crawler connects to a data store, progresses through a prioritized list of classifiers to extract the schema of your data and other statistics, and then populates the Data Catalog with this metadata. Crawlers can run periodically to detect the availability of new data and changes to existing data, including table definition changes. Crawlers automatically add new tables, new partitions to existing tables, and new versions of table definitions. You can customize AWS Glue crawlers to classify your own file types. /athena/faqs/;Which data formats does Athena support?;Athena supports various data formats such as CSV, TSV, JSON, and text files, and also supports open-source columnar formats, such as ORC and Parquet. Athena also supports compressed data in Snappy, Zlib, LZO, and GZIP formats. You can improve performance and reduce your costs by compressing, partitioning, and using columnar formats. /athena/faqs/;Can I run any Hive Query on Athena?;Athena uses Hive only for DDL and creation/modification and deletion of tables or partitions. For a complete list of statements supported, review the Amazon Athena User Guide: DDL statements. 
Athena uses Trino and Presto when you run SQL queries on S3. You can run ANSI-compliant SQL SELECT statements to query your data on S3. /athena/faqs/;What is a SerDe?;SerDe stands for Serializer/Deserializer, which are libraries that tell Hive how to interpret data formats. Hive DDL statements require you to specify a SerDe so that the system knows how to interpret the data that you’re pointing to. Athena uses SerDes to interpret the data read from S3. The concept of SerDes in Athena is the same as the concept used in Hive. Amazon Athena supports the following SerDes: /athena/faqs/;Can I add my own SerDe to Athena?;Currently, you cannot add your own SerDe to Athena. We appreciate your feedback, so if there are any SerDes that you would like to see added, contact the Athena team at athena-feedback@amazon.com. /athena/faqs/;If I created Parquet/ORC files using Spark/Hive, will I be able to query them in Athena?;Yes, Parquet and ORC files created with Spark can be read in Athena. /athena/faqs/;Can I use QuickSight with Athena?;Yes. Athena integrates with QuickSight, so you can seamlessly visualize your data stored in S3. /athena/faqs/;What is a federated query?; Organizations often store data in a data source that meets the needs of their applications or business processes. These can include relational, key-value, document, in-memory, search, graph, time-series, and ledger databases in addition to storing data in an S3 data lake. Performing analytics on such diverse sources can be complex and time consuming because it typically requires learning new programming languages or database constructs and building complex pipelines to extract, transform, and duplicate data before it can be used for analysis. Athena reduces this complexity by allowing you to run SQL queries on the data where it is. You can use well-known SQL constructs to query data across multiple data sources for quick analysis, or use scheduled SQL queries to extract and transform data from multiple data sources and store them on S3 for further analysis. /athena/faqs/;Why should I use federated queries in Athena?; Athena provides built-in connectors to several popular data stores, including Amazon Redshift and Amazon DynamoDB. You can use these connectors to enable SQL analytics use cases on structured, semi-structured, object, graph, time-series, and other data storage types. For a list of supported sources, review the Amazon Athena User Guide: Using Athena Data Source Connectors. /athena/faqs/;Which data sources are supported?;You can also use Athena’s data connector SDK to create a custom data source connector and query it with Athena. Get started by reviewing the documentation and example connector implementation. /athena/faqs/;Which use cases does federated query enable?;Run on-demand analysis on data spread across multiple data stores using a single tool and SQL dialect. Visualize data in BI applications that push complex, multisource joins down to Athena’s distributed compute engine over ODBC and JDBC interfaces. Design self-service ETL pipelines and event-based data-processing workflows with Athena integration with AWS Step Functions. Unify diverse data sources to produce rich input features for ML model-training workflows. Develop user-facing data-as-a-product applications that surface insights across data mesh architectures. Support analytics use cases while your organization migrates on-premises sources to AWS. 
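As a sketch of what a federated query can look like once a data source connector is registered with Athena (the catalog, database, table, and column names below are hypothetical placeholders):

    import boto3

    athena = boto3.client("athena")

    # Join an S3 data-lake table with a table exposed through a registered
    # connector catalog (for example, a DynamoDB connector registered as "ddb").
    federated_sql = """
    SELECT o.order_id, o.order_total, c.segment
    FROM   example_db.orders AS o
    JOIN   "ddb"."default"."customers" AS c
           ON o.customer_id = c.customer_id
    """

    athena.start_query_execution(
        QueryString=federated_sql,
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )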
/athena/faqs/;Can I use federated query for ETL?;A data source connector is a piece of code that runs on AWS Lambda and translates between your target data source and Athena. When you use a data source connector to register a data store with Athena, you can run SQL queries on federated data stores. When a query runs on a federated source, Athena calls the Lambda function and tasks it with running the parts of your query that are specific to the federated source. To learn more, review the Amazon Athena User Guide: Using Amazon Athena Federated Query. /athena/faqs/;What is Amazon Athena for Apache Spark?;Use Athena for Apache Spark when you need an interactive, fully managed analytics experience and a tight integration with AWS services. You can use Spark to perform analytics in Athena using familiar, expressive languages such as Python and the growing ecosystem of Spark packages. You can also submit your Spark applications through Athena APIs or into simplified notebooks in the Athena console, and begin running Spark applications in under a second without setting up and tuning the underlying infrastructure. Like the SQL query capabilities of Athena, Athena offers a fully managed Spark experience and handles the performance tuning, machine configurations, and software patching automatically so that you do not need to worry about keeping current with version upgrades. Also, Athena is tightly integrated with other AWS analytics services, such as the Data Catalog. Therefore, you can create Spark applications on data in S3 data lakes by referencing tables from your Data Catalog. /athena/faqs/;Why should I use Athena for Apache Spark?;To get started with Athena for Apache Spark, you can start a notebook in the Athena console or start a session using the AWS Command Line Interface (CLI) or Athena API. In your notebook, you can write, run, and shut down Spark applications using Python. Athena also integrates with the Data Catalog, so you can work with any data source referenced in the catalog, including data directly in S3 data lakes. Using notebooks, you can query data from various sources, chain together multiple calculations, and visualize the results of your analyses. For your Spark applications, you can check the execution status and review logs and execution history in the Athena console. /athena/faqs/;How do I start working with Athena for Apache Spark?;Athena for Apache Spark is based on the stable Spark 3.2 release. As a fully managed engine, Athena will provide a custom build of Spark and will handle most Spark version updates automatically in a backward-compatible way without requiring your involvement. /athena/faqs/;How is Athena for Apache Spark priced?;When you start a Spark session, either by starting a notebook on the Athena console or by using the Athena API, two nodes are provisioned for your application: a notebook node that acts as the server for the notebook user interface, and a Spark driver node that coordinates the Spark application and communicates with all the Spark worker nodes. Athena will charge you for driver and worker nodes for the duration of the session. Amazon Athena provides notebooks on the console as a user interface for creating, submitting, and executing Apache Spark applications and offers them to you at no additional cost. Athena does not charge for the notebook nodes used during the Spark session. /athena/faqs/;When should I use Amazon EMR versus Athena?;Amazon EMR goes far beyond just running SQL queries. 
With Amazon EMR, you can run various scale-out data processing tasks for applications, such as machine learning (ML), graph analytics, data transformation, streaming data, and virtually anything that you can code. Use Amazon EMR if you use custom code to process and analyze large datasets with the latest big data processing frameworks, such as Apache HBase, Spark, Hadoop, or Presto. Amazon EMR gives you full control over the configuration of your clusters and the software installed on them. /athena/faqs/;Can I use Athena to query data that I process using Amazon EMR?;Yes, Athena supports many of the same data formats as Amazon EMR. The Athena Data Catalog is compatible with the Hive metastore. If you're using Amazon EMR and already have a Hive metastore, you can run your DDL statements on Athena, and then you can start querying your data right away without impacting your Amazon EMR jobs. /athena/faqs/;How does federated query in Athena SQL relate to other AWS services?;Federated query in Athena provides you with a unified way to run SQL queries across various relational, nonrelational, and custom data sources. /emr/faqs/;What is Amazon EMR?;Amazon EMR is the industry-leading cloud big data platform for data processing, interactive analysis, and machine learning using open source frameworks such as Apache Spark, Apache Hive, and Presto. With EMR you can run petabyte-scale analysis at less than half of the cost of traditional on-premises solutions and over 1.7x faster than standard Apache Spark. /emr/faqs/;Why should I use Amazon EMR?;Amazon EMR lets you focus on transforming and analyzing your data without having to worry about managing compute capacity or open-source applications, and saves you money. Using EMR, you can instantly provision as much or as little capacity as you like on Amazon EC2 and set up scaling rules to manage changing compute demand. You can set up CloudWatch alerts to notify you of changes in your infrastructure and take actions immediately. If you use Kubernetes, you can also use EMR to submit your workloads to Amazon EKS clusters. Whether you use EC2 or EKS, you benefit from EMR’s optimized runtimes, which speed your analysis and save both time and money. /emr/faqs/;How can I deploy and manage Amazon EMR?;You can deploy your workloads to EMR using Amazon EC2, Amazon Elastic Kubernetes Service (EKS), or on-premises AWS Outposts. You can run and manage your workloads with the EMR Console, API, SDK, or CLI and orchestrate them using Amazon Managed Workflows for Apache Airflow (MWAA) or AWS Step Functions. For an interactive experience you can use EMR Studio or SageMaker Studio. /emr/faqs/;How can I get started with Amazon EMR?;"To sign up for Amazon EMR, click the “Sign Up Now” button on the Amazon EMR detail page http://aws.amazon.com/emr. You must be signed up for Amazon EC2 and Amazon S3 to access Amazon EMR; if you are not already signed up for these services, you will be prompted to do so during the Amazon EMR sign-up process. After signing up, please refer to the Amazon EMR documentation, which includes our Getting Started Guide – the best place to get going with the service." /emr/faqs/;How reliable is Amazon EMR?;Please refer to the Amazon EMR Service Level Agreement. /emr/faqs/;Where can I find code samples?;Check out the sample code in these Articles and Tutorials. If you use EMR Studio, you can explore the features using a set of notebook examples. 
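A minimal boto3 sketch of launching an EMR cluster on EC2 through the API mentioned above (the cluster name, release label, instance types, and log bucket are hypothetical, and the default EMR roles are assumed to exist in the account):

    import boto3

    emr = boto3.client("emr")

    # Launch a small Spark cluster; EMR provisions the EC2 instances for you.
    response = emr.run_job_flow(
        Name="example-spark-cluster",
        ReleaseLabel="emr-6.9.0",                 # any supported release label
        Applications=[{"Name": "Spark"}],
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        LogUri="s3://example-emr-logs/",
        JobFlowRole="EMR_EC2_DefaultRole",        # default EC2 instance profile
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])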
/emr/faqs/;How do I develop a data processing application?;You can develop, visualize, and debug data science and data engineering applications written in R, Python, Scala, and PySpark in Amazon EMR Studio. You can also develop a data processing job on your desktop, for example, using Eclipse, Spyder, PyCharm, or RStudio, and run it on Amazon EMR. Additionally, you can select JupyterHub or Zeppelin in the software configuration when spinning up a new cluster and develop your application on Amazon EMR using one or more instances. /emr/faqs/;What is the benefit of using the Command Line Tools or APIs vs. AWS Management Console?;The Command Line Tools or APIs provide the ability to programmatically launch and monitor progress of running clusters, to create additional custom functionality around clusters (such as sequences with multiple processing steps, scheduling, workflow, or monitoring), or to build value-added tools or applications for other Amazon EMR customers. In contrast, the AWS Management Console provides an easy-to-use graphical interface for launching and monitoring your clusters directly from a web browser. /emr/faqs/;Can I add steps to a cluster that is already running?;Yes. Once the job is running, you can optionally add more steps to it via the AddJobFlowSteps API. The AddJobFlowSteps API will add new steps to the end of the current step sequence. You may want to use this API to implement conditional logic in your cluster or for debugging. /emr/faqs/;Can I be notified when my cluster is finished?;You can sign up for Amazon SNS and have the cluster post to your SNS topic when it is finished. You can also view your cluster progress on the AWS Management Console or you can use the Command Line, SDK, or APIs to get a status on the cluster. /emr/faqs/;Can I terminate my cluster when my steps are finished?;Yes. You can terminate your cluster automatically when all your steps finish by turning the auto-terminate flag on. /emr/faqs/;What OS versions are supported with Amazon EMR?;Amazon EMR 5.30.0 and later, and the Amazon EMR 6.x series are based on Amazon Linux 2. You can also specify a custom AMI that you create based on Amazon Linux 2. This allows you to perform sophisticated pre-configuration for virtually any application. For more information, see Using a Custom AMI. /emr/faqs/;Does Amazon EMR support third-party software packages?;Yes. You can use Bootstrap Actions to install third-party software packages on your cluster. You can also upload statically compiled executables using the Hadoop distributed cache mechanism. EMR 6.x supports Hadoop 3, which allows the YARN NodeManager to launch containers either directly on the EMR cluster host or inside a Docker container. Please see our documentation to learn more. /emr/faqs/;What tools are available to me for debugging?;There are several tools you can use to gather information about your cluster to help determine what went wrong. If you use Amazon EMR Studio, you can launch tools like Spark UI and YARN Timeline Service to simplify debugging. From the Amazon EMR Console, you can get off-cluster access to persistent application user interfaces for Apache Spark, Tez UI and the YARN timeline server, several on-cluster application user interfaces, and a summary view of application history in the Amazon EMR console for all YARN applications. You can also connect to your master node using SSH and view cluster instances via these web interfaces. For more information, see our documentation. 
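For example, a short sketch of adding a step to a running cluster via the AddJobFlowSteps API mentioned above (the cluster ID and script path are hypothetical placeholders):

    import boto3

    emr = boto3.client("emr")

    # Append a Spark step to the end of the running cluster's step sequence.
    emr.add_job_flow_steps(
        JobFlowId="j-EXAMPLECLUSTERID",
        Steps=[
            {
                "Name": "example-spark-step",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": ["spark-submit", "s3://example-bucket/jobs/etl_job.py"],
                },
            }
        ],
    )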
/emr/faqs/;What is EMR Studio?;EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark. /emr/faqs/;What can I do with EMR Studio?;With EMR Studio, you can log in directly to fully managed Jupyter notebooks using your corporate credentials without logging into the AWS console, start notebooks in seconds, get onboarded with sample notebooks, and perform your data exploration. You can also customize your environment by loading custom kernels and python libraries from notebooks. EMR Studio kernels and applications run on EMR clusters, so you get the benefit of distributed data processing using the performance optimized Amazon EMR runtime for Apache Spark. You can collaborate with peers by sharing notebooks via GitHub and other repositories. You can also run Notebooks directly as continuous integration and deployment pipelines. You can pass different parameter values to a notebook. You can also chain notebooks, and integrate notebooks into scheduled workflows using workflow orchestration services like Apache Airflow. Further, you can debug clusters and jobs using as few clicks as possible with native applications interfaces such as the Spark UI and the YARN Timeline service. /emr/faqs/;How is EMR Studio different from EMR Notebooks?;There are five main differences. /emr/faqs/;How is EMR Studio different from SageMaker Studio?;You can use both EMR Studio and SageMaker Studio with Amazon EMR. EMR Studio provides an integrated development environment (IDE) that makes it easy for you to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark. Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all machine learning development steps. SageMaker Studio gives you complete access, control, and visibility into each step required to build, train, and deploy models. You can quickly upload data, create new notebooks, train and tune models, move back and forth between steps to adjust experiments, compare results, and deploy models to production all in one place, making you much more productive. /emr/faqs/;How do I get started with EMR Studio?;Your administrator must first set up an EMR Studio. When you receive a unique sign-on URL for your Amazon EMR Studio from your administrator, you can log in to the Studio directly using your corporate credentials. /emr/faqs/;Do I need to log in to the AWS Management Console to use EMR Studio?;No. After your administrator sets up an EMR Studio and provides the Studio access URL, your team can log in using corporate credentials. There’s no need to log in to the AWS Management Console. In an EMR Studio, your team can perform tasks and access resources configured by your administrator. /emr/faqs/;What identity providers are supported for the single sign-on experience in EMR Studio?;AWS IAM Identity Center (successor to AWS SSO) is the single sign-on service provider for EMR Studio. The list of identity providers supported by AWS IAM can be found in our documentation. /emr/faqs/;What is a Workspace in EMR Studio?;Workspaces help you organize Jupyter Notebooks. All notebooks in a Workspace are saved to the same Amazon S3 location and run on the same cluster. You can also link a code repository like a GitHub repository to all notebooks in a workspace. 
You can create and configure a Workspace before attaching it to a cluster, but you should connect to a cluster before running a notebook. /emr/faqs/;In EMR Studio, can I create a workspace or open a workspace without a cluster?;Yes, you can create or open a workspace without attaching it to a cluster. You only need to connect the workspace to a cluster when you want to execute code. EMR Studio kernels and applications are executed on EMR clusters, so you get the benefit of distributed data processing using the performance optimized Amazon EMR runtime for Apache Spark. /emr/faqs/;Can I install custom libraries to use in my notebook code?;All Spark queries run on your EMR cluster, so you need to install all runtime libraries that your Spark application uses on the cluster. You can easily install notebook-scoped libraries within a notebook cell. You can also install Jupyter Notebook kernels and Python libraries on a cluster master node either within a notebook cell or while connected using SSH to the master node of the cluster. For more information, see the documentation. Additionally, you can use a bootstrap action or a custom AMI to install required libraries when you create a cluster. For more information, see Create Bootstrap Actions to Install Additional Software and Using a Custom AMI in the Amazon EMR Management Guide. /emr/faqs/;Where are the notebooks saved?;The Workspace, together with the notebook files it contains, is saved automatically at regular intervals in the ipynb file format to the Amazon S3 location that you specify when you create the workspace. The notebook file has the same name as your notebook in Amazon EMR Studio. /emr/faqs/;How do I use version control with my notebook? Can I use repositories like GitHub?;You can associate Git-based repositories with your Amazon EMR Studio notebooks to save your notebooks in a version controlled environment. /emr/faqs/;In EMR Studio, what compute resources can I run notebooks on?;With EMR Studio, you can run notebook code on Amazon EMR running on Amazon Elastic Compute Cloud (Amazon EC2) or Amazon EMR on Amazon Elastic Kubernetes Service (Amazon EKS). You can attach notebooks to either existing or new clusters. You can create EMR clusters in two ways in EMR Studio: create a cluster using a pre-configured cluster template via AWS Service Catalog, or create a cluster by specifying the cluster name, number of instances, and instance type. /emr/faqs/;Can I re-attach a workspace to a different compute resource in EMR Studio?;Yes, you can open your workspace, choose the EMR Clusters icon on the left, choose the Detach button, select a cluster from the Select cluster drop-down list, and then choose the Attach button. /emr/faqs/;Where do I find all my workspaces in EMR Studio?;In EMR Studio, you can choose the Workspaces tab on the left and view all workspaces created by you and other users in the same AWS account. /emr/faqs/;What are the IAM policies needed to use EMR Studio?;Each EMR Studio needs permissions to interoperate with other AWS services. To give your EMR Studios the necessary permissions, your administrators need to create an EMR Studio service role with the provided policies. They also need to specify a user role for EMR Studio that defines Studio-level permissions. When they add users and groups from AWS IAM Identity Center (successor to AWS SSO) to EMR Studio, they can assign a session policy to a user or group to apply fine-grained permission controls. Session policies help administrators refine user permissions without the need to create multiple IAM roles. 
For more information about session policies, see Policies and Permissions in the AWS Identity and Access Management User Guide. /emr/faqs/;Are there any limitations on the EMR clusters I can attach my workspace to in EMR Studio?;Yes. High Availability (Multi-master) clusters, Kerberized clusters, and AWS Lake Formation clusters are currently not supported. /emr/faqs/;What is the cost of using Amazon EMR Studio?;Amazon EMR Studio is provided at no additional charge to you. Applicable charges for Amazon Simple Storage Service storage and for Amazon EMR clusters apply when you use EMR Studio. For more information about pricing options and details, see Amazon EMR pricing. /emr/faqs/;What can I do with EMR Notebooks?;You can use EMR Notebooks to build Apache Spark applications and run interactive queries on your EMR cluster with minimal effort. Multiple users can create serverless notebooks directly from the console, attach them to an existing shared EMR cluster, or provision a cluster directly from the console and immediately start experimenting with Spark. You can detach notebooks and re-attach them to new clusters. Notebooks are auto-saved to S3 buckets, and you can retrieve saved notebooks from the console to resume work. EMR Notebooks are prepackaged with the libraries found in the Anaconda repository, allowing you to import and use these libraries in your notebook code and use them to manipulate data and visualize results. Further, EMR Notebooks have integrated Spark monitoring capabilities that you can use to monitor the progress of your Spark jobs and debug code from within the notebook. /emr/faqs/;How do I get started with EMR Notebooks?;To get started with EMR Notebooks, open the EMR console and choose Notebooks in the navigation pane. From there, just choose Create Notebook, enter a name for your notebook, choose an EMR cluster or instantly create a new one, provide a service role for the notebook to use, choose an S3 bucket where you want to save your notebook files, and then click Create Notebook. After the notebook shows a Ready status, choose Open to start the notebook editor. /emr/faqs/;What is the cost of using EMR Notebooks?;EMR Notebooks are provided at no additional charge to you. You will be charged as usual for the attached EMR clusters in your account. You can find out more about the pricing for your cluster by visiting https://aws.amazon.com/emr/pricing/ /emr/faqs/;How do I get my data into Amazon S3?;Amazon EMR provides several ways to get data onto a cluster. The most common way is to upload the data to Amazon S3 and use the built-in features of Amazon EMR to load the data onto your cluster. You can use the Distributed Cache feature of Hadoop to transfer files from a distributed file system to the local file system. For more details, see the documentation. Alternatively, if you are migrating data from on premises to the cloud, you can use one of the Cloud Data Migration services from AWS. /emr/faqs/;How do I get logs for terminated clusters?;Hadoop system logs as well as user logs will be placed in the Amazon S3 bucket which you specify when creating a cluster. Persistent application UIs run off-cluster; Spark History Server, Tez UI, and YARN timeline server logs are available for 30 days after an application terminates. /emr/faqs/;Do you compress logs?;No. At this time Amazon EMR does not compress logs as it moves them to Amazon S3. /emr/faqs/;Can I load my data from the internet or somewhere other than Amazon S3?;Yes. 
You can use AWS Direct Connect to establish a private dedicated network connection to AWS. If you have large amounts of data, you can use AWS Import/Export. For more details refer to our documentation. /emr/faqs/;Can Amazon EMR estimate how long it will take to process my input data?;No. As each cluster and input data is different, we cannot estimate your job duration. /emr/faqs/;How much does Amazon EMR cost?;Amazon EMR pricing is simple and predictable: you pay a per-second rate for every second you use, with a one-minute minimum. You can estimate your bill using the AWS Pricing Calculator. Usage for other Amazon Web Services including Amazon EC2 is billed separately from Amazon EMR. /emr/faqs/;When does billing of my Amazon EMR cluster begin and end?;Amazon EMR billing commences when the cluster is ready to execute steps. Amazon EMR billing ends when you request to shut down the cluster. For more details on when Amazon EC2 begins and ends billing, please refer to the Amazon EC2 Billing FAQ. /emr/faqs/;Where can I track my Amazon EMR, Amazon EC2 and Amazon S3 usage?;You can track your usage in the Billing & Cost Management Console. /emr/faqs/;How do you calculate the Normalized Instance Hours displayed on the console ?;On the AWS Management Console, every cluster has a Normalized Instance Hours column that displays the approximate number of compute hours the cluster has used, rounded up to the nearest hour. /emr/faqs/;Does Amazon EMR support Amazon EC2 On-Demand, Spot, and Reserved Instances?;Yes. Amazon EMR seamlessly supports On-Demand, Spot, and Reserved Instances. Click here to learn more about Amazon EC2 Reserved Instances. Click here to learn more about Amazon EC2 Spot Instances. Click here to learn more about Amazon EC2 Capacity Reservations. /emr/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /emr/faqs/;How do I prevent other people from viewing my data during cluster execution?;Amazon EMR starts your instances in two Amazon EC2 security groups, one for the master and another for the other cluster nodes. The master security group has a port open for communication with the service. It also has the SSH port open to allow you to SSH into the instances, using the key specified at startup. The other nodes start in a separate security group, which only allows interaction with the master instance. By default both security groups are set up to not allow access from external sources including Amazon EC2 instances belonging to other customers. Since these are security groups within your account, you can reconfigure them using the standard EC2 tools or dashboard. Click here to learn more about EC2 security groups. Additionally, you can configure Amazon EMR block public access in each region that you use to prevent cluster creation if a rule allows public access on any port that you don't add to a list of exceptions. /emr/faqs/;How secure is my data?;"Amazon S3 provides authentication mechanisms to ensure that stored data is secured against unauthorized access. Unless the customer who is uploading the data specifies otherwise, only that customer can access the data. Amazon EMR customers can also choose to send data to Amazon S3 using the HTTPS protocol for secure transmission. In addition, Amazon EMR always uses HTTPS to send data between Amazon S3 and Amazon EC2. 
For added security, customers may encrypt the input data before they upload it to Amazon S3 (using any common data encryption tool); they then need to add a decryption step to the beginning of their cluster when Amazon EMR fetches the data from Amazon S3." /emr/faqs/;Can I get a history of all EMR API calls made on my account for security or compliance auditing?;Yes. AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Learn more about CloudTrail at the AWS CloudTrail detail page, and turn it on via CloudTrail's AWS Management Console. /emr/faqs/;How do I control what EMR users can access in Amazon S3?;By default, Amazon EMR application processes use EC2 instance profiles when they call other AWS services. For multi-tenant clusters, Amazon EMR offers three options to manage user access to Amazon S3 data. /emr/faqs/;How does Amazon EMR make use of Availability Zones?;Amazon EMR launches all nodes for a given cluster in the same Amazon EC2 Availability Zone. Running a cluster in the same zone improves performance of the jobs flows. By default, Amazon EMR chooses the Availability Zone with the most available resources in which to run your cluster. However, you can specify another Availability Zone if required. You also have the option to optimize your allocation for lowest-priced on demand instances, optimal spot capacity, or use On-Demand Capacity Reservations. /emr/faqs/;In what Regions is this Amazon EMR available?;For a list of the supported Amazon EMR AWS regions, please visit the AWS Region Table for all AWS global infrastructure. /emr/faqs/;Is Amazon EMR supported in AWS Local Zones?;EMR supports launching clusters in the Los Angeles AWS Local Zone. You can use EMR in the US West (Oregon) region to launch clusters into subnets associated with the Los Angeles AWS Local Zone. /emr/faqs/;Which Region should I select to run my clusters?;When creating a cluster, typically you should select the Region where your data is located. /emr/faqs/;Can I use EU data in a cluster running in the US region and vice versa?;Yes, you can. If you transfer data from one region to the other you will be charged bandwidth charges. For bandwidth pricing information, please visit the pricing section on the EC2 detail page. /emr/faqs/;What is different about the AWS GovCloud (US) region?;The AWS GovCloud (US) region is designed for US government agencies and customers. It adheres to US ITAR requirements. In GovCloud, EMR does not support spot instances or the enable-debugging feature. The EMR Management Console is not yet available in GovCloud. /emr/faqs/;What is an Amazon EMR Cluster?; An Amazon EMR cluster has three types of nodes: /emr/faqs/;What are node types in a cluster?;Master node: A node that manages the cluster by running software components to coordinate the distribution of data and tasks among other nodes for processing. The master node tracks the status of tasks and monitors the health of the cluster. Every cluster has a master node, and it's possible to create a single-node cluster with only the master node. Core node: A node with software components that run tasks and store data in the Hadoop Distributed File System (HDFS) on your cluster. Multi-node clusters have at least one core node. Task node: A node with software components that only runs tasks and does not store data in HDFS. Task nodes are optional. 
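To make the node types above concrete, here is an illustrative Instances configuration, defining a separate instance group per role, that could be passed to the RunJobFlow API (the instance types and counts are hypothetical, not recommendations from this FAQ):

    # Illustrative InstanceGroups configuration distinguishing the three node types;
    # this dictionary would be passed as the "Instances" argument of run_job_flow.
    instances_config = {
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",      # runs tasks and stores HDFS data
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
            {"Name": "task", "InstanceRole": "TASK",      # runs tasks only, no HDFS storage
             "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    }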
/emr/faqs/;What is a cluster step?;A cluster step is a user-defined unit of work, such as a Hadoop or Spark application, that you submit to the cluster; steps run in the order in which they are submitted. A step passes through the following states: PENDING – The step is waiting to be run. RUNNING – The step is currently running. COMPLETED – The step completed successfully. CANCELLED – The step was cancelled before running because an earlier step failed or the cluster was terminated before it could run. FAILED – The step failed while running. /emr/faqs/;What are different cluster states?;STARTING – The cluster starts by configuring EC2 instances. BOOTSTRAPPING – Bootstrap actions are being executed on the cluster. RUNNING – A step for the cluster is currently being run. WAITING – The cluster is currently active, but has no steps to run. TERMINATING – The cluster is in the process of shutting down. TERMINATED – The cluster was shut down without error. TERMINATED_WITH_ERRORS – The cluster was shut down with errors. /emr/faqs/;How can I launch a cluster?;You can launch a cluster from the AWS Management Console by choosing Create cluster, or programmatically using the AWS CLI, SDKs, or API. You can start as many clusters as you like. When you get started, you are limited to 20 instances across all your clusters. If you need more instances, complete the Amazon EC2 instance request form. Once your Amazon EC2 limit is raised, the new limit will be automatically applied to your Amazon EMR clusters. /emr/faqs/;How can I terminate a cluster?;At any time, you can terminate a cluster via the AWS Management Console by selecting a cluster and clicking the “Terminate” button. Alternatively, you can use the TerminateJobFlows API. If you terminate a running cluster, any results that have not been persisted to Amazon S3 will be lost and all Amazon EC2 instances will be shut down. /emr/faqs/;How does Amazon EMR use Amazon EC2 and Amazon S3?;You can upload your input data and a data processing application into Amazon S3. Amazon EMR then launches a number of Amazon EC2 instances that you specified. The service begins the cluster execution, pulling the input data from Amazon S3 using the S3 URI scheme into the launched Amazon EC2 instances. Once the cluster is finished, Amazon EMR transfers the output data to Amazon S3, where you can then retrieve it or use it as input in another cluster. /emr/faqs/;How is a computation done in Amazon EMR?;Amazon EMR uses the Hadoop data processing engine to conduct computations implemented in the MapReduce programming model. The customer implements their algorithm in terms of map() and reduce() functions. The service starts a customer-specified number of Amazon EC2 instances, consisting of one master node and multiple other nodes. Amazon EMR runs Hadoop software on these instances. The master node divides input data into blocks, and distributes the processing of the blocks to the other nodes. Each node then runs the map function on the data it has been allocated, generating intermediate data. The intermediate data is then sorted and partitioned and sent to processes which apply the reducer function to it locally on the nodes. Finally, the output from the reducer tasks is collected in files. A single “cluster” may involve a sequence of such MapReduce steps. /emr/faqs/;Which Amazon EC2 instance types does Amazon EMR support?;See the EMR pricing page for details on the latest available instance types and pricing per region. /emr/faqs/;How long will it take to run my cluster?;The time to run your cluster will depend on several factors including the type of your cluster, the amount of input data, and the number and type of Amazon EC2 instances you choose for your cluster. /emr/faqs/;If the master node in a cluster goes down, can Amazon EMR recover it?;Yes. 
You can launch an EMR cluster (version 5.23 or later) with three master nodes and support high availability of applications like YARN Resource Manager, HDFS Name Node, Spark, Hive, and Ganglia. Amazon EMR automatically fails over to a standby master node if the primary master node fails or if critical processes, like Resource Manager or Name Node, crash. Since the master node is no longer a potential single point of failure, you can run your long-lived EMR clusters without interruption. In the event of a failover, Amazon EMR automatically replaces the failed master node with a new master node with the same configuration and bootstrap actions. /emr/faqs/;If another node goes down in a cluster, can Amazon EMR recover from it?;Yes. Amazon EMR is fault tolerant for node failures and continues job execution if a node goes down. Amazon EMR will also provision a new node when a core node fails. However, Amazon EMR will not replace nodes if all nodes in the cluster are lost. /emr/faqs/;Can I SSH onto my cluster nodes?;Yes. You can SSH onto your cluster nodes and execute Hadoop commands directly from there. If you need to SSH into a specific node, you have to first SSH to the master node, and then SSH into the desired node. /emr/faqs/;What are Amazon EMR Bootstrap Actions?;Bootstrap Actions are a feature in Amazon EMR that provides users a way to run custom setup prior to the execution of their cluster. Bootstrap Actions can be used to install software or configure instances before running your cluster. You can read more about bootstrap actions in EMR's Developer Guide. /emr/faqs/;How can I use Bootstrap Actions?;You can write a Bootstrap Action script in any language already installed on the cluster instance including Bash, Perl, Python, Ruby, C++, or Java. There are several pre-defined Bootstrap Actions available. Once the script is written, you need to upload it to Amazon S3 and reference its location when you start a cluster. Please refer to the Developer Guide for details on how to use Bootstrap Actions. /emr/faqs/;How do I configure Hadoop settings for my cluster?;The EMR default Hadoop configuration is appropriate for most workloads. However, based on your cluster’s specific memory and processing requirements, it may be appropriate to tune these settings. For example, if your cluster tasks are memory-intensive, you may choose to use fewer tasks per core and reduce your job tracker heap size. For this situation, a pre-defined Bootstrap Action is available to configure your cluster on startup. See the Configure Memory Intensive Bootstrap Action in the Developer’s Guide for configuration details and usage instructions. An additional predefined bootstrap action is available that allows you to customize your cluster settings to any value of your choice. See the Configure Hadoop Bootstrap Action in the Developer’s Guide for usage instructions. /emr/faqs/;Can I modify the number of nodes in a running cluster?;Yes. Nodes can be of two types: (1) core nodes, which both host persistent data using Hadoop Distributed File System (HDFS) and run Hadoop tasks, and (2) task nodes, which only run Hadoop tasks. While a cluster is running you may increase the number of core nodes and you may either increase or decrease the number of task nodes. This can be done through the API, Java SDK, or through the command line client. Please refer to the Resizing Running Clusters section in the Developer’s Guide for details on how to modify the size of your running cluster. You can also use EMR Managed Scaling. 
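A sketch of resizing a running cluster programmatically, as described above (the cluster ID and instance group ID are hypothetical placeholders):

    import boto3

    emr = boto3.client("emr")

    # Increase the task instance group of a running cluster to eight instances.
    emr.modify_instance_groups(
        ClusterId="j-EXAMPLECLUSTERID",
        InstanceGroups=[
            {"InstanceGroupId": "ig-EXAMPLETASKGROUP", "InstanceCount": 8}
        ],
    )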
/emr/faqs/;When would I want to use core nodes versus task nodes?;As core nodes host persistent data in HDFS and cannot be removed, core nodes should be reserved for the capacity that is required until your cluster completes. As task nodes can be added or removed and do not contain HDFS, they are ideal for capacity that is only needed on a temporary basis. You can launch task instance fleets on Spot Instances to increase capacity while minimizing costs. /emr/faqs/;Why would I want to modify the number of nodes in my running cluster?;There are several scenarios where you may want to modify the number of nodes in a running cluster. If your cluster is running slower than expected, or timing requirements change, you can increase the number of core nodes to increase cluster performance. If different phases of your cluster have different capacity needs, you can start with a small number of core nodes and increase or decrease the number of task nodes to meet your cluster’s varying capacity requirements. You can also use EMR Managed Scaling. /emr/faqs/;Can I automatically modify the number of nodes between cluster steps?;Yes. You may include a predefined step in your workflow that automatically resizes a cluster between steps that are known to have different capacity needs. As all steps are guaranteed to run sequentially, this allows you to set the number of nodes that will execute a given cluster step. /emr/faqs/;How can I allow other IAM users to access my cluster?;To create a new cluster that is visible to all IAM users within the EMR CLI: Add the --visible-to-all-users flag when you create the cluster. For example: elastic-mapreduce --create --visible-to-all-users. Within the Management Console, simply select “Visible to all IAM Users” on the Advanced Options pane of the Create cluster Wizard. /emr/faqs/;What Amazon EMR resources can I tag?;You can add tags to an active Amazon EMR cluster. An Amazon EMR cluster consists of Amazon EC2 instances, and a tag added to an Amazon EMR cluster will be propagated to each active Amazon EC2 instance in that cluster. You cannot add, edit, or remove tags from terminated clusters or terminated Amazon EC2 instances which were part of an active cluster. /emr/faqs/;Does Amazon EMR tagging support resource-based permissions with IAM Users?;No, Amazon EMR does not support resource-based permissions by tag. However, it is important to note that tags propagated to Amazon EC2 instances behave as normal Amazon EC2 tags. Therefore, an IAM Policy for Amazon EC2 will act on tags propagated from Amazon EMR if they match conditions in that policy. /emr/faqs/;How many tags can I add to a resource?;You can add up to ten tags to an Amazon EMR cluster. /emr/faqs/;Do my Amazon EMR tags on a cluster show up on each Amazon EC2 instance in that cluster? If I remove a tag on my Amazon EMR cluster, will that tag automatically be removed from each associated EC2 instance?;Yes, Amazon EMR propagates the tags added to a cluster to that cluster's underlying EC2 instances. If you add a tag to an Amazon EMR cluster, it will also appear on the related Amazon EC2 instances. Likewise, if you remove a tag from an Amazon EMR cluster, it will also be removed from its associated Amazon EC2 instances. However, if you are using IAM policies for Amazon EC2 and plan to use Amazon EMR's tagging functionality, you should make sure that permission to use the Amazon EC2 tagging APIs CreateTags and DeleteTags is granted. 
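A minimal sketch of resizing a running cluster and managing its tags through the API, assuming the boto3 SDK; the cluster ID, instance group ID, and tag values are placeholders.

    import boto3

    emr = boto3.client("emr")

    # Grow a task instance group on a running cluster (core groups can only grow).
    emr.modify_instance_groups(
        ClusterId="j-EXAMPLE12345",
        InstanceGroups=[{"InstanceGroupId": "ig-EXAMPLE", "InstanceCount": 10}],
    )

    # Add tags to the active cluster; EMR propagates them to the cluster's EC2 instances.
    emr.add_tags(
        ResourceId="j-EXAMPLE12345",
        Tags=[{"Key": "team", "Value": "analytics"}, {"Key": "env", "Value": "prod"}],
    )

    # Removing a tag from the cluster also removes it from the associated instances.
    emr.remove_tags(ResourceId="j-EXAMPLE12345", TagKeys=["env"])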
/emr/faqs/;How do I get my tags to show up in my billing statement to segment costs?;Select the tags you would like to use in your AWS billing report here. Then, to see the cost of your combined resources, you can organize your billing information based on resources that have the same tag key values. /emr/faqs/;How do I tell which Amazon EC2 instances are part of an Amazon EMR cluster?;An Amazon EC2 instance associated with an Amazon EMR cluster will have two system tags: /emr/faqs/;Can I edit tags directly on the Amazon EC2 instances?;Yes, you can add or remove tags directly on Amazon EC2 instances that are part of an Amazon EMR cluster. However, we do not recommend doing this, because Amazon EMR’s tagging system will not sync the changes you make to an associated Amazon EC2 instance directly. We recommend that tags for Amazon EMR clusters be added and removed from the Amazon EMR console, CLI, or API to ensure that the cluster and its associated Amazon EC2 instances have the correct tags. /emr/faqs/;What is Amazon EMR Serverless?;Amazon EMR Serverless is a new deployment option in Amazon EMR that allows you to run big data frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters. /emr/faqs/;Who can use EMR Serverless?;Data engineers, analysts, and scientists can use EMR Serverless to build applications using open-source frameworks such as Apache Spark and Apache Hive. They can use these frameworks to transform data, run interactive SQL queries, and run machine learning workloads. /emr/faqs/;What open-source frameworks does EMR Serverless support?;EMR Serverless currently supports Apache Spark and Apache Hive engines. If you want support for additional frameworks such as Apache Presto or Apache Flink, please send a request to emr-feedback@amazon.com. /emr/faqs/;In what Regions is EMR Serverless available?;EMR Serverless is available in the following AWS Regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon). /emr/faqs/;What is the difference between Amazon EMR Serverless, Amazon EMR on EC2, Amazon EMR on AWS Outposts, and Amazon EMR on EKS?;Amazon EMR provides the option to run applications on EC2-based clusters, EKS clusters, Outposts, or Serverless. EMR on EC2 clusters are suitable for customers who need maximum control and flexibility over running their application. With EMR on EC2 clusters, customers can choose the EC2 instance type to meet application-specific performance needs, customize the Linux AMI, customize the EC2 instance configuration, customize and extend open-source frameworks, and install additional custom software on cluster instances. Amazon EMR on EKS is suitable for customers who want to standardize on EKS to manage clusters across applications or use different versions of an open-source framework on the same cluster. Amazon EMR on AWS Outposts is for customers who want to run EMR closer to their data center, within an Outpost. EMR Serverless is suitable for customers who want to avoid managing and operating clusters and prefer to run applications using open-source frameworks. /emr/faqs/;What EMR releases are supported in EMR Serverless?;EMR Serverless supports EMR release labels 6.6 and above. 
With EMR Serverless, you get the same performance-optimized EMR runtime available in other EMR deployment options, which is 100% API-compatible with standard open-source frameworks. /emr/faqs/;What is an application and how can I create it?;With Amazon EMR Serverless, you can create one or more EMR Serverless applications that use open-source analytics frameworks. To create an application, you must specify the following attributes: 1) the Amazon EMR release version for the open-source framework version you want to use and 2) the specific analytics engines that you want your application to use, such as Apache Spark 3.1 or Apache Hive 3.0. After you create an application, you can start running your data processing jobs or interactive requests to your application. /emr/faqs/;What is a worker?;An EMR Serverless application internally uses workers to execute your workloads. When a job is submitted, EMR Serverless computes the resources needed for the job and schedules workers. EMR Serverless breaks down your workloads into tasks, provisions and sets up workers with the open-source framework, and decommissions them when the job completes. EMR Serverless automatically scales workers up or down depending on the workload and parallelism required at every stage of the job, thereby removing the need for you to estimate the number of workers required to run your workloads. The default size of these workers is based on your application type and Amazon EMR release version. You can override these sizes when scheduling a job run. /emr/faqs/;Can I specify the minimum and maximum number of workers that my jobs can use?;With EMR Serverless, you can specify the minimum and maximum number of concurrent workers and the vCPU and memory configuration for workers. You can also set the maximum capacity limits on the application’s resources to control costs. /emr/faqs/;When should I create multiple applications?;Consider creating multiple applications when doing any of the following: /emr/faqs/;Can I change default properties of an EMR Serverless application after it is created?;Yes, you can modify application properties such as initial capacity, maximum capacity limits, and network configuration using EMR Studio or the update-application API/CLI call. /emr/faqs/;When should I create an application with a pre-initialized pool of workers?;An EMR Serverless application without pre-initialized workers takes up to 120 seconds to determine the required resources and provision them. EMR Serverless provides an optional feature that keeps workers initialized and ready to respond in seconds, effectively creating an on-call pool of workers for an application. This feature is called pre-initialized capacity and can be configured for each application by setting the initial-capacity parameter of an application. /emr/faqs/;How do I submit and manage jobs on EMR Serverless?;You can submit and manage EMR Serverless jobs using EMR Studio, SDK/CLI, or our Apache Airflow connectors. /emr/faqs/;How can I include dependencies with jobs that I want to run on EMR Serverless?;For PySpark, you can package your Python dependencies using virtualenv and pass the archive file using the --archives option, which enables your workers to use the dependencies during the job run. For Scala or Java, you can package your dependencies as jars, upload them to Amazon S3, and pass them using the --jars or --packages options with your EMR Serverless job run. 
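A minimal sketch of creating an EMR Serverless application with a pre-initialized pool of workers and submitting a PySpark job that ships its Python dependencies, assuming the boto3 SDK; the application name, role ARN, S3 paths, and worker sizes are placeholders.

    import boto3

    serverless = boto3.client("emr-serverless")

    # Create a Spark application with a small pre-initialized (on-call) pool of workers.
    app = serverless.create_application(
        name="example-spark-app",
        releaseLabel="emr-6.6.0",
        type="SPARK",
        initialCapacity={
            "DRIVER": {"workerCount": 1,
                       "workerConfiguration": {"cpu": "2vCPU", "memory": "4GB"}},
            "EXECUTOR": {"workerCount": 4,
                         "workerConfiguration": {"cpu": "4vCPU", "memory": "8GB"}},
        },
    )

    # Submit a PySpark job, passing a virtualenv archive so workers can use its packages.
    serverless.start_job_run(
        applicationId=app["applicationId"],
        executionRoleArn="arn:aws:iam::123456789012:role/EMRServerlessJobRole",
        jobDriver={"sparkSubmit": {
            "entryPoint": "s3://my-bucket/scripts/etl.py",
            "sparkSubmitParameters": "--archives s3://my-bucket/deps/pyspark_env.tar.gz#environment",
        }},
    )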
/emr/faqs/;Do EMR Serverless Spark and Hive applications support user-defined functions (UDFs)?;EMR Serverless supports Java-based UDFs. You can package them as jars, upload them to S3, and use them in your Spark or HiveQL scripts. /emr/faqs/;What worker configurations does EMR Serverless support?;Refer to the Supported Worker Configuration for details. /emr/faqs/;Can I cancel an EMR Serverless job in case it is running longer than expected?;Yes, you can cancel a running EMR Serverless job from EMR Studio or by calling the cancelJobRun API/CLI. /emr/faqs/;Can I add extra storage to the workers?;EMR Serverless comes with 20 GB of ephemeral storage on each worker. If you need more storage, you can customize this during job submission from 20 GB up to 200 GB. /emr/faqs/;How do I monitor Amazon EMR Serverless applications and job runs?;Amazon EMR Serverless application- and job-level metrics are published every 30 seconds to Amazon CloudWatch. /emr/faqs/;How do I launch Spark UI and Tez UI with EMR Serverless?;From EMR Studio, you can select a running or completed EMR Serverless job and then click on the Spark UI or Tez UI button to launch them. /emr/faqs/;Can I access resources in my Amazon Virtual Private Cloud (VPC)?;Yes, you can configure Amazon EMR Serverless applications to access resources in your own VPC. See the Configuring VPC access section in the documentation to learn more. /emr/faqs/;What kind of isolation can I get with an EMR Serverless application?;Each EMR Serverless application is isolated from other applications and runs on a secure Amazon VPC. /emr/faqs/;Where can I view and manage my account’s vCPU quota?; You can view, manage, and request a quota increase in the AWS Service Quotas Management console. For more information, see Requesting a Quota Increase in the Service Quotas User Guide. /emr/faqs/;What happens when I exceed my account-level vCPU quota?; If you exceed your account-level vCPU quota, EMR Serverless will stop provisioning new capacity. If you try creating a new application after exceeding the quota, the application creation will fail with an error message “Application failed to create as you have exceeded the maximum concurrent vCPUs per account service quota. You can view and manage your service quota using AWS Service Quotas console.” If you submit a new job after exceeding the quota, the job will fail with an error message: “Job failed as you have exceeded the maximum concurrent vCPUs per account service quota. You can view and manage your service quota using AWS Service Quotas console.” Please refer to the documentation for more details. /emr/faqs/;How does Amazon EMR Serverless help save costs on big data deployments?;There are three ways in which Amazon EMR Serverless can help you save costs. First, there is no operational overhead of managing, securing, and scaling clusters. Second, EMR Serverless automatically scales workers up at each stage of processing your job and scales them down when they’re not required. You’re charged for aggregate vCPU, memory, and storage resources used from the time a worker starts running until it stops, rounded up to the nearest second with a 1-minute minimum. For example, your job may require 10 workers for the first 10 minutes of processing the job and 50 workers for the next 5 minutes. With fine-grained automatic scaling, you incur costs for only 10 workers for 10 minutes and 50 workers for 5 minutes. As a result, you don’t have to pay for underutilized resources. 
Third, EMR Serverless includes the Amazon EMR performance-optimized runtime for Apache Spark, Apache Hive, and Presto. The Amazon EMR runtime is API-compatible and over twice as fast as standard open-source analytics engines, so your jobs run faster and incur fewer compute costs. /emr/faqs/;Is the EMR Serverless cost comparable to Amazon EMR on EC2 Spot Instances?;It depends on your current EMR on EC2 cluster utilization. If you are running EMR clusters using EC2 On-Demand Instances, EMR Serverless will offer a lower total cost of ownership (TCO) if your current cluster utilization is less than 70%. If you are using the EC2 Savings Plans, EMR Serverless will offer a lower TCO if your current cluster utilization is less than 50%. And if you use EC2 Spot Instances, Amazon EMR on EC2 and Amazon EMR on EKS will continue to be more cost effective. /emr/faqs/;Are pre-initialized workers charged even after jobs have run to completion?;Yes, if you do not stop workers after a job is complete, you will incur charges on pre-initialized workers. /emr/faqs/;Who should I reach out to with questions, comments, and feature requests?;Please send us an email at emr-feedback@amazon.com with your inquiries and valuable feedback on EMR Serverless. /emr/faqs/;Why should I use Amazon EMR on Amazon EKS?; Amazon EMR on Amazon EKS decouples the analytics job from the services and infrastructure that are processing the job by using a container-based approach. You can focus more on developing your application and less on operating the infrastructure as EMR on EKS dynamically configures the infrastructure based on the compute, memory, and application dependencies of the job. Infrastructure teams can centrally manage a common compute platform to consolidate EMR workloads with other container-based applications. Multiple teams, organizations, or business units can simultaneously and independently run their analytics processes on the shared infrastructure while maintaining isolation enabled by Amazon EKS and AWS Identity and Access Management (IAM). /emr/faqs/;What are the benefits for users already running Apache Spark on Amazon EKS?; If you already run Apache Spark on Amazon EKS, you can get all of the benefits of Amazon EMR like automatic provisioning and scaling and the ability to use the latest fully managed versions of open source big data analytics frameworks. You get an optimized EMR runtime for Apache Spark with 3X faster performance than open source Apache Spark on EKS, a serverless data science experience with EMR Studio and Apache Spark UI, fine-grained data access control, and support for data encryption. /emr/faqs/;How does this feature relate to and work with other AWS services?; Amazon EKS provides customers with a managed experience for running Kubernetes on AWS, enabling you to add compute capacity using EKS Managed Node Groups or using AWS Fargate. EMR jobs running on EKS can access their data on Amazon S3, while monitoring and logging can be integrated with Amazon CloudWatch. AWS Identity and Access Management (IAM) enables role-based access control both for jobs and for access to dependent AWS services. /emr/faqs/;How does Amazon EMR on Amazon EKS work?; Register your EKS cluster with Amazon EMR. Then, submit your Spark jobs to EMR using the CLI, SDK, or EMR Studio. EMR requests the Kubernetes scheduler on EKS to schedule Pods. For each job that you run, EMR on EKS creates a container. 
The container contains an Amazon Linux 2 base image with security updates, plus Apache Spark and associated dependencies to run Spark, plus your application-specific dependencies. Each job runs in a pod. The pod downloads this container and starts to execute it. If the container’s image has been previously deployed to the node, then a cached image is used and the download is bypassed. Sidecar containers, such as log or metric forwarders, can be deployed to the pod. The pod terminates after the job terminates. After the job terminates, you can still debug it using Spark UI. /emr/faqs/;What AWS compute services can I use with Amazon EMR on EKS?; You can use Amazon EMR on EKS with either Amazon Elastic Compute Cloud (EC2) instances, which support broader customization options, or the serverless AWS Fargate service to process your analytics without having to provision or manage EC2 instances. Application availability can automatically improve by spreading your analytics jobs across multiple AWS Availability Zones (AZs). /emr/faqs/;How do I get started with Amazon EMR on EKS?; To get started, register your Amazon EKS cluster with Amazon EMR. After registration, reference the registered cluster in your job definition (which includes application dependencies and framework parameters) and submit your workloads to EMR for execution. With EMR on EKS, you can use different open source big data analytics frameworks, versions, and configurations for analytics applications running on the same EKS cluster. For more information, refer to our documentation. /emr/faqs/;How do I submit analytics applications to EMR on EKS?;You submit analytics applications using AWS SDK / CLI, Amazon EMR Studio notebooks, and workflow orchestration services like Apache Airflow and Amazon Managed Workflows for Apache Airflow. Amazon EMR on EKS’s Airflow plugin can be downloaded from S3. To install the emr-containers plugin for Amazon Managed Workflows for Apache Airflow, see our documentation. /emr/faqs/;Can I use the same EMR release for EMR clusters and applications running on EKS?;Yes, you can use the same EMR release for applications that run on EMR clusters and applications that run on EKS. /emr/faqs/;Can I see EMR applications in EKS?; Yes, EMR applications show up in the EKS console as Kubernetes jobs and deployments. /emr/faqs/;Can I isolate multiple jobs or applications from each other on the same EKS cluster?; Yes, Kubernetes natively provides job isolation. Additionally, each job can be configured to run with its own execution-role to limit which AWS resources the job can access. /emr/faqs/;How does EMR on EKS help reduce costs?; EMR on EKS reduces cost by removing the need to run dedicated clusters. You can use a common, shared EKS cluster to run analytics applications that require different versions of open source big data analytics frameworks. You can also use the same EKS cluster to run your other containerized non-EMR applications. /emr/faqs/;How is Amazon EMR on EKS priced?; Amazon EMR on EKS pricing is calculated based on the vCPU and memory resources requested for the pod(s) that are running your job at per-minute granularity. For pricing information, visit the Amazon EMR pricing page. /emr/faqs/;What are Pod Templates?;EMR on EKS enables you to use Kubernetes Pod Templates to customize where and how your job runs in the Kubernetes cluster. Kubernetes Pod Templates provide a reusable design pattern or boilerplate for declaratively expressing how a Kubernetes pod should be deployed to your EKS cluster. 
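A minimal sketch of submitting a Spark job to a registered EKS cluster through the SDK, assuming the boto3 SDK; the virtual cluster ID, role ARN, release label, and S3 paths are placeholders.

    import boto3

    containers = boto3.client("emr-containers")

    # Submit a Spark job to the EMR virtual cluster that maps to the registered EKS cluster.
    containers.start_job_run(
        name="example-spark-job",
        virtualClusterId="abc123example",
        executionRoleArn="arn:aws:iam::123456789012:role/EMRContainersJobRole",
        releaseLabel="emr-6.6.0-latest",
        jobDriver={"sparkSubmitJobDriver": {
            "entryPoint": "s3://my-bucket/scripts/etl.py",
            "sparkSubmitParameters": "--conf spark.executor.instances=4",
        }},
    )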
/emr/faqs/;Why should I use Pod Templates with my EMR on EKS job?;Pod Templates can provide more control over how your jobs are scheduled in Kubernetes. For example, you can reduce cost by running Spark driver tasks on Amazon EC2 Spot instances or only allowing jobs requiring SSDs to run on SSD-enabled instances. Pod Templates with EMR on EKS enable fine-grained control over how resources are allocated and let you run custom containers alongside your job, resulting in reduced cost and increased performance of your jobs. /emr/faqs/;What is a Pod?;Pods are one or more containers, with shared network and storage resources, that run on a Kubernetes worker node. EMR on EKS uses pods to run your job by scheduling Spark driver and executor tasks as individual pods. /emr/faqs/;What are some use-cases for Pod Templates?;You can optimize both performance and cost by using Pod Templates. For example, you can save cost by defining jobs to run on EC2 Spot instances or increase performance by scheduling them on GPU or SSD-backed EC2 instances. Customers often need fine-grained workload control in order to support multiple teams or organizations on EKS, and Pod Templates simplify running jobs on team-designated node groups. In addition, you can deploy sidecar containers to run initialization code for your job or run common monitoring tools like Fluentd for log forwarding. /emr/faqs/;Can I specify a different Pod Template for my Spark drivers and Spark executors?;You can, but are not required to, provide individual templates for drivers and executors. For example, you can configure nodeSelectors and tolerations to designate Spark drivers to run only on Amazon EC2 On-Demand instances and Spark executors to run only on AWS Fargate instances. In your job submission, configure the spark properties spark.kubernetes.driver.podTemplateFile and spark.kubernetes.executor.podTemplateFile to reference the template’s S3 location. /emr/faqs/;What template values can I specify?;You can specify both Pod Level Fields (including Volumes, Pod Affinity, Init Containers, Node Selector) and Spark Main Container level fields (including EnvFrom, Working Directory, Lifecycle, Container Volume Mounts). The full list of allowed values is provided in our documentation. /emr/faqs/;Where can I find more information about Pod Templates?;Amazon EKS already supports Pod Templates; for more information about Amazon EMR on EKS’ support for Pod Templates, refer to our documentation and the Apache Spark Pod Template documentation. /emr/faqs/;Why should I use Custom Images with EMR on EKS?;Without Custom Images, managing application dependencies with EMR on EKS required you to reference them at runtime from an external storage service such as Amazon S3. Now, with custom image support, you can create a self-contained Docker image with the application and its dependent libraries. You no longer need to maintain, update, or version externally stored libraries and your big data applications can be developed using the same DevOps processes that your other containerized applications are using. Just point at your image and run it. /emr/faqs/;What is a custom image?;A Custom Image is an EMR on EKS provided Docker image (“base image”) that contains the EMR runtime and connectors to other AWS services that you modify to include application dependencies or additional packages that your application requires. The new image can be stored in either Amazon Elastic Container Registry (ECR) or your own Docker container registry. 
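A minimal sketch, building on the EMR on EKS job submission above, of pointing the driver and executor pod templates at S3 locations via the Spark properties named in this FAQ; the S3 paths are placeholders.

    # Pass this string as sparkSubmitParameters in the sparkSubmitJobDriver so drivers and
    # executors are scheduled according to the nodeSelectors/tolerations in each template.
    spark_submit_parameters = (
        "--conf spark.kubernetes.driver.podTemplateFile=s3://my-bucket/templates/driver.yaml "
        "--conf spark.kubernetes.executor.podTemplateFile=s3://my-bucket/templates/executor.yaml"
    )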
/emr/faqs/;What are some use-cases for Custom Images?;Customers can create a base image, add their corporate standard libraries, and then store it in Amazon Elastic Container Registry (Amazon ECR). Other customers can customize the image to include their application-specific dependencies. The resulting immutable image can be vulnerability scanned and deployed to test and production environments. Examples of dependencies you can add include the Java SDK, Python, or R libraries; you can add them to the image directly, just as with other containerized applications. /emr/faqs/;What is included in the base image?;The same performance-optimized Spark runtime and components are included in the base image as you get when you submit a job without a custom image. /emr/faqs/;Where can I find more information about Custom Images?;Apache Spark Documentation. /emr/faqs/;What is AWS Outposts?;AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. Using EMR on Outposts, you can deploy, manage, and scale EMR clusters on-premises, just as you would in the cloud. /emr/faqs/;When should I use EMR on Outposts?;If you have existing on-premises Apache Hadoop deployments and are struggling to meet capacity demands during peak utilization, you can use EMR on Outposts to augment your processing capacity without having to move data to the cloud. EMR on Outposts enables you to launch a new EMR cluster on-premises in minutes and connect to existing datasets in on-premises HDFS storage to meet this demand and maintain SLAs. /emr/faqs/;What EMR versions are supported with EMR on Outposts?;The minimum supported Amazon EMR release is 5.28.0. /emr/faqs/;What EMR applications are available when using Outposts?;All applications in EMR release 5.28.0 and above are supported. See our release notes for a full list of EMR applications. /emr/faqs/;What EMR features are not supported with EMR on Outposts?;EC2 Spot instances are not available in AWS Outposts. When creating a cluster, you must choose EC2 On-Demand instances. A subset of EC2 instance types are available in AWS Outposts. For a list of supported instance types with EMR and Outposts, please see our documentation. When adding Amazon EBS volumes to instances, only the General Purpose SSD (GP2) storage type is supported in AWS Outposts. /emr/faqs/;Can I use EMR clusters in an Outpost to read data from my existing on-premises Apache Hadoop clusters?;Workloads running on EMR in an Outpost can read and write data in existing HDFS storage, allowing you to easily integrate with existing on-premises Apache Hadoop deployments. This gives you the ability to augment your data processing needs using EMR without the need to migrate data. /emr/faqs/;Can I choose where to store my data?;When an EMR cluster is launched in an Outpost, all of the compute and data storage resources are deployed in your Outpost. Data written locally to the EMR cluster is stored on local EBS volumes in your Outpost. Tools such as Apache Hive, Apache Spark, Presto, and other EMR applications can each be configured to write data locally in an Outpost, to an external file system such as an existing HDFS installation, or to Amazon S3. Using EMR on Outposts, you have full control over storing your data in Amazon S3 or locally in your Outpost. /emr/faqs/;Do any EMR features need data uploaded to S3?;When launching an EMR cluster in an Outpost, you have the option to enable logging. 
When logging is enabled, cluster logs will be uploaded to the S3 bucket that you specify. These logs are used to simplify debugging clusters after they have been terminated. When disabled, no logs will be uploaded to S3. /emr/faqs/;What happens if my Outpost is out of capacity?;When launching a cluster in an Outpost, EMR will attempt to launch the number and type of EC2 On-Demand instances you’ve requested. If there is no capacity available on the Outpost, EMR will receive an insufficient capacity notice. EMR will retry for a period of time, and if no capacity becomes available, the cluster will fail to launch. The same process applies when resizing a cluster. If there is insufficient capacity on the Outpost for the requested instance types, EMR will be unable to scale up the cluster. You can easily set up Amazon CloudWatch alerts to monitor your capacity utilization on Outposts and receive alerts when instance capacity is lower than a desired threshold. /emr/faqs/;What happens if network connectivity is interrupted between my Outpost and AWS?;If network connectivity between your Outpost and its AWS Region is lost, your clusters in Outposts will continue to run, but there will be actions you will be unable to take until connectivity is restored. Until connectivity is restored, you cannot create new clusters or take new actions on existing clusters. In case of instance failures, the instance will not be automatically replaced. Also, actions such as adding steps to a running cluster, checking step execution status, and sending CloudWatch metrics and events will be delayed until connectivity is restored. /emr/faqs/;What can I do now that I could not do before?;"Most EC2 instances have fixed storage capacity attached to an instance, known as an ""instance store"". You can now add EBS volumes to the instances in your Amazon EMR cluster, allowing you to customize the storage on an instance. The feature also allows you to run Amazon EMR clusters on EBS-Only instance families such as the M4 and C4." /emr/faqs/;What are the benefits of adding EBS volumes to an instance running on Amazon EMR?;You will benefit by adding EBS volumes to an instance in the following scenarios: /emr/faqs/;Can I persist my data on an EBS volume after a cluster is terminated?;Currently, Amazon EMR will delete volumes once the cluster is terminated. If you want to persist data outside the lifecycle of a cluster, consider using Amazon S3 as your data store. /emr/faqs/;What kind of EBS volumes can I attach to an instance?;Amazon EMR allows you to use different EBS Volume Types: General Purpose SSD (GP2), Magnetic and Provisioned IOPS (SSD). /emr/faqs/;What happens to the EBS volumes once I terminate my cluster?;Amazon EMR will delete the volumes once the EMR cluster is terminated. /emr/faqs/;Can I use an EBS with instances that already have an instance store?;Yes, you can add EBS volumes to instances that have an instance store. /emr/faqs/;Can I attach an EBS volume to a running cluster?;No, currently you can only add EBS volumes when launching a cluster. /emr/faqs/;Can I snapshot volumes from a cluster?;The EBS API allows you to Snapshot a cluster. However, Amazon EMR currently does not allow you to restore from a snapshot. /emr/faqs/;Can I use encrypted EBS volumes?;You can encrypt EBS root device and storage volumes using AWS KMS as your key provider. For more information, see Local Disk Encryption. 
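A minimal sketch of an instance-group definition that attaches a General Purpose SSD (GP2) EBS volume to each core node, to be passed in the Instances argument of run_job_flow; the instance type, counts, and volume size are illustrative placeholders.

    # Core instance group with one 100 GB gp2 volume per instance; the m4 family is
    # EBS-only, as noted above.
    core_group = {
        "Name": "core",
        "InstanceRole": "CORE",
        "InstanceType": "m4.xlarge",
        "InstanceCount": 2,
        "EbsConfiguration": {
            "EbsOptimized": True,
            "EbsBlockDeviceConfigs": [{
                "VolumeSpecification": {"VolumeType": "gp2", "SizeInGB": 100},
                "VolumesPerInstance": 1,
            }],
        },
    }
    # Pass as: emr.run_job_flow(..., Instances={"InstanceGroups": [core_group, ...]})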
/emr/faqs/;What happens when I remove an attached volume from a running cluster?;Removing an attached volume from a running cluster will be treated as a node failure. Amazon EMR will replace both the node and the EBS volume. /emr/faqs/;What is Apache Spark?;Apache Spark™ is an open-source, distributed processing system used for big data workloads. It utilizes in-memory caching and optimized query execution for fast analytic queries against data of any size. Amazon EMR is the best place to deploy Apache Spark in the cloud, because it combines the integration and testing rigor of commercial Spark distributions with the scale, simplicity, and cost effectiveness of the cloud. It allows you to launch Spark clusters in minutes without needing to do node provisioning, cluster setup, Spark configuration, or cluster tuning. EMR features Amazon EMR runtime for Apache Spark, a performance-optimized runtime environment for Apache Spark that is active by default on Amazon EMR clusters. Amazon EMR runtime for Apache Spark can be over 3x faster than clusters without the EMR runtime, and has 100% API compatibility with standard Apache Spark. Learn more about Spark and Spark on Amazon EMR. /emr/faqs/;What is Presto?;Presto is an open source, distributed SQL query engine, designed from the ground up for fast analytic queries against data of any size. With Amazon EMR, you can launch Presto clusters in minutes without needing to do node provisioning, cluster setup, Presto configuration, or cluster tuning. EMR enables you to provision one, hundreds, or thousands of compute instances in minutes. Presto has two community projects – PrestoDB and PrestoSQL. Amazon EMR supports both projects. Learn more about Presto and Presto on Amazon EMR. /emr/faqs/;What is Apache Hive?;Hive is an open source data warehouse and analytics package that runs on top of Hadoop. Hive is operated by a SQL-based language called Hive QL that allows users to structure, summarize, and query data sources stored in Amazon S3. Hive QL goes beyond standard SQL, adding first-class support for map/reduce functions and complex extensible user-defined data types like JSON and Thrift. This capability allows processing of complex and even unstructured data sources such as text documents and log files. Hive allows user extensions via user-defined functions written in Java and deployed via storage in Amazon S3. You can learn more about Apache Hive here. /emr/faqs/;What can I do with Hive running on Amazon EMR?;Using Hive with Amazon EMR, you can implement sophisticated data-processing applications with a familiar SQL-like language and easy-to-use tools available with Amazon EMR. With Amazon EMR, you can turn your Hive applications into a reliable data warehouse to execute tasks such as data analytics, monitoring, and business intelligence. /emr/faqs/;How is Hive different from traditional RDBMS systems?;Traditional RDBMS systems provide transaction semantics and ACID properties. They also allow tables to be indexed and cached so that small amounts of data can be retrieved very quickly. They provide for fast updates of small amounts of data and for enforcement of referential integrity constraints. Typically, they run on a single large machine and do not provide support for executing map and reduce functions on the table, nor do they typically support acting over complex user defined data types. 
/emr/faqs/;How can I get started with Hive running on Amazon EMR?;The best place to start is to review our written documentation located here. /emr/faqs/;Are there new features in Hive specific to Amazon EMR?;Yes. Refer to our documentation for further details: /emr/faqs/;What types of Hive clusters are supported?;There are two types of clusters supported with Hive: interactive and batch. In interactive mode, a customer can start a cluster and run Hive scripts interactively directly on the master node. Typically, this mode is used to do ad hoc data analyses and for application development. In batch mode, the Hive script is stored in Amazon S3 and is referenced at the start of the cluster. Typically, batch mode is used for repeatable runs such as report generation. /emr/faqs/;How can I launch a Hive cluster?;Both batch and interactive clusters can be started from the AWS Management Console, the EMR command line client, or the APIs. Please refer to the Hive section in the Release Guide for more details on launching a Hive cluster. /emr/faqs/;When should I use Hive vs. PIG?;Hive and PIG both provide high-level data-processing languages with support for complex data types for operating on large datasets. The Hive language is a variant of SQL and so is more accessible to people already familiar with SQL and relational databases. Hive has support for partitioned tables, which allow Amazon EMR clusters to pull down only the table partition relevant to the query being executed rather than doing a full table scan. Both PIG and Hive have query plan optimization. PIG is able to optimize across an entire script, while Hive queries are optimized at the statement level. /emr/faqs/;What version of Hive does Amazon EMR support?;For the latest version of Hive on Amazon EMR, please refer to the documentation. /emr/faqs/;Can I share data between clusters?;Yes. You can read data in Amazon S3 within a Hive script by having ‘create external table’ statements at the top of your script. You need a create table statement for each external resource that you access. /emr/faqs/;Should I run one large cluster and share it amongst many users, or many smaller clusters?;Amazon EMR provides a unique capability for you to use both methods. On the one hand, one large cluster may be more efficient for processing regular batch workloads. On the other hand, if you require ad-hoc querying or workloads that vary with time, you may choose to create several separate clusters, each tuned to its specific task, sharing data sources stored in Amazon S3. You can use EMR Managed Scaling to optimize resource usage. /emr/faqs/;Can I access a script or jar resource which is on my local file system?;No. You must upload the script or jar to Amazon S3 or to the cluster’s master node before it can be referenced. For uploading to Amazon S3, you can use tools including s3cmd, jets3t, or S3Organizer. /emr/faqs/;Can I run a persistent cluster executing multiple Hive queries?;Yes. You can run a cluster in manual termination mode so it will not terminate between Hive steps. To reduce the risk of data loss, we recommend periodically persisting all of your important data in Amazon S3. It is good practice to regularly transfer your work to a new cluster to test your process for recovering from master node failure. /emr/faqs/;Can multiple users execute Hive steps on the same source data?;Yes. Hive scripts executed by multiple users on separate clusters may contain create external table statements to concurrently import source data residing in Amazon S3. 
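A minimal sketch of the ‘create external table’ pattern described above, held here as a Python string holding a Hive script; the table name, columns, and S3 bucket are placeholders. Multiple clusters can define the same external table and read the shared S3 data concurrently.

    # Upload this script to S3 and reference it when starting a batch Hive cluster,
    # or paste it into an interactive Hive session on the master node.
    hive_script = """
    CREATE EXTERNAL TABLE IF NOT EXISTS clicks (
      user_id STRING,
      url     STRING,
      ts      BIGINT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
    LOCATION 's3://my-bucket/click-data/';

    SELECT url, COUNT(*) FROM clicks GROUP BY url;
    """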
/emr/faqs/;Can multiple users run queries on the same cluster?;"Yes. In the batch mode, steps are serialized. Multiple users can add Hive steps to the same cluster; however, the steps will be executed serially. In interactive mode, several users can be logged on to the same cluster and execute Hive statements concurrently." /emr/faqs/;Can data be shared between multiple AWS users?;Yes. Data can be shared using the standard Amazon S3 sharing mechanisms described here. /emr/faqs/;Does Hive support access from JDBC?;Yes. Hive provides a JDBC driver, which can be used to programmatically execute Hive statements. To start a JDBC service in your cluster, you need to pass an optional parameter in the Amazon EMR command line client. You also need to establish an SSH tunnel because the security group does not permit external connections. /emr/faqs/;What is your procedure for updating packages on EMR AMIs?;On first boot, the Amazon Linux AMIs for EMR connect to the Amazon Linux AMI yum repositories to install security updates. When you use a custom AMI, you can disable this feature, but we don’t recommend this for security reasons. /emr/faqs/;Can I update my own packages on EMR clusters?;Yes. You can use Bootstrap Actions to install updates to packages on your clusters. /emr/faqs/;Can I process DynamoDB data using Hive?;Yes. Simply define an external Hive table based on your DynamoDB table. You can then use Hive to analyze the data stored in DynamoDB and either load the results back into DynamoDB or archive them in Amazon S3. For more information, please visit our Developer Guide. /emr/faqs/;What is Apache Hudi?;Apache Hudi is an open-source data management framework used to simplify incremental data processing and data pipeline development. Apache Hudi enables you to manage data at the record level in Amazon S3 to simplify Change Data Capture (CDC) and streaming data ingestion, and provides a framework to handle data privacy use cases requiring record-level updates and deletes. Data sets managed by Apache Hudi are stored in S3 using open storage formats, and integrations with Presto, Apache Hive, Apache Spark, and AWS Glue Data Catalog give you near real-time access to updated data using familiar tools. /emr/faqs/;When should I use Apache Hudi?;Apache Hudi helps you with use cases requiring record-level data management on S3. There are five common use cases that benefit from these abilities: /emr/faqs/;How do I create an Apache Hudi data set?;Apache Hudi data sets are created using Apache Spark. Creating a data set is as simple as writing an Apache Spark DataFrame. The metadata for Apache Hudi data sets can optionally be stored in the AWS Glue Data Catalog or the Hive metastore to simplify data discovery and for integrating with Apache Hive and Presto. /emr/faqs/;How does Apache Hudi manage data sets?;When creating a data set with Apache Hudi, you can choose what type of data access pattern the data set should be optimized for. For read-heavy use cases, you can choose the “Copy on Write” data management strategy to optimize for frequent reads of the data set. This strategy organizes data using columnar storage formats, and merges existing data and new updates when the updates are written. For write-heavy workloads, Apache Hudi uses the “Merge on Read” data management strategy, which organizes data using a combination of columnar and row storage formats, where updates are appended to a file in a row-based storage format, while the merge is performed at read time to provide the updated results. 
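A minimal PySpark sketch of writing a DataFrame as a Copy on Write Hudi data set through the Spark DataSource API; the table name, key fields, and S3 paths are placeholders, and the option names follow the Apache Hudi documentation.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hudi-example").getOrCreate()
    df = spark.read.json("s3://my-bucket/raw/orders/")

    hudi_options = {
        "hoodie.table.name": "orders",
        "hoodie.datasource.write.recordkey.field": "order_id",      # record-level key
        "hoodie.datasource.write.precombine.field": "updated_at",   # latest record wins
        "hoodie.datasource.write.partitionpath.field": "order_date",
        "hoodie.datasource.write.table.type": "COPY_ON_WRITE",      # or MERGE_ON_READ
        "hoodie.datasource.write.operation": "upsert",
    }

    # Older Hudi releases use .format("org.apache.hudi") instead of "hudi".
    (df.write.format("hudi")
       .options(**hudi_options)
       .mode("append")
       .save("s3://my-bucket/hudi/orders/"))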
/emr/faqs/;How do I write to an Apache Hudi data set?;"Changes to Apache Hudi data sets are made using Apache Spark. With Apache Spark, Apache Hudi data sets are operated on using the Spark DataSource API, enabling you to read and write data. A DataFrame containing newly added data or updates to existing data can be written using the same DataSource API. You can also use the Hudi DeltaStreamer utility." /emr/faqs/;How do I read from an Apache Hudi data set?;You can read data using either Apache Spark, Apache Hive, Presto, Amazon Redshift Spectrum, or Amazon Athena. When you create a data set, you have the option to publish the metadata of that data set in either the AWS Glue Data Catalog or the Hive metastore. If you choose to publish the metadata in a metastore, your data set will look just like an ordinary table, and you can query that table using Apache Hive and Presto. /emr/faqs/;What considerations or limitations should I be aware of when using Apache Hudi?;For a full list of considerations and limitations when using Apache Hudi on Amazon EMR, please refer to our Amazon EMR documentation. /emr/faqs/;How does my existing data work with Apache Hudi?;If you have existing data that you now want to manage with Apache Hudi, you can easily convert your Apache Parquet data to Apache Hudi data sets using an import tool provided with Apache Hudi on Amazon EMR, or you can use the Hudi DeltaStreamer utility or Apache Spark to rewrite your existing data as an Apache Hudi data set. /emr/faqs/;What is Impala?;Impala is an open source tool in the Hadoop ecosystem for interactive, ad hoc querying using SQL syntax. Instead of using MapReduce, it leverages a massively parallel processing (MPP) engine similar to that found in traditional relational database management systems (RDBMS). With this architecture, you can query your data in HDFS or HBase tables very quickly, and leverage Hadoop’s ability to process diverse data types and provide schema at runtime. This makes Impala well suited to interactive, low-latency analytics. In addition, Impala uses the Hive metastore to hold information about the input data, including the partition names and data types. Also, Impala on Amazon EMR requires AMIs running Hadoop 2.x or greater. Click here to learn more about Impala. /emr/faqs/;What can I do with Impala running on Amazon EMR?;Similar to using Hive with Amazon EMR, you can leverage Impala with Amazon EMR to implement sophisticated data-processing applications with SQL syntax. However, Impala is built to perform faster in certain use cases (see below). With Amazon EMR, you can use Impala as a reliable data warehouse to execute tasks such as data analytics, monitoring, and business intelligence. Here are three use cases: /emr/faqs/;How is Impala different from traditional RDBMSs?;Traditional relational database systems provide transaction semantics and database atomicity, consistency, isolation, and durability (ACID) properties. They also allow tables to be indexed and cached so that small amounts of data can be retrieved very quickly, provide for fast updates of small amounts of data, and for enforcement of referential integrity constraints. Typically, they run on a single large machine and do not provide support for acting over complex user-defined data types. Impala uses a similar distributed query system to that found in RDBMSs, but queries data stored in HDFS and uses the Hive metastore to hold information about the input data. As with Hive, the schema for a query is provided at runtime, allowing for easier schema changes. 
Also, Impala can query a variety of complex data types and execute user defined functions. However, because Impala processes data in-memory, it is important to understand the hardware limitations of your cluster and optimize your queries for the best performance. /emr/faqs/;How is Impala different than Hive?;Impala executes SQL queries using a massively parallel processing (MPP) engine, while Hive executes SQL queries using MapReduce. Impala avoids Hive’s overhead from creating MapReduce jobs, giving it faster query times than Hive. However, Impala uses significant memory resources and the cluster’s available memory places a constraint on how much memory any query can consume. Hive is not limited in the same way, and can successfully process larger data sets with the same hardware. Generally, you should use Impala for fast, interactive queries, while Hive is better for ETL workloads on large datasets. Impala is built for speed and is great for ad hoc investigation, but requires a significant amount of memory to execute expensive queries or process very large datasets. Because of these limitations, Hive is recommended for workloads where speed is not as crucial as completion. Click here to view some performance benchmarks between Impala and Hive. /emr/faqs/;Can I use Hadoop 1?;No, Impala requires Hadoop 2, and will not run on a cluster with an AMI running Hadoop 1.x. /emr/faqs/;What instance types should I use for my Impala cluster?;For the best experience with Impala, we recommend using memory-optimized instances for your cluster. However, we have shown that there are performance gains over Hive when using standard instance types as well. We suggest reading our Performance Testing and Query Optimization section in the Amazon EMR Developer’s Guide to better estimate the memory resources your cluster will need with regards to your dataset and query types. The compression type, partitions, and the actual query (number of joins, result size, etc.) all play a role in the memory required. You can use the EXPLAIN statement to estimate the memory and other resources needed for an Impala query. /emr/faqs/;What happens if I run out of memory on a query?;"If you run out of memory, queries fail and the Impala daemon installed on the affected node shuts down. Amazon EMR then restarts the daemon on that node so that Impala will be ready to run another query. Your data in HDFS on the node remains available, because only the daemon running on the node shuts down, rather than the entire node itself. For ad hoc analysis with Impala, the query time can often be measured in seconds; therefore, if a query fails, you can discover the problem quickly and be able to submit a new query in quick succession." /emr/faqs/;Does Impala support user defined functions?;Yes, Impala supports user defined functions (UDFs). You can write Impala specific UDFs in Java or C++. Also, you can modify UDFs or user-defined aggregate functions created for Hive for use with Impala. For information about Hive UDFs, click here. /emr/faqs/;Where is the data stored for Impala to query?;Impala queries data in HDFS or in HBase tables. /emr/faqs/;Can I run Impala and MapReduce at the same time on a cluster?;Yes, you can set up a multitenant cluster with Impala and MapReduce. However, you should be sure to allot resources (memory, disk, and CPU) to each application using YARN on Hadoop 2.x. The resources allocated should be dependent on the needs for the jobs you plan to run on each application. 
/emr/faqs/;Does Impala support ODBC and JDBC drivers?;While you can use ODBC drivers, Impala is also a great engine for third-party tools connected through JDBC. You can download and install the Impala client JDBC driver from http://elasticmapreduce.s3.amazonaws.com/libs/impala/1.2.1/impala-jdbc-1.2.1.zip. From the client computer where you have your business intelligence tool installed, connect the JDBC driver to the master node of an Impala cluster using SSH or a VPN on port 21050. For more information, see Open an SSH Tunnel to the Master Node. /emr/faqs/;What is Apache Pig?;Pig is an open source analytics package that runs on top of Hadoop. Pig is operated by a SQL-like language called Pig Latin, which allows users to structure, summarize, and query data sources stored in Amazon S3. As well as SQL-like operations, Pig Latin also adds first-class support for map/reduce functions and complex extensible user-defined data types. This capability allows processing of complex and even unstructured data sources such as text documents and log files. Pig allows user extensions via user-defined functions written in Java and deployed via storage in Amazon S3. /emr/faqs/;What can I do with Pig running on Amazon EMR?;Using Pig with Amazon EMR, you can implement sophisticated data-processing applications with a familiar SQL-like language and easy-to-use tools available with Amazon EMR. With Amazon EMR, you can turn your Pig applications into a reliable data warehouse to execute tasks such as data analytics, monitoring, and business intelligence. /emr/faqs/;How can I get started with Pig running on Amazon EMR?;The best place to start is to review our written documentation located here. /emr/faqs/;Are there new features in Pig specific to Amazon EMR?;Yes. There are three new features which make Pig even more powerful when used with Amazon EMR, including: /emr/faqs/;What types of Pig clusters are supported?;There are two types of clusters supported with Pig: interactive and batch. In interactive mode, a customer can start a cluster and run Pig scripts interactively directly on the master node. Typically, this mode is used to do ad hoc data analyses and for application development. In batch mode, the Pig script is stored in Amazon S3 and is referenced at the start of the cluster. Typically, batch mode is used for repeatable runs such as report generation. /emr/faqs/;How can I launch a Pig cluster?;Both batch and interactive clusters can be started from the AWS Management Console, the EMR command line client, or the APIs. /emr/faqs/;What version of Pig does Amazon EMR support?;Amazon EMR supports multiple versions of Pig. /emr/faqs/;Can I share input data in S3 between clusters?;Yes, you are able to read the same data in S3 from two concurrent clusters. /emr/faqs/;Can data be shared between multiple AWS users?;Yes. Data can be shared using the standard Amazon S3 sharing mechanisms described here http://docs.amazonwebservices.com/AmazonS3/latest/index.html?S3_ACLs.html /emr/faqs/;Should I run one large cluster and share it amongst many users, or many smaller clusters?;Amazon EMR provides a unique capability for you to use both methods. On the one hand, one large cluster may be more efficient for processing regular batch workloads. On the other hand, if you require ad-hoc querying or workloads that vary with time, you may choose to create several separate clusters, each tuned to its specific task, sharing data sources stored in Amazon S3. /emr/faqs/;Can I access a script or jar resource which is on my local file system?;No. 
You must upload the script or jar to Amazon S3 or to the cluster’s master node before it can be referenced. For uploading to Amazon S3, you can use tools including s3cmd, jets3t, or S3Organizer. /emr/faqs/;Can I run a persistent cluster executing multiple Pig queries?;Yes. You can run a cluster in manual termination mode so it will not terminate between Pig steps. To reduce the risk of data loss, we recommend periodically persisting all important data in Amazon S3. It is good practice to regularly transfer your work to a new cluster to test your process for recovering from master node failure. /emr/faqs/;Does Pig support access from JDBC?;No. Pig does not support access through JDBC. /emr/faqs/;What is Apache HBase?;HBase is an open source, non-relational, distributed database modeled after Google's BigTable. It was developed as part of Apache Software Foundation's Hadoop project and runs on top of Hadoop Distributed File System (HDFS) to provide BigTable-like capabilities for Hadoop. HBase provides you with a fault-tolerant, efficient way of storing large quantities of sparse data using column-based compression and storage. In addition, HBase provides fast lookup of data because data is stored in-memory instead of on disk. HBase is optimized for sequential write operations, and it is highly efficient for batch inserts, updates, and deletes. HBase works seamlessly with Hadoop, sharing its file system and serving as a direct input and output to Hadoop jobs. HBase also integrates with Apache Hive, enabling SQL-like queries over HBase tables, joins with Hive-based tables, and support for Java Database Connectivity (JDBC). You can learn more about Apache HBase here. /emr/faqs/;Are there new features in HBase specific to Amazon EMR?;With Amazon EMR, you can use HBase on Amazon S3 to store a cluster's HBase root directory and metadata directly in Amazon S3 and create read replicas and snapshots. Please see our documentation to learn more. /emr/faqs/;Which versions of HBase are supported on Amazon EMR?;You can look at the latest HBase versions supported on Amazon EMR here. /emr/faqs/;What does the EMR Connector to Kinesis enable?;The connector enables EMR to directly read and query data from Kinesis streams. You can now perform batch processing of Kinesis streams using existing Hadoop ecosystem tools such as Hive, Pig, MapReduce, Hadoop Streaming, and Cascading. /emr/faqs/;What does the EMR connector to Kinesis enable that I couldn’t have done before?;Reading and processing data from a Kinesis stream previously required you to write, deploy, and maintain independent stream processing applications. These take time and effort. However, with this connector, you can start reading and analyzing a Kinesis stream by writing a simple Hive or Pig script. This means you can analyze Kinesis streams using SQL! Of course, other Hadoop ecosystem tools could be used as well. You don’t need to develop or maintain a new set of processing applications. /emr/faqs/;Who will find this functionality useful?;The following types of users will find this integration useful: /emr/faqs/;What are some use cases for this integration?;The following representative use cases are enabled by this integration: /emr/faqs/;What EMR AMI version do I need to be able to use the connector?;You need to use EMR’s AMI version 3.0.4 or later. /emr/faqs/;Is this connector a stand-alone tool?;No, it is a built-in component of the Amazon distribution of Hadoop and is present on EMR AMI versions 3.0.4 and later. 
Customers simply need to spin up a cluster with AMI version 3.0.4 or later to start using this feature. /emr/faqs/;What data format is required to allow EMR to read from a Kinesis stream?;The EMR Kinesis integration is not data format-specific. You can read data in any format. Individual Kinesis records are presented to Hadoop as standard records that can be read using any Hadoop MapReduce framework. Individual frameworks like Hive, Pig, and Cascading have built-in components that help with serialization and deserialization, making it easy for developers to query data from many formats without having to implement custom code. For example, in Hive, users can read data from JSON files, XML files, and SEQ files by specifying the appropriate Hive SerDe when they define a table. Pig has a similar component called Loadfunc/Evalfunc and Cascading has a similar component called a Tap. Hadoop users can leverage the extensive ecosystem of Hadoop adapters without having to write format-specific code. You can also implement custom deserialization formats to read domain-specific data in any of these tools. /emr/faqs/;How do I analyze a Kinesis stream using Hive in EMR?;Create a table that references a Kinesis stream. You can then analyze the table like any other table in Hive. Please see our tutorials page for more details. /emr/faqs/;Using Hive, how do I create queries that combine Kinesis stream data with other data sources?;First create a table that references a Kinesis stream. Once a Hive table has been created, you can join it with tables mapping to other data sources such as Amazon S3, Amazon DynamoDB, and HDFS. This effectively results in joining data from the Kinesis stream with other data sources. /emr/faqs/;Is this integration only available for Hive?;No, you can use Hive, Pig, MapReduce, Hadoop Streaming, and Cascading. /emr/faqs/;How do I set up scheduled jobs to run on a Kinesis stream?;The EMR Kinesis input connector provides features that help you configure and manage scheduled periodic jobs in traditional scheduling engines such as Cron. For example, you can develop a Hive script that runs every N minutes. In the configuration parameters for a job, you can specify a Logical Name for the job. The Logical Name is a label that will inform the EMR Kinesis input connector that individual instances of the job are members of the same periodic schedule. The Logical Name allows the process to take advantage of iterations, which are explained next. /emr/faqs/;Where is the metadata for Logical Names and Iterations stored?;The metadata that allows the EMR Kinesis input connector to work in scheduled periodic workflows is stored in Amazon DynamoDB. You must provision an Amazon DynamoDB table and specify it as an input parameter to the Hadoop job. It is important that you configure appropriate IOPS for the table to enable this integration. Please refer to the getting started tutorial for more information on setting up your Amazon DynamoDB table. /emr/faqs/;What happens when iteration processing fails?;Iteration identifiers are user-provided values that map to a specific boundary (start and end sequence numbers) in a Kinesis stream. Data corresponding to these boundaries is loaded in the Map phase of the MapReduce job. This phase is managed by the framework and will be automatically re-run (three times by default) in case of job failure. If all the retries fail, you would still have options to retry the processing starting from the last successful data boundary or from past data boundaries. 
This behavior is controlled by providing the kinesis.checkpoint.iteration.no parameter during processing. Please refer to the getting started tutorial for more information on how this value is configured for different tools in the Hadoop ecosystem. /emr/faqs/;Can I run multiple queries on the same iteration?;Yes, you can specify a previously run iteration by setting the kinesis.checkpoint.iteration.no parameter in successive processing. The implementation ensures that successive runs on the same iteration will have precisely the same input records from the Kinesis stream as the previous runs. /emr/faqs/;What happens if records in an Iteration expire from the Kinesis stream?;In the event that the beginning sequence number and/or end sequence number of an iteration belong to records that have expired from the Kinesis stream, the Hadoop job will fail. You would need to use a different Logical Name to process data from the beginning of the Kinesis stream. /emr/faqs/;Can I push data from EMR into a Kinesis stream?;No. The EMR Kinesis connector currently does not support writing data back into a Kinesis stream. /emr/faqs/;Does the EMR Hadoop input connector for Kinesis enable continuous stream processing?;The Hadoop MapReduce framework is a batch processing system. As such, it does not support continuous queries. However, there is an emerging set of Hadoop ecosystem frameworks like Twitter Storm and Spark Streaming that enable developers to build applications for continuous stream processing. A Storm connector for Kinesis is available on GitHub here and you can find a tutorial explaining how to set up Spark Streaming on EMR and run continuous queries here. /emr/faqs/;Can I specify access credentials to read a Kinesis stream that is managed in another AWS account?;Yes. You can read streams from another AWS account by specifying the appropriate access credentials of the account that owns the Kinesis stream. By default, the Kinesis connector utilizes the user-supplied access credentials that are specified when the cluster is created. You can override these credentials to access streams from other AWS accounts by setting the kinesis.accessKey and kinesis.secretKey parameters. The following examples show how to set the kinesis.accessKey and kinesis.secretKey parameters in Hive and Pig. /emr/faqs/;Can I run multiple parallel queries on a single Kinesis stream? Is there a performance impact?;Yes, a customer can run multiple parallel queries on the same stream by using separate logical names for each query. However, reading from a shard within a Kinesis stream is subject to a rate limit of 2 MB/sec. Thus, if there are N parallel queries running on the same stream, each one would get roughly (2/N) MB/sec of egress rate per shard on the stream. This may slow down the processing and in some cases fail the queries as well. /emr/faqs/;Can I join and analyze multiple Kinesis streams in EMR?;Yes, for example in Hive, you can create two tables mapping to two different Kinesis streams and create joins between the tables. /emr/faqs/;Does the EMR Kinesis connector handle Kinesis scaling events, such as merge and split events?;Yes. The implementation handles split and merge events. The Kinesis connector ties individual Kinesis shards (the logical unit of scale within a Kinesis stream) to Hadoop MapReduce map tasks. Each unique shard that exists within a stream in the logical period of an Iteration will result in exactly one map task. In the event of a shard split or merge event, Kinesis will provision new unique shard IDs. 
As a result, the MapReduce framework will provision more map tasks to read from Kinesis. All of this is transparent to the user. /emr/faqs/;What happens if there are periods of “silence” in my stream?;The implementation allows you to configure a parameter called kinesis.nodata.timeout. For example, consider a scenario where kinesis.nodata.timeout is set to 2 minutes and you want to run a Hive query every 10 minutes. Additionally, consider some data has been written to the stream since the last iteration (10 minutes ago). However, currently no new records are arriving, i.e., there is silence in the stream. In this case, when the current iteration of the query launches, the Kinesis connector would find that no new records are arriving. The connector will keep polling the stream for 2 minutes, and if no records arrive for that interval, then it will stop and process only those records that were already read in the current batch of the stream. However, if new records start arriving before the kinesis.nodata.timeout interval is up, then the connector will wait for an additional interval corresponding to a parameter called kinesis.iteration.timeout. Please look at the tutorials to see how to define these parameters. /emr/faqs/;How do I debug a query that continues to fail in each iteration?;In the event of a processing failure, you can utilize the same tools you currently use when debugging Hadoop jobs, including the Amazon EMR web console, which helps identify and access error logs. More details on debugging an EMR job can be found here. /emr/faqs/;What happens if I specify a DynamoDB table that I don’t have access to?;The job would fail and the exception would show up in error logs for the job. /emr/faqs/;What happens if the job doesn’t fail but checkpointing to DynamoDB fails?;The job would fail and the exception would show up in error logs for the job. /emr/faqs/;How do I maximize the read throughput from a Kinesis stream to EMR?;Throughput from a Kinesis stream increases with the instance size used and the record size in the Kinesis stream. We recommend that you use m1.xlarge and above for both master and core nodes for this feature. /emr/faqs/;What is the Amazon EMR Service Level Agreement?;Please refer to our Service Level Agreement. /emr/faqs/;What does your Amazon EMR Service Level Agreement provide?;AWS will use commercially reasonable efforts to make each Amazon EMR Service available with a Monthly Uptime Percentage for each AWS region, in each case during any monthly billing cycle, of at least 99.9% (the “Service Commitment”). /emr/faqs/;How do I know if I qualify for a Service Credit? How do I claim one?;You can claim a Service Credit by opening a case in the AWS Support Center. To understand the eligibility and claim format, please see https://aws.amazon.com/emr/sla/ /cloudsearch/faqs/;What is Amazon CloudSearch?;Amazon CloudSearch is a fully-managed service in the AWS Cloud that makes it easy to set up, manage, and scale a search solution for your website or application. /cloudsearch/faqs/;What is a search engine?;"A search engine makes it possible to search large collections of mostly textual data items (called documents) to quickly find the best matching results. Search requests are usually a few words of unstructured text, such as ""matt damon movies"". The returned results are usually ranked with the best matching, or most relevant, items listed first (the ones that are most ""about"" the search words)." 
/cloudsearch/faqs/;What benefits does Amazon CloudSearch offer?;Amazon CloudSearch is a fully managed search service that automatically scales with the volume of data and complexity of search requests to deliver fast and accurate results. Amazon CloudSearch lets customers add search capability without needing to manage hosts, traffic and data scaling, redundancy, or software packages. Users pay low hourly rates only for the resources consumed. Amazon CloudSearch can offer significantly lower total cost of ownership compared to operating and managing your own search environment. /cloudsearch/faqs/;Can Amazon CloudSearch be used with a storage service?;A search service and a storage service are complementary. A search service requires that your documents already be stored somewhere, whether it's in files of a file system, data in Amazon S3, or records in an Amazon DynamoDB or Amazon RDS instance. The search service is a rapid retrieval system that makes those items searchable with sub-second latencies through a process called indexing. /cloudsearch/faqs/;Can Amazon CloudSearch be used with a database?;Search engines and databases are not mutually exclusive - in fact, they are often used together. If you already have a database that contains structured data, you might want to use a search engine to intelligently filter and rank the database contents using search keywords as relevance criteria. /cloudsearch/faqs/;What regions is Amazon CloudSearch available in?;Amazon CloudSearch is available in the following AWS Regions: US East (Northern Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), South America (Sao Paulo), and Asia Pacific (Singapore, Tokyo, Sydney, and Seoul). /cloudsearch/faqs/;What are the latest CloudSearch instance types?;"In Jan 2021, we launched new CloudSearch instance types to replace the older instances. The latest CloudSearch instances are search.small, search.medium, search.large, search.xlarge, and search.2xlarge, and are one-to-one replacements for the existing instances; for example, search.small replaces search.m1.small. The new instances leverage the latest generation EC2 instance types underneath, and hence provide better availability and performance at the same pricing." /cloudsearch/faqs/;How do we update our domains to the new instances?;We will automatically move your domain to the new instances seamlessly. No action is needed by you. We will do this migration incrementally over the next several weeks, starting with domains that are on the 2013 version of CloudSearch. You will see a notification on the console once your domain is updated to the new instance types. Any new domains that you create will automatically start using the new instances. If you have any questions about the migration, please reach out to AWS support. /cloudsearch/faqs/;Will I incur additional cost due to the new instances?;No. These instances are priced the same as the instances that you were using earlier or are currently using, and offer better availability and performance. /cloudsearch/faqs/;My domain is running previous generation CloudSearch instances such as search.m2.2xlarge. Will my domain be migrated?;Yes, your domain will be migrated to equivalent new instances in subsequent phases of the migration. For example, search.m2.2xlarge will be updated to search.previousgeneration.2xlarge. These instances are priced the same as the existing instances, and provide better stability for your domain. 
/cloudsearch/faqs/;What new features does Amazon CloudSearch support?;With this latest release Amazon CloudSearch supports several new search and administration features. The key new features include: /cloudsearch/faqs/;Does Amazon CloudSearch still support dictionary stemming?;Yes. The new version of Amazon CloudSearch supports dictionary stemming in addition to algorithmic stemming. /cloudsearch/faqs/;Does the new version of Amazon CloudSearch use Apache Solr?;Yes. The latest version of Amazon CloudSearch has been modified to use Apache Solr as the underlying text search engine. Amazon CloudSearch now provides several popular search engine features available with Apache Solr in addition to the managed search service experience that makes it easy to set up, operate, and scale a search domain. /cloudsearch/faqs/;Can I access the new version of Amazon CloudSearch through the console?;Yes. You can access the new version of Amazon CloudSearch through the console. If you are a current Amazon CloudSearch customer with existing search domains, you have the option to select which version of Amazon CloudSearch you want to use when creating new search domains. New customers will use the new version of Amazon CloudSearch by default and will not have access to the 2011-01-01 version. /cloudsearch/faqs/;What data types does the new version of Amazon CloudSearch support?;Amazon CloudSearch supports two types of text fields, text and literal. Text fields are processed according to the language configured for the field to determine individual words that can serve as matches for queries. Literal fields are not processed and must match exactly, including case. CloudSearch also supports four numeric types: int, double, date, and latlon. Int fields hold 64-bit, signed integer values. Double fields hold double-width floating point values. Date fields hold dates specified in UTC (Coordinated Universal Time) according to IETF RFC3339: yyyy-mm-ddT00:00:00Z. Latlon fields contain a location stored as a latitude and longitude value pair. /cloudsearch/faqs/;Will my existing search domains created with the 2011-02-01 version of Amazon CloudSearch continue to work?;Yes. Existing search domains created with the 2011-02-01 version of Amazon CloudSearch will continue to work. /cloudsearch/faqs/;Will I be able to use the new features on my existing search domains created with the 2011-01-01 version of Amazon CloudSearch?;No. Existing search domains created with the 2011-01-01 version of Amazon CloudSearch will not have access to the features available in the new version. To access the new features you will have to create a new search domain using the 2013-01-01 version of Amazon CloudSearch. /cloudsearch/faqs/;How can I migrate my applications built using the 2011-01-01 version of Amazon CloudSearch to the new version of Amazon CloudSearch?;To use the new version of Amazon CloudSearch you need to recreate existing domains using the new version of Amazon CloudSearch and re-upload your data. For more information, see Migrating to the 2013-01-01 API in the Amazon CloudSearch Developer Guide. /cloudsearch/faqs/;Will AWS continue to support the 2011-02-01 version of Amazon CloudSearch?;Yes. AWS will continue support for the 2011-02-01 version of Amazon CloudSearch. 
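A minimal sketch of configuring the field types described above (text, literal, int, double, date, latlon) with the AWS SDK for Python; the domain name "movies" and all field names are hypothetical, and only the required properties of each index field are shown.

```python
import boto3

# Assumption: a 2013-01-01 CloudSearch domain named "movies" already exists.
cs = boto3.client("cloudsearch", region_name="us-east-1")

# One illustrative field per type discussed above.
fields = [
    {"IndexFieldName": "title",        "IndexFieldType": "text"},     # language-processed text
    {"IndexFieldName": "genre",        "IndexFieldType": "literal"},  # exact, case-sensitive match
    {"IndexFieldName": "year",         "IndexFieldType": "int"},      # 64-bit signed integer
    {"IndexFieldName": "rating",       "IndexFieldType": "double"},   # double-width floating point
    {"IndexFieldName": "release_date", "IndexFieldType": "date"},     # RFC3339 UTC timestamp
    {"IndexFieldName": "location",     "IndexFieldType": "latlon"},   # latitude/longitude pair
]

for field in fields:
    cs.define_index_field(DomainName="movies", IndexField=field)

# Adding index fields is a configuration change that requires re-indexing the domain.
cs.index_documents(DomainName="movies")
```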
/cloudsearch/faqs/;Can I create new search domains using the 2011-02-01 version of Amazon CloudSearch?;Current Amazon CloudSearch customers who have existing 2011-02-01 domains will be able to choose whether their new domains use the 2011-02-01 API or the new 2013-01-01 API. Search domains created by new customers will automatically be created with the 2013-01-01 API. /cloudsearch/faqs/;Can I take advantage of the free trial offer with the new version of Amazon CloudSearch?;New customers will still be able to take advantage of the free trial offer available with Amazon CloudSearch. See the Amazon CloudSearch Free Trial page for details. /cloudsearch/faqs/;How do I get started with Amazon CloudSearch?;To sign up for Amazon CloudSearch, click the Create Free Account button on the Amazon CloudSearch detail page and complete the sign-up process. You must have an Amazon Web Services account. If you do not already have one, you will be prompted to create an AWS account when you begin the Amazon CloudSearch sign-up process. /cloudsearch/faqs/;Do the AWS SDKs support Amazon CloudSearch?;Yes, the AWS SDKs for Java, Ruby, Python, .Net, PHP, and Node.js provide support for CloudSearch. Using the AWS SDKs you can quickly create a search domain, configure your search fields, upload data, and send search queries to your search domain. /cloudsearch/faqs/;Can I still use the Amazon CloudSearch CLTs?;Yes, the Amazon CloudSearch CLTs will continue to work. /cloudsearch/faqs/;What is a search domain and how do I create one?;A search domain is a data container and a set of services that make the data searchable. These services include: /cloudsearch/faqs/;How do I upload documents to my search domain?;You upload documents to your domain using the AWS Management Console, AWS SDKs, or AWS CLI. /cloudsearch/faqs/;Do my documents need to be in a particular format?;To make your data searchable, you need to format your data in JSON or XML. Each item that you want to be able to receive as a search result is represented as a document. Every document has a unique document ID and one or more fields that contain the data that you want to search and return in results. Amazon CloudSearch generates a search index from your document data according to the index fields configured for the domain. As your data changes, you submit updates to add or delete documents from your index. /cloudsearch/faqs/;How do I create document batches formatted for Amazon CloudSearch?;To create document batches that describe your data, you create JSON or XML text files that specify: /cloudsearch/faqs/;How do my documents get indexed?;Documents are automatically indexed when you upload them to your search domain. You can also explicitly re-index your documents when you make configuration changes by sending an IndexDocuments request. /cloudsearch/faqs/;When do I need to re-index my domain?;Certain configuration options, such as adding a new index field or updating your stemming or stopword dictionaries, are not available until your domain is re-indexed. When you have made changes that require indexing, the domain’s status will indicate that it needs to be indexed. You can initiate indexing from the AWS Management Console, AWS SDKs, or AWS CLI. /cloudsearch/faqs/;How do I send search requests to my search domain?;Every search domain has a REST-based search service with a unique URL (search endpoint) that accepts search requests for its document set. You can send search requests from the AWS Management Console, AWS SDKs, or AWS CLI. 
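As a rough illustration of the document batch format and upload path described above, the sketch below builds a small JSON batch of add operations (each with a unique id and its fields) and posts it to a domain's document service endpoint; the endpoint URL and document contents are placeholders.

```python
import json
import boto3

# Assumption: replace with your own domain's document service endpoint.
doc_client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://doc-movies-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

# A batch is a JSON array of operations; each "add" carries a unique document ID
# and the fields to index and return in results.
batch = [
    {"type": "add", "id": "tt0076759",
     "fields": {"title": "Star Wars", "genre": "Sci-Fi", "year": 1977}},
    {"type": "add", "id": "tt0080684",
     "fields": {"title": "The Empire Strikes Back", "genre": "Sci-Fi", "year": 1980}},
]

response = doc_client.upload_documents(
    documents=json.dumps(batch).encode("utf-8"),
    contentType="application/json",
)
print(response["status"], "-", response["adds"], "documents added")
```

A delete operation uses the same batch format with "type": "delete" and the document ID to remove, as noted below.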
/cloudsearch/faqs/;Can a search domain span multiple Availability Zones?;Yes. If you enable the Multi-AZ option, Amazon CloudSearch deploys additional instances in a second availability zone in the same Region. For more information, see Configuring Availability Options in the Amazon CloudSearch Developer Guide. /cloudsearch/faqs/;Can I move a search domain from one region to another?;At this time, there is no way to automatically migrate a search domain from one region to another. You will need to create a new domain in the target region, configure the domain and upload your data, then delete the original domain. /cloudsearch/faqs/;How do I delete my search domain?;To delete a search domain, click on Delete Domain button in the Amazon CloudSearch console. You can also delete domains through the AWS SDKs or AWS CLI. /cloudsearch/faqs/;How do I delete documents from my search domain?;To delete documents you specify a delete operation in your batch upload that contains the ID of the document you want to remove. /cloudsearch/faqs/;How do I empty my search domain?;If you wish to maintain your domain’s endpoints, you can send a delete for each document that is in your domain. /cloudsearch/faqs/;"Why is my domain in the ""Processing"" state?";A domain can be in one of three different states: “processing,” “active,” or “reindexing.” Normally, your domain will be in the “active” state, which indicates that no changes are currently being made, that the domain can be queried and updated, and that all previous changes are currently visible in the search results. /cloudsearch/faqs/;What are the best practices for bootstrapping data into CloudSearch?;After you’ve launched your domain, the next step is loading your data into Amazon CloudSearch. You’ll likely need to upload a single large dataset, and then make smaller updates or additions as new data comes in. The following guidelines will help make bootstrapping your initial data into CloudSearch quick and easy. 1. Use the curl-v command line tool when preparing your script /cloudsearch/faqs/;What are some ways to avoid 504 errors?;If you’re seeing 504 errors or high replication counts, try moving to larger instance type. For example, if you’re having problems with m3.large, move up to m3.xlarge. If you continue to get 504 errors even after pre-scaling, start batching the data and increase the delay between retries. /cloudsearch/faqs/;What are the best practices to accelerate domain configuration and re-indexing?;When you change the configuration options of your search domain, you must rebuild your search index for those changes to take effect in search results. Rebuilding the index can take 30 to 60 minutes whether you make one configuration change at a time or several configuration changes at once. Even if your domain has only a small number of documents, re-indexing takes this time because of the processing and provisioning necessary to build the index and distribute it. Therefore, you should plan your configuration changes ahead of time, make all of your changes at once, and then re-index your domain. The same applies when setting up a new domain - plan your configuration before you set it up so that you can index only once and get up and running in the shortest time possible. Some domain changes require re-indexing while others just require re-deploying the existing index. Redeploying the domain takes 10 to 15 minutes compared to 30-60 minutes for re-indexing. 
During re-deployment, CloudSearch creates new nodes, deploys the index on them, and shuts down the old nodes. Your domain status changes to “Processing” during re-deployment. When re-indexing is needed, your domain status changes to “Needs Indexing,” followed by “Processing” once you have initiated indexing. Once the new index is created, your domain is re-deployed. The following table summarizes which changes require re-indexing followed by re-deployment and which changes require just re-deployment. Understanding this will help you better plan your configuration changes. /cloudsearch/faqs/;What search features does Amazon CloudSearch provide?;Amazon CloudSearch provides features to index and search both structured data and plain text, including faceted search, free text search, Boolean search expressions, customizable relevance ranking, query time rank expressions, field weighting, searching and sorting of results using any field, and text processing options including tokenization, stopwords, stemming and synonyms. It also provides near real-time indexing for document updates. New features include: /cloudsearch/faqs/;What is faceting?;"Faceting allows you to categorize your search results into refinements on which the user can further search. For example, a user might search for ""umbrellas"", and facets allow you to group the results by price, such as $0-$10, $10-$20, $20-$40, and so on. Amazon CloudSearch also allows for result counts to be included in facets, so that each refinement has a count of the number of documents in that group. The example could then be: $0-$10 (4 items), $10-$20 (123 items), $20-$40 (57 items), and so on." /cloudsearch/faqs/;What languages does Amazon CloudSearch support?;Amazon CloudSearch currently supports 34 languages: Arabic (ar), Armenian (hy), Basque (eu), Bulgarian (bg), Catalan (ca), simplified Chinese (zh-Simp), traditional Chinese (zh-Trad), Czech (cs), Danish (da), Dutch (nl), English (en), Finnish (fi), French (fr), Galician (gl), German (de), Greek (el), Hebrew (he), Hindi (hi), Hungarian (hu), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Korean (ko), Latvian (la), Norwegian (no), Persian (fa), Portuguese (pt), Romanian (ro), Russian (ru), Spanish (es), Swedish (sv), Thai (th), and Turkish (tr). In addition, Amazon CloudSearch supports a Multiple (mul) option for fields that contain mixed languages. /cloudsearch/faqs/;Does Amazon CloudSearch support geospatial search?;Yes, Amazon CloudSearch has a native type to support latitude and longitude (latlon), so that you can easily implement geographically-based searching and sorting. For more information, see Searching and Ranking Results by Geographic Location in the Amazon CloudSearch Developer Guide. /cloudsearch/faqs/;How quickly will my uploaded documents become searchable?;Documents uploaded to a search domain typically become searchable within seconds to a few minutes. /cloudsearch/faqs/;How many search requests can I send to my search domain?;There is no intrinsic limit on the number of search requests that can be sent to a search domain. /cloudsearch/faqs/;What factors affect the latency of my search requests?;Your search requests are typically processed within a few hundred milliseconds, frequently much faster. Latency is affected by many factors including the time it takes for your request and responses to travel between your own application and your search domain, the complexity of your search request, and how heavily you are using your search domain. 
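A minimal sketch of the faceted free-text search described above, sent to the domain's search endpoint with the Python SDK; the endpoint URL, the "genre" facet field, and the query text are hypothetical, and the facet request/response shapes shown are assumptions based on the 2013-01-01 search API.

```python
import boto3

# Assumption: replace with your own domain's search service endpoint.
search_client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-movies-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

# Free-text query plus a facet on a hypothetical "genre" field, mirroring the
# bucketed-refinement idea described above (each bucket comes back with a count).
result = search_client.search(
    query="star wars",
    queryParser="simple",
    facet='{"genre": {"sort": "count", "size": 5}}',
    size=10,
)

for hit in result["hits"]["hit"]:
    print(hit["id"])
for bucket in result["facets"]["genre"]["buckets"]:
    print(bucket["value"], bucket["count"])
```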
/cloudsearch/faqs/;What makes one search request more complex than another?;Amazon CloudSearch is designed to efficiently process a wide range of search requests very quickly. Search requests vary in complexity depending on the expressions that determine which documents match and additional criteria that determine how closely each document matches. Search requests that match a large number of documents take longer to process than those that match very few documents. Search requests that compute complex expressions take longer to process than those that rank using a simple criteria such as a single field. To help you understand the difference in complexity between Search requests, the time it took to process the request is returned as part of the response. /cloudsearch/faqs/;Where should I run my search application to minimize communication time with my search domain?;Applications hosted in the same AWS Region as your search domain will experience the fastest communication times. /cloudsearch/faqs/;What is a search instance?;A search instance is a single search engine in the cloud that indexes documents and responds to search requests. It has a finite amount of RAM and CPU resources for indexing data and processing requests. /cloudsearch/faqs/;What is a search partition?;A search partition is the portion of your data which fits on a single search instance. A search domain can have one or more search partitions, and the number of search partitions can change as your documents are indexed. /cloudsearch/faqs/;How does my search domain scale to meet my application needs?;Search domains scale in two dimensions: data and traffic. As your data volume grows, you need more (or larger) Search instances to contain your indexed data, and your index is partitioned among the search instances. As your request volume or request complexity increases, each Search Partition must be replicated to provide additional CPU for that Search Partition. For example, if your data requires three search partitions, you will have 3 search instances in your search domain. As your traffic increases beyond the capacity of a single search instance, each partition is replicated to provide additional CPU capacity, adding an additional three search instances to your search domain. Further increases in traffic will result in additional replicas, to a maximum of 5, for each search partition. /cloudsearch/faqs/;How much data can I upload to my search domain?;The number of partitions you need depends on your data and configuration, so the maximum data you can upload is the data set that when your search configuration is applied results in 10 search partitions. When you exceed your search partition limit, your domain will stop accepting uploads until you delete documents and re-index your domain. If you need more than 10 search partitions, please contact us. /cloudsearch/faqs/;Do I need to select the number and type of search instances for my search domain?;CloudSearch is a fully managed search service that automatically scales your search domain and selects the number and type of search instances. All search instances in a given search domain are of the same type and this type can change over time as your data or traffic grows. 
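A small sketch of inspecting how a domain has been scaled (instance type, partition count, instance count) and whether it needs indexing, using the configuration service; the domain name is hypothetical and the status fields shown are assumptions about the DescribeDomains response.

```python
import boto3

cs = boto3.client("cloudsearch", region_name="us-east-1")

# DescribeDomains reports how CloudSearch has currently scaled the domain:
# the search instance type in use, how many partitions the index is split into,
# and how many search instances (partitions x replicas) are running.
status = cs.describe_domains(DomainNames=["movies"])["DomainStatusList"][0]

print("Instance type:    ", status.get("SearchInstanceType"))
print("Partitions:       ", status.get("SearchPartitionCount"))
print("Search instances: ", status.get("SearchInstanceCount"))
print("Needs indexing:   ", status.get("RequiresIndexDocuments"))
print("Processing:       ", status.get("Processing"))
```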
/cloudsearch/faqs/;What instance types does Amazon CloudSearch support?;Amazon CloudSearch supports the following instance types: /cloudsearch/faqs/;How do I find out the number and type of search instances in my search domain?;You can find out the number and type of search instances in your search domain by using the AWS Management Console, AWS SDKs, or AWS CLI. The number and type of search instances change over time and automatically scale up and down according to your indexable data and search traffic. /cloudsearch/faqs/;How quickly does my search domain scale to accommodate changes in data and traffic?;Search domains typically react to increases in traffic changes within minutes. Changes in data volume or a reduction in traffic might take longer but you can accelerate this process by invoking an IndexDocuments operation. If you are about to upload a large amount of data or expect a surge in query traffic, you can prescale your domain by setting the desired instance type and replication count. For more information, see Configuring Scaling Options in the Amazon CloudSearch Developer Guide. /cloudsearch/faqs/;Does Amazon CloudSearch support Multi-AZ deployments?;Yes. Amazon CloudSearch supports Multi-AZ deployments. When you enable the Multi-AZ option, Amazon CloudSearch provisions and maintains extra instances for your search domain in a second Availability Zone to ensure high availability. Updates are automatically applied to the instances in both Availability Zones. Search traffic is distributed across all of the instances and the instances in either zone are capable of handling the full load in the event of a failure. /cloudsearch/faqs/;How does the new Multi-AZ feature work? Will my system experience any downtime in the event of a failure?;When the Multi-AZ option is enabled, Amazon CloudSearch instances in either zone are capable of handling the full load in the event of a failure. If there's service disruption or the instances in one zone become degraded, Amazon CloudSearch routes all traffic to the other Availability Zone. Redundant instances are restored in a separate Availability Zone without any administrative intervention or disruption in service. /cloudsearch/faqs/;Can a search domain be deployed in more than 2 Availability Zones?;No. The maximum number of Availability Zones a domain can be deployed in is two. /cloudsearch/faqs/;Can I modify the Multi-AZ configuration on my search domain?;Yes. You can turn the Multi-AZ configuration on and off for your search domains. The service is not interrupted when this setting is changed. /cloudsearch/faqs/;Can I choose which Availability Zones my search domain is deployed in?;No. At this time Amazon CloudSearch automatically chooses an alternate Availability Zone in the same Region. /cloudsearch/faqs/;Can I choose the instance type my domain uses?;Yes. With the latest release, Amazon CloudSearch enables you to specify the desired instance type for your domain. If necessary, Amazon CloudSearch will scale your domain up to a larger instance type, but will never scale back to a smaller instance type. /cloudsearch/faqs/;What is the fastest way to get my data into CloudSearch?;By default, all domains start out on a small search instance. If you need to upload a large amount of data, you should prescale your domain to a larger instance type. For more information, see Bulk Uploads in the Amazon CloudSearch Developer Guide. 
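A hedged sketch of the pre-scaling and Multi-AZ settings mentioned above, using the configuration service; the domain name, desired instance type, and replication count are illustrative values, not recommendations.

```python
import boto3

cs = boto3.client("cloudsearch", region_name="us-east-1")

# Pre-scale ahead of a bulk upload or an expected traffic surge by setting a
# desired instance type and replication count (values are illustrative).
cs.update_scaling_parameters(
    DomainName="movies",
    ScalingParameters={
        "DesiredInstanceType": "search.large",
        "DesiredReplicationCount": 2,
    },
)

# Enable the Multi-AZ option so instances are also maintained in a second
# Availability Zone within the same Region.
cs.update_availability_options(DomainName="movies", MultiAZ=True)
```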
/cloudsearch/faqs/;How do I know which instance type I should choose for my initial setup?;For datasets of less than 1 GB of data or fewer than one million 1 KB documents, start with the default settings of a single small search instance. For larger data sets consider pre-warming the domain by setting the desired instance type. For data sets up to 8 GB, start with a large search instance. For datasets between 8 GB and 16 GB, start with an extra large search instance. For datasets between 16 GB and 32 GB, start with a double extra large search instance. Contact us if you need more upload capacity or have more than 500 GB to index. /cloudsearch/faqs/;How do I upload my data to Amazon CloudSearch securely?;You send us your data using a secure and encrypted SSL connection by using HTTPS instead of HTTP when you connect to Amazon CloudSearch. /cloudsearch/faqs/;My data is already encrypted. Can I just send you the encrypted data and the encryption key?;We do not support user-generated encryption keys. You will need to decrypt the data and upload it using HTTPS. /cloudsearch/faqs/;Do you support encrypted search results?;Yes. We support HTTPS for all Amazon CloudSearch requests. /cloudsearch/faqs/;How can I prevent specific users from accessing my search domain?;Amazon CloudSearch supports IAM integration for the configuration service and all search domain services. You can grant users full access to Amazon CloudSearch, restrict their access to specific domains, and allow or deny access to specific actions. /cloudsearch/faqs/;How will I be charged and billed for my use of Amazon CloudSearch?;There are no set-up fees or commitments to begin using the service. Following the end of the month, your credit card will automatically be charged for that month's usage. You can view your charges for the current billing period at any time on the AWS web site by logging into your Amazon Web Services account and clicking Account Activity under Your Web Services Account. /cloudsearch/faqs/;How much does it cost to use Amazon CloudSearch?;There are no changes to the pricing structure for Amazon CloudSearch at this time. For detailed pricing information, see Amazon CloudSearch Pricing. /cloudsearch/faqs/;Is a free trial available for Amazon CloudSearch?;Yes, a free trial is available for new CloudSearch customers. For more information, see Amazon CloudSearch 30 Day Free Trial. /cloudsearch/faqs/;How much does it cost to use the new version of Amazon CloudSearch?;There are no changes to the pricing structure for Amazon CloudSearch at this time. See the Pricing page for more information. /cloudsearch/faqs/;Are there any cost savings to using the new version of Amazon CloudSearch?;The latest version of Amazon CloudSearch features advanced index compression and supports larger indexes on each instance type. This makes the new version of Amazon CloudSearch more efficient than the previous version and can result in significant cost savings. /cloudsearch/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /kinesis/data-streams/faqs/;What is Amazon Kinesis Data Streams?;With Amazon Kinesis Data Streams, you can build custom applications that process or analyze streaming data for specialized needs. 
You can add various types of data such as clickstreams, application logs, and social media to a Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your applications to read and process from the stream. /kinesis/data-streams/faqs/;What does Amazon Kinesis Data Streams manage on my behalf?;Amazon Kinesis Data Streams manages the infrastructure, storage, networking, and configuration needed to stream your data at the level of your data throughput. You don't have to worry about provisioning, deployment, or ongoing maintenance of hardware, software, or other services for your data streams. In addition, Kinesis Data Streams synchronously replicates data across three Availability Zones, providing high availability and data durability. By default, Kinesis Data Streams scales capacity automatically, freeing you from provisioning and managing capacity. You can choose provisioned mode if you want to provision and manage throughput on your own. /kinesis/data-streams/faqs/;What can I do with Amazon Kinesis Data Streams?;Kinesis Data Streams is useful for rapidly moving data off data producers and then continuously processing the data, whether that means transforming it before emitting to a data store, running real-time metrics and analytics, or deriving more complex data streams for further processing. /kinesis/data-streams/faqs/;How do I use Amazon Kinesis Data Streams?;After you sign up for AWS, you can start using Kinesis Data Streams by creating a Kinesis data stream through either the AWS Management Console or the CreateStream operation. Then configure your data producers to continuously add data to your data stream. You can optionally send data from existing resources in AWS services such as Amazon DynamoDB, Amazon Aurora, Amazon CloudWatch, and AWS IoT Core. You can then use AWS Lambda, Amazon Kinesis Data Analytics, or AWS Glue Streaming to quickly process data stored in Kinesis Data Streams. You can also build custom applications that run on Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), and Amazon Elastic Kubernetes Service (EKS) using either Amazon Kinesis API or Amazon Kinesis Client Library (KCL). /kinesis/data-streams/faqs/;What is a shard, producer, and consumer in Kinesis Data Streams?;A shard has a sequence of data records in a stream. It serves as a base throughput unit of a Kinesis data stream. A shard supports 1 MB/second and 1,000 records per second for writes and 2 MB/second for reads. The shard limits ensure predictable performance, making it easy to design and operate a highly reliable data streaming workflow. A producer puts data records into shards and a consumer gets data records from shards. Consumers use shards for parallel data processing and for consuming data in the exact order in which they are stored. If writes and reads exceed the shard limits, the producer and consumer applications will receive throttles, which can be handled through retries. /kinesis/data-streams/faqs/;What is a record?;A record is the unit of data stored in an Amazon Kinesis data stream. A record is composed of a sequence number, partition key, and data blob. Data blob is the data of interest your data producer adds to a data stream. The maximum size of a data blob (the data payload before Base64-encoding) is 1 megabyte (MB). /kinesis/data-streams/faqs/;What is a partition key?;A partition key is used to segregate and route records to different shards of a data stream. 
A partition key is specified by your data producer while adding data to a Kinesis data stream. For example, let’s say you have a data stream with two shards (shard 1 and shard 2). You can configure your data producer to use two partition keys (key A and key B) so that all records with key A are added to shard 1 and all records with key B are added to shard 2. /kinesis/data-streams/faqs/;What is a sequence number?;"A sequence number is a unique identifier for each record. Sequence number is assigned by Amazon Kinesis when a data producer calls PutRecord or PutRecords operation to add data to a Amazon Kinesis data stream. Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become." /kinesis/data-streams/faqs/;What is a capacity mode?;The capacity mode of Kinesis Data Streams determines how capacity is managed and usage is charged for a data stream. You can choose between provisioned and on-demand modes. In provisioned mode, you specify the number of shards for the data stream. The total capacity of a data stream is the sum of the capacities of its shards. You can increase or decrease the number of shards in a data stream as needed, and you pay for the number of shards at an hourly rate. In on-demand mode, AWS manages the shards to provide the necessary throughput. You pay only for the actual throughput used, and Kinesis Data Streams automatically accommodates your workload throughput needs as they ramp up or down. All Kinesis Data Streams write and read APIs, along with optional features such as Extended Retention and Enhanced Fan-Out, are supported in both capacity modes. /kinesis/data-streams/faqs/;How do I choose between on-demand and provisioned mode?;On-demand mode is best suited for workloads with unpredictable and highly variable traffic patterns. You should use this mode if you prefer AWS to manage capacity on your behalf or prefer pay-per-throughput pricing. Provisioned mode is best suited for predictable traffic, where capacity requirements are easy to forecast. You should consider using provisioned mode if you want fine-grained control over how data is distributed across shards. Provisioned mode is also suitable if you want to provision additional shards so the consuming application can have more read throughput to speed up the overall processing. /kinesis/data-streams/faqs/;Can I switch between on-demand and provisioned mode?;Yes. You can switch between on-demand and provisioned mode twice a day. The shard count of your data stream remains the same when you switch from provisioned mode to on-demand mode and vice versa. With the switch from provisioned to on-demand capacity mode, your data stream retains whatever shard count it had before the transition. But from that point on, Kinesis Data Streams monitors your data traffic and scales the shard count of this on-demand data stream up or down depending on traffic increase or decrease. /kinesis/data-streams/faqs/;How do I add data to my Amazon Kinesis data stream?;You can add data to a Kinesis data stream through PutRecord and PutRecords operations, Amazon Kinesis Producer Library (KPL), or Amazon Kinesis Agent. /kinesis/data-streams/faqs/;What is the difference between PutRecord and PutRecords?;PutRecord operation allows a single data record within an API call, and PutRecords operation allows multiple data records within an API call. For more information, see PutRecord and PutRecords. 
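A minimal sketch of the producer-side flow described above: creating a stream in on-demand capacity mode and writing records with PutRecord and PutRecords, where the partition key controls shard routing; the stream name, partition keys, and payloads are hypothetical.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Create a data stream in on-demand capacity mode (no shard count to manage).
kinesis.create_stream(
    StreamName="clickstream",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)
kinesis.get_waiter("stream_exists").wait(StreamName="clickstream")

# PutRecord writes a single record; the partition key determines which shard it lands on.
kinesis.put_record(
    StreamName="clickstream",
    Data=b'{"user": "A", "action": "page_view"}',
    PartitionKey="user-A",
)

# PutRecords batches multiple records (each with its own partition key) in one call.
kinesis.put_records(
    StreamName="clickstream",
    Records=[
        {"Data": b'{"user": "A", "action": "click"}', "PartitionKey": "user-A"},
        {"Data": b'{"user": "B", "action": "click"}', "PartitionKey": "user-B"},
    ],
)
```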
/kinesis/data-streams/faqs/;What is Amazon Kinesis Producer Library (KPL)?;Amazon Kinesis Producer Library (KPL) is an easy-to-use and highly configurable library that helps you put data into an Amazon Kinesis data stream. KPL presents a simple, asynchronous, and reliable interface that enables you to quickly achieve high producer throughput with minimal client resources. /kinesis/data-streams/faqs/;What is Amazon Kinesis Agent?;Amazon Kinesis Agent is a prebuilt Java application that offers an easy way to collect and send data to your Amazon Kinesis data stream. You can install the agent on Linux-based server environments such as web servers, log servers, and database servers. The agent monitors certain files and continuously sends data to your data stream. For more information, see Writing with Agents. /kinesis/data-streams/faqs/;What data is counted against the data throughput of an Amazon Kinesis data stream during a PutRecord or PutRecords call?;Your data blob, partition key, and data stream name are required parameters of a PutRecord or PutRecords call. The size of your data blob (before Base64 encoding) and partition key will be counted against the data throughput of your Amazon Kinesis data stream, which is determined by the number of shards within the data stream. /kinesis/data-streams/faqs/;What is a consumer, and what are different consumer types offered by Amazon Kinesis Data Streams?;A consumer is an application that processes all data from a Kinesis data stream. You can choose between shared fan-out and enhanced fan-out consumer types to read data from a Kinesis data stream. The shared fan-out consumers all share a shard’s 2 MB/second of read throughput and five transactions per second limits and require the use of the GetRecords API. An enhanced fan-out consumer gets its own 2 MB/second allotment of read throughput, allowing multiple consumers to read data from the same stream in parallel, without contending for read throughput with other consumers. You need to use the SubscribeToShard API with the enhanced fan-out consumers. We recommend using enhanced fan-out consumers if you want to add more than one consumer to your data stream. /kinesis/data-streams/faqs/;How I can process data captured and stored in Amazon Kinesis Data Streams?;You can use managed services such as AWS Lambda, Amazon Kinesis Data Analytics, and AWS Glue to process data stored in Kinesis Data Streams. These managed services take care of provisioning and managing the underlying infrastructure so you can focus on writing your business logic. You can also deliver data stored in Kinesis Data Streams to Amazon S3, Amazon OpenSearch Service, Amazon Redshift, and custom HTTP endpoints using its prebuilt integration with Kinesis Data Firehose. You can also build custom applications using Amazon Kinesis Client Library, a prebuilt library, or the Amazon Kinesis Data Streams API. /kinesis/data-streams/faqs/;What is Amazon Kinesis Client Library (KCL)?;Amazon Kinesis Client Library (KCL) for Java, Python, Ruby, Node.js, and .NET is a prebuilt library that helps you easily build Amazon Kinesis applications for reading and processing data from an Amazon Kinesis data stream. /kinesis/data-streams/faqs/;What is the SubscribeToShard API?;The SubscribeToShard API is a high-performance streaming API that pushes data from shards to consumers over a persistent connection without a request cycle from the client. 
The SubscribeToShard API uses the HTTP/2 protocol to deliver data to registered consumers whenever new data arrives on the shard, typically within 70 milliseconds, offering approximately 65% faster delivery compared to the GetRecords API. The consumers will enjoy fast delivery even when multiple registered consumers are reading from the same shard. /kinesis/data-streams/faqs/;What is enhanced fan-out?;Enhanced fan-out is an optional feature for Kinesis Data Streams consumers that provides logical 2 MB/second throughput pipes between consumers and shards. This allows you to scale the number of consumers reading from a data stream in parallel, while maintaining high performance. /kinesis/data-streams/faqs/;When should I use enhanced fan-out?;You should use enhanced fan-out if you have, or expect to have, multiple consumers retrieving data from a stream in parallel, or if you have at least one consumer that requires the use of the SubscribeToShard API to provide sub-200 millisecond data delivery speeds between producers and consumers. /kinesis/data-streams/faqs/;How is enhanced fan-out used by a consumer?;Consumers use enhanced fan-out by retrieving data with the SubscribeToShard API. The name of the registered consumer is used within the SubscribeToShard API, which leads to utilization of the enhanced fan-out benefit provided to the registered consumer. /kinesis/data-streams/faqs/;Can I have some consumers using enhanced fan-out, and other not?;Yes. You can have multiple consumers using enhanced fan-out and others not using enhanced fan-out at the same time. The use of enhanced fan-out does not impact the limits of shards for traditional GetRecords usage. /kinesis/data-streams/faqs/;Do I need to use enhanced fan-out if I want to use SubscribeToShard?;Yes. To use SubscribeToShard, you need to register your consumers, which activates enhanced fan-out. By default, your consumer will use enhanced fan-out automatically when data is retrieved through SubscribeToShard. /kinesis/data-streams/faqs/;What are the default throughput quotas to write data into data stream using on-demand mode?;A new data stream created in on-demand mode has a quota of 4 MB/second and 4,000 records per second for writes. By default, these streams automatically scale up to 200 MB/second and 200,000 records per second for writes. /kinesis/data-streams/faqs/;How do data streams scale in on-demand mode to handle increase in write throughput?;A data stream in on-demand mode accommodates up to double its previous peak write throughput observed in the last 30 days. As your data stream’s write throughput hits a new peak, Kinesis Data Streams scales the stream’s capacity automatically. For example, if your data stream has a write throughput that varies between 10 MB/second and 40 MB/second, Kinesis Data Streams will ensure that you can easily burst to double the peak throughput of 80 MB/second. Subsequently, if the same data stream sustains a new peak throughput of 50 MB/second, Data Streams will ensure that there is enough capacity to ingest 100 MB/second of write throughput. However, you will see “ProvisionedThroughputExceeded” exceptions if your traffic grows more than double the previous peak within a 15-minute duration. You need to retry these throttled requests. 
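A rough sketch of the shared fan-out consumer pattern discussed above, polling one shard with GetRecords, with the enhanced fan-out registration call shown for contrast; the stream name, consumer name, and account ID are hypothetical, and a production consumer would typically use KCL rather than raw API calls.

```python
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Shared fan-out consumer: poll a single shard with GetRecords.
shard_id = kinesis.list_shards(StreamName="clickstream")["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="clickstream",
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
)["ShardIterator"]

while iterator:
    response = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in response["Records"]:
        print(record["SequenceNumber"], record["Data"])
    iterator = response.get("NextShardIterator")
    time.sleep(1)  # stay within the shared 5 GetRecords transactions/second/shard limit

# Enhanced fan-out: registering a consumer gives it a dedicated 2 MB/second pipe,
# read via SubscribeToShard (usually through KCL 2.x).
# kinesis.register_stream_consumer(
#     StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/clickstream",
#     ConsumerName="analytics-app",
# )
```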
/kinesis/data-streams/faqs/;What are the throughput limits for reading data from streams in on-demand mode?;On-demand mode’s aggregate read capacity increases proportionally to write throughput to ensure that consuming applications always have adequate read throughput to process incoming data in real time. You get at least twice the write throughput to read data using the GetRecords API. We recommend using one consumer with the GetRecords API so it has enough room to catch up when the application needs to recover from downtime. To add more than one consuming application, you need to use enhanced fan-out, which supports adding up to 20 consumers to a data stream using the SubscribeToShard API, with each having dedicated throughput. /kinesis/data-streams/faqs/;What are the limits of Kinesis Data Streams in provisioned mode?;The throughput of a Kinesis data stream in provisioned mode is designed to scale without limits by increasing the number of shards within a data stream. /kinesis/data-streams/faqs/;How do I scale the capacity of Kinesis Data Streams in provisioned mode?;You can scale up a Kinesis data stream’s capacity in provisioned mode by splitting existing shards using the SplitShard API. You can scale down capacity by merging two shards using the MergeShards API. Alternatively, you can use the UpdateShardCount API to scale a stream’s capacity up (or down) to a specific shard count. /kinesis/data-streams/faqs/;How do I decide the throughput of my Amazon Kinesis data stream in provisioned mode?;The throughput of a Kinesis data stream is determined by the number of shards within the data stream. Follow the steps below to estimate the initial number of shards your data stream needs in provisioned mode. Note that you can dynamically adjust the number of shards within your data stream through resharding. /kinesis/data-streams/faqs/;What is the maximum throughput I can request for my Amazon Kinesis data stream in provisioned mode?;The throughput of a Kinesis data stream is designed to scale without limits. The default shard quota is 500 shards per stream for the following AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Ireland). For all other Regions, the default shard quota is 200 shards per stream. You can request an increase in the shard quota using the AWS Service Quotas console. /kinesis/data-streams/faqs/;What happens if the capacity limits of an Amazon Kinesis data stream are exceeded while the data producer adds data to the data stream in provisioned mode?;In provisioned mode, the capacity limits of a Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded either by data throughput or by the number of PUT records. While the capacity limits are exceeded, the put data call will be rejected with a ProvisionedThroughputExceeded exception. If this is due to a temporary rise of the data stream’s input data rate, retry by the data producer will eventually lead to completion of the requests. If it’s due to a sustained rise of the data stream’s input data rate, you should increase the number of shards within your data stream to provide enough capacity for the put data calls to consistently succeed. In both cases, Amazon CloudWatch metrics allow you to learn about the change of the data stream’s input data rate and the occurrence of ProvisionedThroughputExceeded exceptions. 
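A short sketch of the provisioned-mode scaling path described above, using the UpdateShardCount API; the stream name and target shard count are hypothetical, and the stream is assumed to already be in provisioned mode.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Scale a provisioned-mode stream to a target shard count. UNIFORM_SCALING
# splits or merges shards so the partition key space stays evenly distributed.
kinesis.update_shard_count(
    StreamName="orders-stream",
    TargetShardCount=4,          # illustrative target
    ScalingType="UNIFORM_SCALING",
)

# The stream remains readable and writable while the resharding completes.
summary = kinesis.describe_stream_summary(StreamName="orders-stream")
print(summary["StreamDescriptionSummary"]["OpenShardCount"])
```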
/kinesis/data-streams/faqs/;What happens if the capacity limits of an Amazon Kinesis data stream are exceeded while the Amazon Kinesis application reads data from the data stream in provisioned mode?;In provisioned mode, the capacity limits of a Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded either by data throughput or by the number of read data calls. While the capacity limits are exceeded, the read data call will be rejected with a ProvisionedThroughputExceeded exception. If this is due to a temporary rise of the data stream’s output data rate, retry by the Amazon Kinesis application will eventually lead to completion of the requests. If it’s due to a sustained rise of the data stream’s output data rate, you should increase the number of shards within your data stream to provide enough capacity for the read data calls to consistently succeed. In both cases, Amazon CloudWatch metrics allow you to learn about the change of the data stream’s output data rate and the occurrence of ProvisionedThroughputExceeded exceptions. /kinesis/data-streams/faqs/;What is the retention period supported by Kinesis Data Streams?;The default retention period of 24 hours covers scenarios where intermittent lags in processing require catch-up with the real-time data. A seven-day retention lets you reprocess data for up to seven days to resolve potential downstream data losses. Long term data retention greater than seven days and up to 365 days lets you reprocess old data for use cases such as algorithm back testing, data store backfills, and auditing. /kinesis/data-streams/faqs/;Can I use the existing Kinesis Data Streams APIs to read data older than seven days?;Yes. You can use the same getShardIterator, GetRecords, and SubscribeToShard APIs to read data retained for more than seven days. The consumers can move the iterator to the desired location in the stream, retrieve the shard map (including both open and closed), and read the records. /kinesis/data-streams/faqs/;Are there any new APIs to further assist in reading old data?;Yes. There are API enhancements to ListShards, GetRecords, and SubscribeToShard APIs. You can use the new filtering option with the TimeStamp parameter available in the ListShards API to efficiently retrieve the shard map and improve the performance of reading old data. The TimeStamp filter lets applications discover and enumerate shards from the point in time you wish to reprocess data and eliminate the need to start at the trim horizon. GetRecords and SubscribeToShards have a new field, ChildShards, which allows you to quickly discover the children shards when an application finishes reading data from a closed shard, instead of having to traverse the shard map again. The fast discovery of shards makes efficient use of the consuming application’s compute resources for any sized stream, irrespective of the data retention period. /kinesis/data-streams/faqs/;When do I use the API enhancements?;You should consider the API enhancements if you plan to retain data longer and scale your stream’s capacity regularly. Stream scaling operations close existing shards and open new child shards. The data in all the open and closed shards is retained until the end of the retention period. So the total number of shards increase linearly with a longer retention period and multiple scaling operations. 
This increase in the shard map requires you to use ListShards with the TimeStamp filter and ChildShards field in GetRecords, and SubscribeToShard API for efficient discovery of shards for data retrieval. You will need to upgrade your KCL to the latest version (1.x for standard consumers and 2.x for enhanced fan-out consumers) for these features. /kinesis/data-streams/faqs/;Does Amazon Kinesis Data Streams support schema registration?;Yes. Clients of Kinesis Data Streams can use the AWS Glue Schema Registry, a serverless feature of AWS Glue, either through the Kinesis Producer Library (KPL) and Kinesis Client Libraries (KCL) or through AWS Glue Schema Registry APIs in the AWS Java SDK. The Schema Registry is available at no additional charge. /kinesis/data-streams/faqs/;How do I change the throughput of my Amazon Kinesis data stream in provisioned mode?;There are two ways to change the throughput of your data stream. You can use the UpdateShardCount API or the AWS Management Console to scale the number of shards in a data stream, or you can change the throughput of an Amazon Kinesis data stream by adjusting the number of shards within the data stream (resharding). /kinesis/data-streams/faqs/;How long does it take to change the throughput of my Amazon Kinesis data stream running in provisioned mode using UpdateShardCount or the AWS Management Console?;Typical scaling requests should take a few minutes to complete. Larger scaling requests will take longer than smaller ones. /kinesis/data-streams/faqs/;Does Amazon Kinesis Data Streams remain available when I change the throughput of my Kinesis data stream in provisioned mode or when the scaling happens automatically in on-demand mode?;Yes. You can continue adding data to and reading data from your Kinesis data stream while you use UpdateShardCount or reshard to change the throughput of the data stream or when Kinesis Data Streams does it automatically in on-demand mode. /kinesis/data-streams/faqs/;How do I monitor the operations and performance of my Amazon Kinesis data stream?;Amazon Kinesis Data Streams Management Console displays key operational and performance metrics such as throughput of data input and output of your Kinesis data streams. Kinesis Data Streams also integrates with Amazon CloudWatch so you can collect, view, and analyze CloudWatch metrics for your data streams and shards within those data streams. For more information about Amazon Kinesis Data Streams metrics, see Monitoring Amazon Kinesis Data Streams with Amazon CloudWatch. /kinesis/data-streams/faqs/;How do I manage and control access to my Amazon Kinesis data stream?;Amazon Kinesis Data Streams integrates with AWS Identity and Access Management (IAM), a service that enables you to securely control access to your AWS services and resources for your users. For example, you can create a policy that allows only a specific user or group to add data to your Kinesis data stream. For more information about access management and control of your data stream, see Controlling Access to Amazon Kinesis Data Streams Resources using IAM. /kinesis/data-streams/faqs/;How do I log API calls made to my Amazon Kinesis data stream for security analysis and operational troubleshooting?;Kinesis Data Streams integrates with Amazon CloudTrail, a service that records AWS API calls for your account and delivers log files to you. For more information about API call logging and a list of supported Amazon Kinesis API operations, see Logging Amazon Kinesis API calls Using Amazon CloudTrail. 
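As a hedged illustration of the IAM-based access control mentioned above, the sketch below creates a producer-only policy that permits writing to one specific data stream; the account ID, stream name, and policy name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Producer-only policy: allows writing to a single data stream and nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kinesis:PutRecord", "kinesis:PutRecords"],
            "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/clickstream",
        }
    ],
}

iam.create_policy(
    PolicyName="clickstream-producer-only",
    PolicyDocument=json.dumps(policy_document),
)
# Attach the policy to the producer's IAM user, group, or role as appropriate.
```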
/kinesis/data-streams/faqs/;How do I effectively manage my Amazon Kinesis data streams and the costs associated with them?;Kinesis Data Streams allows you to tag your Kinesis data streams for easier resource and cost management. A tag is a user-defined label expressed as a key-value pair that helps organize AWS resources. For example, you can tag your data streams by cost centers so you can categorize and track your Kinesis Data Streams costs based on cost centers. For more information about Amazon Kinesis Data Streams tagging, see Tagging Your Amazon Kinesis Data Streams. /kinesis/data-streams/faqs/;When I use Kinesis Data Streams, how secure is my data?;Amazon Kinesis is secure by default. Only the account and data stream owners have access to the Kinesis resources they create. Kinesis supports user authentication to control access to data. You can use AWS IAM policies to selectively grant permissions to users and groups of users. You can securely put and get your data from Kinesis through SSL endpoints using the HTTPS protocol. If you need extra security, you can use server-side encryption with AWS Key Management Service (KMS) keys to encrypt data stored in your data stream. AWS KMS allows you to use AWS-generated KMS keys for encryption, or if you prefer, you can bring your own KMS key into AWS KMS. Lastly, you can use your own encryption libraries to encrypt data on the client side before putting the data into Kinesis. /kinesis/data-streams/faqs/;Can I privately access Kinesis Data Streams APIs from my Amazon Virtual Private Cloud (VPC) without using public IPs?;Yes. You can privately access Kinesis Data Streams APIs from your Amazon VPC by creating VPC Endpoints. With VPC Endpoints, the routing between the VPC and Kinesis Data Streams is handled by the AWS network without the need for an internet gateway, NAT gateway, or VPN connection. The latest generation of VPC Endpoints used by Kinesis Data Streams are powered by AWS PrivateLink, a technology that enables private connectivity between AWS services using Elastic Network Interfaces (ENI) with private IPs in your VPCs. To learn more about PrivateLink, visit the PrivateLink documentation. /kinesis/data-streams/faqs/;Can I encrypt the data I put into a Kinesis data stream?;Yes, and there are two options for doing so. You can use server-side encryption, which is a fully managed feature that automatically encrypts and decrypts data as you put and get it from a data stream. You can also write encrypted data to a data stream by encrypting and decrypting on the client side. /kinesis/data-streams/faqs/;Why should I use server-side encryption instead of client-side encryption?;You might choose server-side encryption over client-side encryption for any of the following reasons: /kinesis/data-streams/faqs/;What is server-side encryption?;Server-side encryption for Kinesis Data Streams automatically encrypts data using a user-specified AWS KMS key before it is written to the data stream storage layer, and decrypts the data after it is retrieved from storage. Encryption makes it impossible to write to the data stream, and makes the payload and partition key unreadable, unless the user writing to or reading from the data stream has permission to use the key selected for encryption on the data stream. As a result, server-side encryption can make it easier to meet internal security and compliance requirements governing your data.
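As a quick illustration of the tagging FAQ above, the following boto3 sketch applies cost-allocation tags to a stream; the stream name and tag values are placeholders.

```python
import boto3

kinesis = boto3.client("kinesis")

# Tag the stream by cost center so Kinesis Data Streams costs can be tracked per team.
kinesis.add_tags_to_stream(
    StreamName="my-stream",  # placeholder
    Tags={"cost-center": "analytics-1234", "environment": "production"},
)

# Confirm the tags currently applied to the stream.
print(kinesis.list_tags_for_stream(StreamName="my-stream")["Tags"])
```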
/kinesis/data-streams/faqs/;Is there a server-side encryption getting started guide?;Yes, there is a getting started guide in the user documentation. /kinesis/data-streams/faqs/;Does server-side encryption interfere with how my applications interact with Kinesis Data Streams?;Possibly. It depends on the key you use for encryption and the permissions governing access to the key. /kinesis/data-streams/faqs/;Is there an additional cost associated with the use of server-side encryption?;Yes. However, if you are using the AWS-managed KMS key for Kinesis and are not exceeding the AWS Free Tier KMS API usage limits, your use of server-side encryption is free. The following describes the costs by resource: /kinesis/data-streams/faqs/;Which AWS regions offer server-side encryption for Kinesis Data Streams?;Kinesis Data Streams server-side encryption is available in the AWS GovCloud Region and all public Regions except the China (Beijing) Region. /kinesis/data-streams/faqs/;How do I start, update, or remove server-side encryption from a data stream?;All of these operations can be completed using the AWS Management Console or the AWS SDK. To learn more, see the Kinesis Data Streams server-side encryption getting started guide. /kinesis/data-streams/faqs/;What encryption algorithm is used for server-side encryption?;Kinesis Data Streams uses an AES-GCM 256 algorithm for encryption. /kinesis/data-streams/faqs/;If I encrypt a data stream that already has data written to it, either in plain text or ciphertext, will all of the data in the data stream be encrypted or decrypted if I update encryption?;No. Only new data written into the data stream will be encrypted (or left unencrypted) by the new application of encryption. /kinesis/data-streams/faqs/;What does server-side encryption for Kinesis Data Streams encrypt?;Server-side encryption encrypts the payload of the message along with the partition key, which is specified by the data stream producer applications. /kinesis/data-streams/faqs/;Is server-side encryption a shard-specific feature or a stream-specific feature?;Server-side encryption is a stream-specific feature. /kinesis/data-streams/faqs/;Can I change the KMS key that is used to encrypt a specific data stream?;Yes, using the AWS Management Console or the AWS SDK, you can choose a new KMS key to apply to a specific data stream. /kinesis/data-streams/faqs/;Is Amazon Kinesis Data Streams available in the AWS Free Tier?;No. Amazon Kinesis Data Streams is not currently available in the AWS Free Tier. AWS Free Tier is a program that offers a free trial for a group of AWS services. For more details about AWS Free Tier, see AWS Free Tier. /kinesis/data-streams/faqs/;What does the Amazon Kinesis Data Streams SLA guarantee?;Our Kinesis Data Streams SLA guarantees a Monthly Uptime Percentage of at least 99.9% for Kinesis Data Streams. /kinesis/data-streams/faqs/;How do I know if I qualify for a SLA Service Credit?;You are eligible for a SLA credit for Kinesis Data Streams under the Kinesis Data Streams SLA if more than one Availability Zone in which you are running a task within the same Region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle. /kinesis/data-streams/faqs/;How does Amazon Kinesis Data Streams pricing work?;Kinesis Data Streams uses simple pay-as-you-go pricing. There are no upfront costs or minimum fees, and you pay only for the resources you use. Kinesis Data Streams has two capacity modes, on-demand and provisioned, and both come with specific billing options.
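The following boto3 sketch shows one way to start and later remove server-side encryption, as described in the preceding FAQs; the stream name is a placeholder, and the AWS-managed key alias alias/aws/kinesis is used rather than a customer-managed KMS key.

```python
import boto3

kinesis = boto3.client("kinesis")
STREAM = "my-stream"  # placeholder

# Turn on server-side encryption using the AWS-managed KMS key for Kinesis.
kinesis.start_stream_encryption(
    StreamName=STREAM,
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",
)

# Check the stream's current encryption status. In practice, wait for the stream
# to return to ACTIVE before issuing further encryption changes.
summary = kinesis.describe_stream_summary(StreamName=STREAM)["StreamDescriptionSummary"]
print(summary["EncryptionType"], summary.get("KeyId"))

# Remove server-side encryption again; only records written afterwards are affected.
kinesis.stop_stream_encryption(
    StreamName=STREAM,
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",
)
```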
/kinesis/data-streams/faqs/;How does Kinesis Data Streams pricing work in on-demand mode?;With on-demand capacity mode, you don’t need to specify how much read and write throughput you expect your application to perform. In this mode, pricing is based on the volume of data ingested and retrieved along with a per-hour charge for each data stream in your account. There are additional charges for optional features: Extended data retention (beyond the first 24 hours and within the first seven days), Long-Term data retention (beyond seven days and up to one year), and Enhanced Fan-Out. For more information about Kinesis Data Streams costs, see Amazon Kinesis Data Streams Pricing. /kinesis/data-streams/faqs/;How does Kinesis Data Streams pricing work in provisioned mode?;With provisioned capacity mode, you specify the number of shards necessary for your application based on its write and read request rate. A shard is a unit of capacity that provides 1 MB/second of write and 2 MB/second of read throughput. You’re charged for each shard at an hourly rate. You also pay for records written into your Kinesis data stream. You incur additional charges when you use optional features such as Extended retention and Enhanced Fan-Out. /kinesis/data-streams/faqs/;How is a consumer-shard hour calculated for Enhanced Fan-Out usage in provisioned mode?;A consumer-shard hour is calculated by multiplying the number of registered stream consumers by the number of shards in the stream. You pay only for the prorated portion of the hour that the consumer was registered to use enhanced fan-out. For example, if a consumer-shard hour costs $0.015, for a 10-shard data stream, this consumer using enhanced fan-out would be able to read from 10 shards, and thus incur a consumer-shard hour charge of $0.15 per hour (1 consumer * 10 shards * $0.015 per consumer-shard hour). If there were two consumers registered for enhanced fan-out simultaneously, the total consumer-shard hour charge would be $0.30 per hour (2 consumers * 10 shards * $0.015). /kinesis/data-streams/faqs/;How does Amazon Kinesis Data Streams differ from Amazon SQS?;Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Kinesis data stream (for example, to perform counting, aggregation, and filtering). Amazon Simple Queue Service (SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. /kinesis/data-streams/faqs/;When should I use Amazon Kinesis Data Streams, and when should I use Amazon SQS?;We recommend Kinesis Data Streams for use cases with requirements that are similar to the following: /msk/faqs/;What are Apache Kafka’s primary capabilities?;Apache Kafka has three key capabilities: /msk/faqs/;What does Amazon MSK do?;Amazon MSK makes it easy to get started and run open-source versions of Apache Kafka on AWS with high availability and security.
Amazon MSK also offers integrations with AWS services without the operational overhead of running an Apache Kafka cluster. Amazon MSK allows you to use open-source versions of Apache Kafka while the service manages the setup, provisioning, AWS integrations, and ongoing maintenance of Apache Kafka clusters. With a few clicks in the console, you can create an Amazon MSK cluster. From there, Amazon MSK replaces unhealthy brokers, automatically replicates data for high availability, manages Apache ZooKeeper nodes, automatically deploys hardware patches as needed, manages the integrations with AWS services, makes important metrics visible through the console, and supports Apache Kafka version upgrades so you can take advantage of improvements to the open-source version of Apache Kafka. /msk/faqs/;What is MSK Serverless?;MSK Serverless is a cluster type for Amazon MSK that lets you run Apache Kafka without managing or scaling cluster capacity. It automatically provisions and scales compute and storage resources while managing the partitions in your topics, so you can stream data on demand and pay for the throughput and storage you use. /msk/faqs/;Does MSK Serverless automatically balance partitions within a cluster?;Yes. MSK Serverless fully manages partitions, including monitoring and moving them to balance load evenly across a cluster. /msk/faqs/;How much data throughput capacity does MSK Serverless support?;MSK Serverless provides up to 200 MBps of write capacity and 400 MBps of read capacity per cluster. Additionally, to ensure sufficient throughput availability for all partitions in a cluster, MSK Serverless allocates up to 5 MBps of instant write capacity and 10 MBps of instant read capacity per partition. /msk/faqs/;What security features does MSK Serverless offer?;MSK Serverless encrypts all traffic in transit and all data at rest using service-managed keys issued through AWS Key Management Service (KMS). Clients connect to MSK Serverless over a private connection using AWS PrivateLink without exposing your traffic to the public internet. Additionally, MSK Serverless offers IAM Access Control, which you can use to manage client authentication and client authorization to Apache Kafka resources such as topics. /msk/faqs/;How can producers and consumers access my MSK Serverless clusters?;When you create an MSK Serverless cluster, you provide subnets of one or more Amazon Virtual Private Clouds (VPCs) that host the clients of the cluster. Clients hosted in any of these VPCs will be able to connect to the MSK Serverless cluster using its broker bootstrap string. /msk/faqs/;Which regions is MSK Serverless available in?;Please refer to the MSK pricing page for up-to-date regional availability. /msk/faqs/;Which authentication types does MSK Serverless support?;MSK Serverless currently supports AWS IAM for client authentication and authorization. Your clients can assume an AWS IAM role for authentication, and you can enforce access control using an associated IAM policy. /msk/faqs/;How do I process data in my MSK Serverless cluster?;You can use any Apache Kafka compatible tools to process data in your MSK Serverless cluster topics. MSK Serverless integrates with Amazon Kinesis Data Analytics for Apache Flink for stateful stream processing and AWS Lambda for event processing. You can also use Kafka Connect sink connectors to send data to any desired destination. /msk/faqs/;How does MSK Serverless ensure high availability?;When you create a partition, MSK Serverless creates two replicas of it and places them in different Availability Zones. Additionally, MSK Serverless automatically detects and recovers failed backend resources to maintain high availability. /msk/faqs/;How does data replication work in Amazon MSK?;Amazon MSK uses Apache Kafka’s leader-follower replication to replicate data between brokers.
Amazon MSK makes it easy to deploy clusters with multi-AZ replication and gives you the option to use a custom replication strategy by topic. By default with each of the replication options, leader and follower brokers will be deployed and isolated using the replication strategy specified. For example, if you select a three-AZ broker replication strategy with one broker per AZ, Amazon MSK will create a cluster of three brokers (one broker in each of three AZs in a Region), and by default (unless you choose to override the topic replication factor) the topic replication factor will also be three. /msk/faqs/;Can I change the default broker configurations or upload a cluster configuration to Amazon MSK?;Yes, you can create custom configurations and apply them to new and existing clusters. Otherwise, Amazon MSK uses Apache Kafka’s default configuration unless otherwise specified here. /msk/faqs/;How will the brokers in my Amazon MSK cluster be made accessible to clients within my VPC?;The brokers in your cluster are made accessible to clients in your VPC through elastic network interfaces (ENIs) that appear in your account; the security groups on the ENIs dictate the source and type of ingress and egress traffic allowed on your brokers. /msk/faqs/;Is it possible to connect to my cluster over the public Internet?;Yes, Amazon MSK offers an option to securely connect to the brokers of Amazon MSK clusters running Apache Kafka 2.6.0 or later versions over the internet. By enabling public access, authorized clients external to a private Amazon Virtual Private Cloud (VPC) can stream encrypted data in and out of specific Amazon MSK clusters. You can enable public access for MSK clusters after a cluster has been created at no additional cost, but standard AWS data transfer costs for cluster ingress and egress apply. To learn more about turning on public access, see the public access documentation. Note that, by default, the only way data can be produced and consumed from an Amazon MSK cluster is over a private connection between your clients in your VPC and the Amazon MSK cluster. If you turn on public access for your Amazon MSK cluster and connect to your MSK cluster using the public bootstrap-brokers string, the connection, though authenticated, authorized, and encrypted, is no longer considered private. We recommend that you configure the cluster's security groups to have inbound TCP rules that allow public access only from your trusted IP addresses, and make these rules as restrictive as possible, if you turn on public access. /msk/faqs/;How do I control cluster authentication and Apache Kafka API authorization?;You can use IAM Access Control to handle both client authentication and authorization, or you can authenticate clients with TLS certificates or SASL/SCRAM and use Apache Kafka access control lists (ACLs) for authorization. /msk/faqs/;How does authorization work in Amazon MSK?;If you are using IAM Access Control, Amazon MSK uses the policies you write and its own authorizer to authorize actions. If you are using TLS certificate authentication or SASL/SCRAM, Apache Kafka uses access control lists (ACLs) for authorization. To enable ACLs you must enable client authentication using either TLS certificates or SASL/SCRAM. /msk/faqs/;How can I authenticate and authorize a client at the same time?;If you are using IAM Access Control, Amazon MSK will authenticate and authorize for you without any additional setup. If you are using TLS authentication, you can use the DN (distinguished name) of the client’s TLS certificate as the principal of the ACL to authorize client requests. If you are using SASL/SCRAM, you can use the username as the principal of the ACL to authorize client requests. /msk/faqs/;How do I control service API actions?;You can control service API actions using AWS Identity and Access Management (IAM). /msk/faqs/;Can I enable IAM Access Control for an existing cluster?;Yes, you can enable IAM Access Control for an existing cluster from the AWS console or by using the UpdateSecurity API. /msk/faqs/;Can I use IAM Access Control for Apache Kafka clusters that I run outside of Amazon MSK?;No, IAM Access Control is only available for Amazon MSK clusters.
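As a sketch of enabling IAM Access Control on an existing provisioned cluster with the UpdateSecurity API mentioned above (the cluster ARN is a placeholder):

```python
import boto3

kafka = boto3.client("kafka")
CLUSTER_ARN = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/11111111-2222-3333-4444-555555555555-1"  # placeholder

# UpdateSecurity requires the cluster's current version for optimistic locking.
current_version = kafka.describe_cluster(ClusterArn=CLUSTER_ARN)["ClusterInfo"]["CurrentVersion"]

# Turn on SASL/IAM (IAM Access Control) for client authentication on the cluster.
kafka.update_security(
    ClusterArn=CLUSTER_ARN,
    CurrentVersion=current_version,
    ClientAuthentication={"Sasl": {"Iam": {"Enabled": True}}},
)
```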
/msk/faqs/;How much does it cost to publish the consumer lag metric to Amazon CloudWatch?;Topic-level consumer lag metrics are included in the default set of metrics that Amazon MSK publishes to Amazon CloudWatch at no additional charge. Partition-level consumer lag metrics are available through enhanced monitoring, for which standard Amazon CloudWatch pricing applies. /msk/faqs/;Is the use of Apache Kafka resource APIs logged to AWS CloudTrail?;Yes, if you use IAM Access Control, the use of Apache Kafka resource APIs is logged to AWS CloudTrail. /msk/faqs/;How do I access Apache Kafka broker logs?;You can enable broker log delivery for provisioned Amazon MSK clusters and deliver the logs to Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose. /msk/faqs/;What is the logging level for broker logs?;Amazon MSK provides INFO-level logs for all brokers within a cluster. /msk/faqs/;What AWS services does Amazon MSK integrate with?;Amazon MSK integrates with: /msk/faqs/;How does tiered storage work?;Tiered storage adds a low-cost storage tier to your cluster. Amazon MSK moves older topic data into the low-cost tier based on the retention settings you configure, while recent data remains in the performance-optimized primary storage tier, and your clients continue to read and write data using the same Apache Kafka APIs. /msk/faqs/;How can I automatically expand the storage in my cluster?;You can create an auto-scaling storage policy using the AWS Management Console or by creating an AWS Application Auto Scaling policy using the AWS CLI or APIs. /msk/faqs/;What compliance programs are in scope for Amazon MSK?;Amazon MSK is compliant or eligible for the following programs: /msk/faqs/;What does the Amazon MSK SLA guarantee?;Our Amazon MSK SLA guarantees a Monthly Uptime Percentage of at least 99.9% for Amazon MSK (not applicable to MSK Serverless). /msk/faqs/;How do I know if I qualify for a SLA Service Credit?;You are eligible for a SLA credit for Amazon MSK under the Amazon MSK SLA if Multi-AZ deployments on Amazon MSK have a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle. /opensearch-service/faqs/;What is Amazon OpenSearch Service?;Amazon OpenSearch Service is a managed service that makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. Amazon OpenSearch Service offers the latest versions of OpenSearch, support for 19 versions of Elasticsearch (1.5 to 7.10 versions), as well as visualization capabilities powered by OpenSearch Dashboards and Kibana (1.5 to 7.10 versions). Amazon OpenSearch Service currently has tens of thousands of active customers with hundreds of thousands of clusters under management processing trillions of requests per month. See the Amazon OpenSearch Service FAQ for more information. /opensearch-service/faqs/;Which OpenSearch and Elasticsearch versions does Amazon OpenSearch Service support?;Amazon OpenSearch Service offers the latest versions of OpenSearch and support for 19 versions of Elasticsearch (1.5 to 7.10 versions). For more details, refer to our documentation. /opensearch-service/faqs/;What is an Amazon OpenSearch Service domain?;Amazon OpenSearch Service domains are Elasticsearch (1.5 to 7.10) or OpenSearch clusters created using the Amazon OpenSearch Service console, CLI, or API. Each domain is an OpenSearch or Elasticsearch cluster in the cloud with the compute and storage resources you specify. You can create and delete domains, define infrastructure attributes, and control access and security. You can run one or more Amazon OpenSearch Service domains. /opensearch-service/faqs/;What does Amazon OpenSearch Service manage on my behalf?;Amazon OpenSearch Service manages the work involved in setting up a domain, from provisioning infrastructure capacity in the network environment you request to installing the OpenSearch or Elasticsearch software. Once your domain is running, Amazon OpenSearch Service automates common administrative tasks, such as performing backups, monitoring instances and patching software.
Amazon OpenSearch Service integrates with Amazon CloudWatch to produce metrics that provide information about the state of the domains. Amazon OpenSearch Service also offers options to modify your domain instance and storage settings to simplify the task of tailoring your domain to your application needs. /opensearch-service/faqs/;Does Amazon OpenSearch Service support the open-source Elasticsearch and OpenSearch APIs?;Amazon OpenSearch Service supports most of the commonly used OpenSearch and Elasticsearch APIs, so the code, applications, and popular tools that you're already using with your Elasticsearch (up to version 7.10) or OpenSearch environments work seamlessly. For a full list of supported operations, see our documentation. /opensearch-service/faqs/;What are the Availability Zone (AZ) deployment options available on Amazon OpenSearch Service?;Amazon OpenSearch Service offers customers the option to deploy their instances across one, two, or three AZs. Customers running development or test workloads can pick the single AZ option. Those running production-grade workloads should use two or three AZs. Three AZ deployments are strongly recommended for workloads with higher availability requirements. /opensearch-service/faqs/;In which regions does Amazon OpenSearch Service offer three AZ deployments?;Amazon OpenSearch Service supports three AZ deployments in all regions in which the service is available, except US West (N. California), where we support two AZs only. /opensearch-service/faqs/;Can I create and modify my Amazon OpenSearch Service domain through the Amazon OpenSearch Service console?;Yes. You can create a new Amazon OpenSearch Service domain with the Domain Creation Wizard in the console with just a few clicks. While creating a new domain you can specify the number of instances, instance types, and EBS volumes you want allocated to your domain. You can also modify or delete existing Amazon OpenSearch Service domains using the console. /opensearch-service/faqs/;Does Amazon OpenSearch Service support Amazon VPC?;Yes, Amazon OpenSearch Service is integrated with Amazon VPC. When choosing VPC access, IP addresses from your VPC are attached to your Amazon OpenSearch Service domain and all network traffic stays within the AWS network and is not accessible to the Internet. Moreover, you can use security groups and IAM policies to restrict access to your Amazon OpenSearch Service domains. /opensearch-service/faqs/;Can I use CloudFormation Templates to provision Amazon OpenSearch Service domains?;Yes. AWS CloudFormation supports Amazon OpenSearch Service. For more information, see the CloudFormation Template Reference documentation. /opensearch-service/faqs/;Does Amazon OpenSearch Service support configuring dedicated master nodes?;Yes. You can configure dedicated master nodes for your domains. When choosing a dedicated master configuration, you can specify the instance type and instance count. /opensearch-service/faqs/;Can I create multiple Elasticsearch or OpenSearch indices within a single Amazon OpenSearch Service domain?;Yes. You can create multiple Elasticsearch or OpenSearch indices within the same Amazon OpenSearch Service domain. Elasticsearch and OpenSearch automatically distribute the indices and any associated replicas between the instances allocated to the domain.
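To make the domain-creation options above concrete, here is a minimal boto3 sketch that creates a three-AZ domain with dedicated master nodes and EBS storage; the domain name, instance types, and sizes are illustrative only.

```python
import boto3

opensearch = boto3.client("opensearch")

# Three data nodes spread across three AZs, plus three dedicated master nodes.
opensearch.create_domain(
    DomainName="my-logs-domain",  # placeholder
    EngineVersion="OpenSearch_1.0",
    ClusterConfig={
        "InstanceType": "r6g.large.search",
        "InstanceCount": 3,
        "ZoneAwarenessEnabled": True,
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 3},
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "r6g.large.search",
        "DedicatedMasterCount": 3,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
)

# The domain is usable once Processing is False and an Endpoint has been assigned.
status = opensearch.describe_domain(DomainName="my-logs-domain")["DomainStatus"]
print(status["Processing"], status.get("Endpoint"))
```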
/opensearch-service/faqs/;How do I ingest data into my Amazon OpenSearch Service domain?;Amazon OpenSearch Service supports three options for data ingestion: /opensearch-service/faqs/;Does Amazon OpenSearch Service support integration with Logstash?;Yes. Amazon OpenSearch Service supports integration with Logstash. You can set up your Amazon OpenSearch Service domain as the backend store for all logs coming through your Logstash implementation. You can set up access control on your Amazon OpenSearch Service domain to either use request signing to authenticate calls from your Logstash implementation, or use resource based IAM policies to include IP addresses of instances running your Logstash implementation. /opensearch-service/faqs/;Does Amazon OpenSearch Service support integration with Kibana?;Yes. Amazon OpenSearch Service offers visualization capabilities powered by OpenSearch Dashboards and Kibana (1.5 to 7.10 versions). /opensearch-service/faqs/;What storage options are available with Amazon OpenSearch Service?;You can choose between local on-instance storage or EBS volumes. During domain creation, if you select EBS storage, you can increase and decrease the size of the storage volume as necessary. /opensearch-service/faqs/;What types of EBS volumes does Amazon OpenSearch Service support?;You can choose between Magnetic, General Purpose, and Provisioned IOPS EBS volumes. /opensearch-service/faqs/;How are dedicated master instances distributed across AZs?;If you deploy your data instances in a single AZ, your dedicated master instances are also deployed in the same AZ. However, if you deploy your data instances across two or three AZs, Amazon OpenSearch Service automatically distributes the dedicated master instances across three AZs. The exception to this rule occurs if a region only has two AZs or if you select an older-generation instance type for the master instances that is not available in all AZs. For more details, refer our documentation. /opensearch-service/faqs/;What is the recommended AZ configuration for production workloads?;For production workloads, we recommend deploying your data instances across three AZs since it offers better availability. Also, we recommend provisioning instances in multiples of three for equal distribution across AZs. In regions where three AZs are not available, we recommend using a two AZ deployment with an even number of data instances. In all cases, we recommend provisioning three dedicated master instances. /opensearch-service/faqs/;How can I configure my domain for three AZ deployment?;You can enable three AZ deployment for both existing and new domains using the AWS console, CLI or SDKs. For more details, refer our documentation. /opensearch-service/faqs/;Is there a fee for enabling three AZ deployment?;No. Amazon OpenSearch Service does not charge anything for enabling three AZ deployment. You only pay for the number of instances in your domain, not the number of AZs to which they are deployed. /opensearch-service/faqs/;I no longer see the “zone awareness” option in my console. Is my domain no longer zone aware?;All domains configured for multiple AZs will have zone awareness enabled to ensure shards are distributed across Availability Zones. In the console, you can now explicitly choose two or three AZ deployments. Domains previously configured with “Zone Awareness” will continue to be deployed across two AZs unless they are reconfigured. For more details, refer our documentation. 
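As an example of the request signing mentioned in the ingestion FAQ above, the following sketch indexes a single document with a SigV4-signed HTTP request; it assumes the third-party requests and requests-aws4auth packages and a placeholder domain endpoint (the opensearch-py client offers an equivalent signed connection).

```python
import boto3
import requests
from requests_aws4auth import AWS4Auth

REGION = "us-east-1"
ENDPOINT = "https://search-my-domain-abc123.us-east-1.es.amazonaws.com"  # placeholder

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    REGION,
    "es",  # signing service name for Amazon OpenSearch Service domain endpoints
    session_token=credentials.token,
)

# Index a single document into "my-index" with a signed PUT request.
doc = {"timestamp": "2022-01-01T00:00:00Z", "message": "hello"}
resp = requests.put(f"{ENDPOINT}/my-index/_doc/1", auth=awsauth, json=doc)
resp.raise_for_status()
print(resp.json()["result"])
```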
/opensearch-service/faqs/;How does Amazon OpenSearch Service handle instance failures and AZ disruptions?;If one or more instances in an AZ are unreachable or not functional, Amazon OpenSearch Service automatically tries to bring up new instances in the same AZ to replace the affected instances. In the rare event that new instances cannot be brought up in the AZ, Amazon OpenSearch Service brings up new instances in the other available AZs if the domain has been configured to deploy instances across multiple AZs. Once the AZ issue resolves, Amazon OpenSearch Service rebalances the instances such that they are equally distributed across the AZs configured for the domain. For more details refer our documentation. /opensearch-service/faqs/;If I have only one replica for the indices in my domain, should I use two or three AZs?;Even when you configure one replica, we recommend three AZs. If an AZ disruption occurs in a three AZ domain, you only lose one-third of your capacity but if the disruption occurs in a two AZ domain, you lose half your capacity, which can be more disruptive. Also, in a three AZ domain, when an AZ is disrupted, Amazon OpenSearch Service can fall back to the two remaining AZs, and still support cross-AZ replication . In a two AZ domain, you lose cross-AZ replication if one AZ is disrupted, which can further reduce availability. For more details refer our documentation. /opensearch-service/faqs/;How do I leverage three AZ deployment for my VPC domain?;The number of AZs your domain is deployed to corresponds to the number of subnets you have configured for your VPC domain. You need to configure at least three subnets in your VPC domain to enable three AZ deployment. For more details on configuring VPC, refer our documentation. /opensearch-service/faqs/;Can programs running on servers in my own data center access my Amazon OpenSearch Service domains?;Yes. The programs with public Internet access can access Amazon OpenSearch Service domains through a public endpoint. If your data center is already connected to Amazon VPC through Direct Connect or SSH tunneling, you can also use VPC access. In both cases, you can configure IAM policies and security groups to allow programs running on servers outside of AWS to access your Amazon OpenSearch Service domains. Click here for more information about signed requests. /opensearch-service/faqs/;How can I migrate data from my existing OpenSearch/Elasticsearch cluster to a new Amazon OpenSearch Service domain?;To migrate data from an existing Elasticsearch or OpenSearch cluster, you should create a snapshot of an existing cluster, and store the snapshot in your Amazon S3 bucket. Then you can create a new Amazon OpenSearch Service domain and load data from the snapshot into the newly created Amazon OpenSearch Service domain using the restore API. /opensearch-service/faqs/;How can I scale an Amazon OpenSearch Service domain?;Amazon OpenSearch Service allows you to control the scaling of your Amazon OpenSearch Service domains using the console, API, and CLI. You can scale your Amazon OpenSearch Service domain by adding, removing, or modifying instances or storage volumes depending on your application needs. Amazon OpenSearch Service is integrated with Amazon CloudWatch to provide metrics about the state of your Amazon OpenSearch Service domains to enable you to make appropriate scaling decisions for your domains. /opensearch-service/faqs/;Does scaling my Amazon OpenSearch Service domain require downtime?;No. 
Scaling your Amazon OpenSearch Service domain by adding or modifying instances and storage volumes is an online operation that does not require any downtime. /opensearch-service/faqs/;Does Amazon OpenSearch Service support cross-zone replication?;Yes. If you enable replicas for your OpenSearch/Elasticsearch indices and use multiple Availability Zones, Amazon OpenSearch Service automatically distributes your primary and replica shards across instances in different AZs. /opensearch-service/faqs/;Does Amazon OpenSearch Service expose any performance metrics through Amazon CloudWatch?;Yes. Amazon OpenSearch Service exposes several performance metrics through Amazon CloudWatch including number of nodes, cluster health, searchable documents, EBS metrics (if applicable), CPU, memory and disk utilization for data and master nodes. Please refer to the service documentation for a full listing of available CloudWatch metrics. /opensearch-service/faqs/;I wish to perform security analysis or operational troubleshooting of my Amazon OpenSearch Service deployment. Can I get a history of all the Amazon OpenSearch Service API calls made on my account?;Yes. AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The AWS API call history produced by AWS CloudTrail enables security analysis, resource change tracking, and compliance auditing. Learn more about AWS CloudTrail at the AWS CloudTrail detail page, and turn it on via CloudTrail's AWS Management Console home page. /opensearch-service/faqs/;What is a snapshot?;A snapshot is a copy of your Amazon OpenSearch Service domain at a moment in time. /opensearch-service/faqs/;Why would I need snapshots?;Creating snapshots can be useful in case of data loss caused by node failure, as well as the unlikely event of a hardware failure. You can use snapshots to recover your Amazon OpenSearch Service domain with preloaded data or to create a new Amazon OpenSearch Service domain with preloaded data. Another common reason to use backups is for archiving purposes. Snapshots are stored in Amazon S3. /opensearch-service/faqs/;Does Amazon OpenSearch Service provide automated snapshots?;Yes. By default, Amazon OpenSearch Service automatically creates hourly snapshots of each Amazon OpenSearch Service domain and retains them for 14 days. /opensearch-service/faqs/;How long are the automated hourly snapshots stored by Amazon OpenSearch Service?;Amazon OpenSearch Service will retain the last 14 days’ worth of automated hourly snapshots. /opensearch-service/faqs/;Is there a charge for the automated hourly snapshots?;There is no additional charge for the automated hourly snapshots. The snapshots are stored for free in an Amazon OpenSearch Service S3 bucket and will be made available for node recovery purposes. /opensearch-service/faqs/;Can I create additional snapshots of my Amazon OpenSearch Service domains as needed?;Yes. You can use the snapshot API to create additional manual snapshots in addition to the automated snapshots created by Amazon OpenSearch Service. The manual snapshots are stored in your S3 bucket and will incur relevant Amazon S3 usage charges. /opensearch-service/faqs/;Can snapshots created by the manual snapshot process be used to recover a domain in the event of a failure?;Yes. Customers can create a new Amazon OpenSearch Service domain and load data from the snapshot into the newly created Amazon OpenSearch Service domain using the OpenSearch/Elasticsearch restore API.
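A sketch of the snapshot-based migration and restore flow described above: register a manual snapshot repository backed by S3 on the new domain, then restore from it. The S3 bucket, IAM role ARN, snapshot name, and domain endpoint are placeholders, and the requests/requests-aws4auth signing setup mirrors the earlier ingestion example.

```python
import boto3
import requests
from requests_aws4auth import AWS4Auth

REGION = "us-east-1"
ENDPOINT = "https://search-new-domain-xyz789.us-east-1.es.amazonaws.com"  # placeholder

creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, REGION, "es",
                   session_token=creds.token)

# Register the S3 bucket that holds the snapshot taken from the existing cluster.
repo = {
    "type": "s3",
    "settings": {
        "bucket": "my-snapshot-bucket",                             # placeholder
        "region": REGION,
        "role_arn": "arn:aws:iam::123456789012:role/SnapshotRole",  # placeholder
    },
}
requests.put(f"{ENDPOINT}/_snapshot/migration-repo", auth=awsauth, json=repo).raise_for_status()

# Restore the indices from the chosen snapshot into the new domain.
requests.post(
    f"{ENDPOINT}/_snapshot/migration-repo/snapshot-2022-01-01/_restore", auth=awsauth
).raise_for_status()
```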
/opensearch-service/faqs/;What happens to my snapshots when I delete my Amazon OpenSearch Service domain?;The automated snapshots retained by Amazon OpenSearch Service will be deleted as part of domain deletion. Before deleting a domain, you should consider creating a snapshot of the domain in your own S3 buckets using the manual snapshot process. The snapshots stored in your S3 bucket will not be affected if you delete your Amazon OpenSearch Service domain. /opensearch-service/faqs/;What types of OpenSearch/Elasticsearch logs are exposed by Amazon OpenSearch Service?;Amazon OpenSearch Service exposes three Elasticsearch or OpenSearch logs through Amazon CloudWatch Logs: error logs, search slow logs, and index slow logs. These logs are useful for troubleshooting performance and stability issues with your domain. /opensearch-service/faqs/;What are slow logs?;Slow logs are log files that help track the performance of various stages in an operation. OpenSearch and Elasticsearch expose two kinds of slow logs: /opensearch-service/faqs/;How can I enable slow logs on Amazon OpenSearch Service?;Slow logs can be enabled with the click of a button from the Console or via our CLI and APIs. For more details please refer to our documentation. /opensearch-service/faqs/;Can I only enable slow logs for specific indices?;Yes. You can update the settings for a specific index to enable or disable slow logs for it. For more details refer to our documentation. /opensearch-service/faqs/;Does turning on slow logs in Amazon OpenSearch Service automatically enable logging for all indexes?;No. Turning on slow logs in Amazon OpenSearch Service enables the option to publish the generated logs to Amazon CloudWatch Logs for indices in the given domain. However, in order to generate the logs you have to update the settings for one or more indices to start the logging process. For more details on setting the index configuration for enabling slow logs, please refer to our documentation. /opensearch-service/faqs/;If I turn off the slow logs in Amazon OpenSearch Service, does it mean that log files are no longer being generated?;No. The generation of log files is dependent on the index settings. To turn off generation of the log files you have to update the index configuration. For more details on setting the index configuration for enabling slow logs, see our documentation. /opensearch-service/faqs/;Can I change the granularity of logging?;You can only change the granularity of logging for slow logs. OpenSearch and Elasticsearch expose multiple levels of logging for slow logs. You need to set the appropriate level in the configuration of your index. For more details on setting the index configuration for enabling slow logs, please refer to the OpenSearch documentation. /opensearch-service/faqs/;Will enabling slow logs or error logs cost me anything?;When slow logs or error logs are enabled, Amazon OpenSearch Service starts publishing the generated logs to CloudWatch Logs. Amazon OpenSearch Service does not charge anything for enabling the logs. However, standard CloudWatch charges will apply. /opensearch-service/faqs/;What kinds of error logs are exposed by Amazon OpenSearch Service?;OpenSearch uses Apache Log4j 2 and its built-in log levels (from least to most severe) of TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. If you enable error logs, Amazon OpenSearch Service publishes log lines of the WARN, ERROR, and FATAL levels, and select errors from the DEBUG level, to CloudWatch. For more details, refer to our documentation.
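For example, a boto3 sketch of turning on slow-log publishing to CloudWatch Logs through the configuration API, as described above; the domain name and log group ARN are placeholders, and the log group (with an appropriate resource policy) is assumed to exist already.

```python
import boto3

opensearch = boto3.client("opensearch")
LOG_GROUP_ARN = "arn:aws:logs:us-east-1:123456789012:log-group:/aws/opensearch/slow-logs"  # placeholder

# Publish both search and index slow logs for the domain to CloudWatch Logs.
opensearch.update_domain_config(
    DomainName="my-logs-domain",  # placeholder
    LogPublishingOptions={
        "SEARCH_SLOW_LOGS": {"CloudWatchLogsLogGroupArn": LOG_GROUP_ARN, "Enabled": True},
        "INDEX_SLOW_LOGS": {"CloudWatchLogsLogGroupArn": LOG_GROUP_ARN, "Enabled": True},
    },
)
```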
/opensearch-service/faqs/;How can I enable error logs on Amazon OpenSearch Service?;Error logs can be enabled via the click of a button from the AWS Console or via our CLI and APIs. For more details please refer to our documentation. /opensearch-service/faqs/;Can I enable error logs for only specific indices?;No, error logs are exposed for the entire domain. That is, once enabled, log entries from all indices in the domain will be made available. /opensearch-service/faqs/;Are error logs available for all versions of Elasticsearch supported by Amazon OpenSearch Service?;No, error logs are available only for Elasticsearch versions 5.x and above. /opensearch-service/faqs/;Is there any limit on the size of each log entry?;Yes. Each log entry made into CloudWatch will be limited to 255,000 characters. If your log entry is bigger than that, it will be truncated to 255,000 characters. /opensearch-service/faqs/;What is the recommended best practice for using slow logs?;Slow logs are only needed when you want to troubleshoot your indexes or fine-tune performance. The recommended approach is to only enable logging for those indexes for which you need additional performance insights. Also, once the investigation is done, you should turn off logging so that you don’t incur any additional costs on account of it. For more details, see our documentation. /opensearch-service/faqs/;How can I consume logs from CloudWatch Logs?;CloudWatch offers multiple ways to consume logs. You can view log data, export it to S3, or process it in real time. To learn more, see the CloudWatch Logs developer guide. /opensearch-service/faqs/;Are slow logs available for all versions of OpenSearch and Elasticsearch supported by Amazon OpenSearch Service?;Yes, slow logs can be enabled for all versions of OpenSearch and Elasticsearch supported by Amazon OpenSearch Service. However, there are slight differences in the way log settings can be specified for each version of Elasticsearch. Please refer to our documentation for more details. /opensearch-service/faqs/;Will the cluster have any down time when logging is turned on or off?;No. There will not be any down-time. Every time the log status is updated, we will deploy a new cluster in the background and replace the existing cluster with the new one. This process will not cause any down time. However, since a new cluster is deployed the update to the log status will not be instantaneous. /opensearch-service/faqs/;Which Elasticsearch and OpenSearch versions does the in-place upgrade feature support?;Amazon OpenSearch Service currently supports in-place version upgrade for domains with any OpenSearch version or Elasticsearch versions 5.x and above. The target versions that we support for the upgrade are 5.6, 6.3, 6.4, 6.5, 6.7, 6.8, 7.1, 7.4, 7.7, 7.8, 7.9, and 7.10. For more details refer our documentation. /opensearch-service/faqs/;My domain runs a version of Elasticsearch older than 5.x. How do I upgrade those domains?;Please refer to our documentation for details on migrating from various Elasticsearch versions. /opensearch-service/faqs/;Will my domain be offline while the in-place upgrade is in progress?;No. Your domain remains available throughout the upgrade process. However, part of the upgrade process involves relocating shards, which can impact domain performance. We recommend upgrading when the load on your domain is low. 
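The index-level settings that actually generate slow-log entries can be updated as in the following sketch; the thresholds, index name, and endpoint are illustrative, and the signed-request setup follows the same pattern as the earlier examples.

```python
import boto3
import requests
from requests_aws4auth import AWS4Auth

REGION = "us-east-1"
ENDPOINT = "https://search-my-domain-abc123.us-east-1.es.amazonaws.com"  # placeholder

creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, REGION, "es",
                   session_token=creds.token)

# WARN-level thresholds that make one index start emitting slow-log entries.
settings = {
    "index.search.slowlog.threshold.query.warn": "5s",
    "index.search.slowlog.threshold.fetch.warn": "1s",
    "index.indexing.slowlog.threshold.index.warn": "10s",
}
requests.put(f"{ENDPOINT}/my-index/_settings", auth=awsauth, json=settings).raise_for_status()

# Setting the thresholds back to -1 stops slow-log generation for this index.
requests.put(
    f"{ENDPOINT}/my-index/_settings", auth=awsauth, json={k: "-1" for k in settings}
).raise_for_status()
```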
/opensearch-service/faqs/;How can I check if my domain’s Elasticsearch version can be upgraded?;In-place version upgrade is available only for domains running Elasticsearch 5.x and above. If your domain is of version 5.x or above, you can run the upgrade eligibility check to validate whether your domain can be upgraded to the desired version. Please refer to our documentation to learn more. /opensearch-service/faqs/;What are the tests done by Amazon OpenSearch Service to validate my domain’s upgrade eligibility?;For a detailed list of the tests we run to validate upgrade eligibility, please refer to our documentation. /opensearch-service/faqs/;Can I update my domain configuration while the version upgrade is in progress?;No. Once the in-place version upgrade has been triggered, you cannot make changes to your domain configuration until the upgrade completes or fails. You can continue reading and writing data while the upgrade is in progress. Also, you can delete the domain, in which case the upgrade is terminated and the domain deleted. /opensearch-service/faqs/;What happens to the automated system snapshot when the in-place version upgrade is in progress?;The version upgrade process automatically takes a snapshot of the system and only starts the actual upgrade if the snapshot succeeds. If the upgrade is in progress when the automated snapshot’s start time is reached, the automated snapshot is skipped for that day and resumes the next day. /opensearch-service/faqs/;How does Amazon OpenSearch Service safeguard against issues that can crop up during version upgrades?;Amazon OpenSearch Service runs a set of tests before triggering the upgrade to check for known issues that can block the upgrade. If no issues are encountered, the service takes a snapshot of the domain and starts the upgrade process if the snapshot is successful. The upgrade is not triggered if there are any issues encountered with any of the steps. /opensearch-service/faqs/;What happens if the system encounters issues while performing the in-place version upgrade?;If the issues encountered are minor and fixable, Amazon OpenSearch Service automatically tries to address them and unblock the upgrade. However, if an issue blocks the upgrade, the service reverts to the snapshot that was taken before the upgrade and logs the error. For more details on viewing the logs from the upgrade progress, please refer to our documentation. /opensearch-service/faqs/;Can I view the history of upgrades on my domain?;Yes. You can view the upgrade logs from the AWS console or request them using the CLI or SDKs. Please refer to our documentation for more details. /opensearch-service/faqs/;Can I pause or cancel the version upgrade after it has been triggered?;No. After the upgrade has been triggered, it cannot be paused or cancelled until it either completes or fails. /opensearch-service/faqs/;Can I run in-place version upgrade on multiple domains in parallel?;Yes. However, if you want to keep all of your domains on the same version, we recommend running the upgrade eligibility check on all domains before upgrading them. This extra step can help catch issues with one domain that might not be present on others. /opensearch-service/faqs/;How long does the in-place version upgrade take?;Depending on the amount of data and the size of the cluster, upgrades can take anywhere from a few minutes to a few hours to complete. /opensearch-service/faqs/;Can I just upgrade the domain quickly without retaining any of the data?;No.
With in-place version upgrade, all the data in your cluster is also restored as part of the upgrade process. If you only wish to upgrade the domain alone, you can take a snapshot of your data, delete all your indexes from the domain and then trigger an in-place version upgrade. Alternatively, you can create a separate domain with the newer version and then restore your data to that domain. /opensearch-service/faqs/;Can I downgrade to previous version if I’m not comfortable with the new version?;No. If you need to downgrade to an older version, contact AWS Support to restore the automatic, pre-upgrade snapshot on a new domain. If you took a manual snapshot of the original domain, you can perform this step yourself. /opensearch-service/faqs/;How will I be charged and billed for my use of Amazon OpenSearch Service?;You pay only for what you use, and there are no minimum or setup fees. You are billed based on: /opensearch-service/faqs/;When does billing of my Amazon OpenSearch Service domain begin and end?;Billing commences for an Amazon OpenSearch Service instance as soon as the instance is available. Billing continues until the Amazon OpenSearch Service instance terminates, which would occur upon deletion or in the event of instance failure. /opensearch-service/faqs/;What defines billable instance hours for Amazon OpenSearch Service?;Amazon OpenSearch Service instance hours are billed for each hour your instance is running in an available state. If you no longer wish to be charged for your Amazon OpenSearch Service instance, you must delete the domain to avoid being billed for additional instance hours. Partial Amazon OpenSearch Service instance hours consumed are billed as full hours. /opensearch-service/faqs/;What is a Reserved Instance (RI)?;Amazon OpenSearch Service Reserved Instances give you the option to reserve an instance for a one- or three-year term, and in turn receive significant savings compared to the On-Demand Instance pricing. /opensearch-service/faqs/;How are Reserved Instances different from On-Demand Instances?;Functionally, Reserved Instances and On-Demand Instances are exactly the same. The only difference is how your instance(s) are billed. With Reserved Instances, you purchase a one- or three-year reservation and receive a lower effective hourly usage rate (compared to On-Demand Instances) for the duration of the term. Unless you purchase Reserved Instances in a Region, all instances in that Region are billed at On-Demand Instance hourly rates. /opensearch-service/faqs/;What are the payment options for Reserved Instances?;Three options are available: /opensearch-service/faqs/;How do I purchase Reserved Instances?;"You purchase Reserved Instances in the ""Reserved Instance"" section of the AWS Management Console for Amazon OpenSearch Service. Alternatively, you can use the Amazon OpenSearch Service API or AWS Command Line Interface to list and purchase Reserved Instances." /opensearch-service/faqs/;Are Reserved Instances specific to an Availability Zone?;Amazon OpenSearch Service Reserved Instances are purchased for a Region rather than for a specific Availability Zone. After you purchase a Reserved Instance for a Region, the discount applies to matching usage in any Availability Zone within that Region. /opensearch-service/faqs/;How many Reserved Instances can I purchase?;You can procure up to 100 Reserved Instances in a single purchase. If you need more Reserved Instances, you need to place more purchase requests. 
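Returning to the in-place upgrade FAQs above, here is a boto3 sketch that runs the upgrade eligibility check as a dry run and then triggers the upgrade; the domain name and target version are placeholders.

```python
import boto3

opensearch = boto3.client("opensearch")
DOMAIN = "my-logs-domain"   # placeholder
TARGET = "OpenSearch_1.0"   # placeholder target version

# Dry run: validate the domain against the target version without upgrading it.
opensearch.upgrade_domain(DomainName=DOMAIN, TargetVersion=TARGET, PerformCheckOnly=True)

# Once the eligibility check has completed successfully (poll GetUpgradeStatus),
# trigger the actual in-place upgrade and follow its progress.
opensearch.upgrade_domain(DomainName=DOMAIN, TargetVersion=TARGET)
status = opensearch.get_upgrade_status(DomainName=DOMAIN)
print(status["UpgradeStep"], status["StepStatus"])
```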
/opensearch-service/faqs/;Do Reserved Instances include a capacity reservation?;Amazon OpenSearch Service Reserved Instances are purchased for a Region rather than for a specific Availability Zone. Hence, they are not capacity reservations. Even if capacity is limited in one Availability Zone, Reserved Instances can still be purchased in the Region. The discount applies to matching usage in any Availability Zone within that Region. /opensearch-service/faqs/;What if I have an existing On-Demand Instance that I’d like to convert to a Reserved Instance?;Simply purchase a Reserved Instance of the same type as the existing On-Demand Instance. If the Reserved Instance purchase succeeds, Amazon OpenSearch Service automatically applies the new hourly usage charge for the duration of your reservation. /opensearch-service/faqs/;If I sign up for a Reserved Instance, when does the term begin? What happens to my Reserved Instance when the term ends?;Pricing changes and the reservation term associated with your Reserved Instance become active after your request is received and the payment authorization is processed. If the one-time payment (if applicable) or new hourly rate (if applicable) cannot be successfully authorized by the next billing period, the discounted price does not take effect and your term does not begin. You can follow the status of your reservation using the console, API, or CLI. For more details, refer to our documentation. /opensearch-service/faqs/;How do I control which instances are billed at the Reserved Instance rate?;When computing your bill, our system automatically applies your reservation(s) such that all eligible instances are charged at the lower hourly Reserved Instance rate. Amazon OpenSearch Service does not distinguish between On-Demand and Reserved Instances while operating Amazon OpenSearch Service domains. /opensearch-service/faqs/;If I scale my Reserved Instance up or down, what happens to my reservation?;Each Reserved Instance is associated with the instance type and Region that you picked for it. If you change the instance type in the Region where you have the Reserved Instance, you will not receive discounted pricing. You must ensure that your reservation matches the instance type you plan to use. For more details, please refer to the Amazon OpenSearch Service Reserved Instance documentation. /opensearch-service/faqs/;Can I move a Reserved Instance from one Region or Availability Zone to another?;Each Reserved Instance is associated with a specific Region, which is fixed for the lifetime of the reservation and cannot be changed. Each Reserved Instance can, however, be used in any of the Availability Zones within the associated Region. /opensearch-service/faqs/;Are Reserved Instances applicable if I use multiple Availability Zones?;A Reserved Instance is for an AWS Region and can be used in any of the Availability Zones in that Region. /opensearch-service/faqs/;Are Reserved Instances available for both Master nodes and Data nodes?;Yes. Amazon OpenSearch Service does not differentiate between Master and Data nodes when applying Reserved Instance discounts. /opensearch-service/faqs/;Can I cancel a Reserved Instance?;No, you cannot cancel your Reserved Instances, and the one-time payment (if applicable) and discounted hourly usage rate (if applicable) are not refundable. Also, you cannot transfer the Reserved Instance to another account. You must pay for every hour during your Reserved Instance’s term, regardless of your usage.
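And a sketch of listing and purchasing a Reserved Instance through the API, matching the purchase FAQ above; the instance type, duration, and reservation name are illustrative.

```python
import boto3

opensearch = boto3.client("opensearch")

# Find a one-year offering that matches the instance type used by the domain.
offerings = opensearch.describe_reserved_instance_offerings()["ReservedInstanceOfferings"]
offering = next(
    o for o in offerings
    if o["InstanceType"] == "r6g.large.search" and o["Duration"] == 31536000  # seconds in one year
)

# Purchase three reserved instances under that offering.
opensearch.purchase_reserved_instance_offering(
    ReservedInstanceOfferingId=offering["ReservedInstanceOfferingId"],
    ReservationName="r6g-large-one-year",
    InstanceCount=3,
)
```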
/opensearch-service/faqs/;If I purchase a Reserved Instance from a payer (master) account, is it accessible to all the member accounts?;Yes. Reserved Instance pricing and application follows the policies defined for consolidated billing on AWS. More details can be found here. /opensearch-service/faqs/;If AWS reduces prices of On-Demand Instances for Amazon OpenSearch Service, will the amount I pay for my current Reserved Instances change?;No. The price you pay for already-purchased Reserved Instances does not change for the term of the reservation. /opensearch-service/faqs/;Can I sell my Reserved Instances on the Reserved Instance Marketplace?;No. Reserved Instances purchased on Amazon OpenSearch Service cannot be sold on the Reserved Instance Marketplace.
/opensearch-service/faqs/;What does the Amazon OpenSearch Service SLA guarantee?;Our Amazon OpenSearch Service SLA guarantees a Monthly Uptime Percentage of at least 99.9% for Amazon OpenSearch Service. /opensearch-service/faqs/;How do I know if I qualify for an SLA Service Credit?;You are eligible for an SLA credit for Amazon OpenSearch Service under the Amazon OpenSearch Service SLA if multi-AZ domains on Amazon OpenSearch Service have a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle. /opensearch-service/faqs/;What is cross-cluster search?;Cross-cluster search is an Elasticsearch and OpenSearch feature that enables performing queries and aggregation across two connected clusters. It works by setting up a lightweight unidirectional connection between participating clusters. /opensearch-service/faqs/;What are the minimum requirements for a domain to participate in cross-cluster search?;Domains participating in cross-cluster search need to meet the following criteria: /opensearch-service/faqs/;What are the instance types that support cross-cluster search?;Cross-cluster search is currently supported on the following instance types: /opensearch-service/faqs/;What are the instance types that do not support cross-cluster search?;Cross-cluster search is not supported on the t2 and m3 family instances due to technical limitations. /opensearch-service/faqs/;Can domains in two different AWS accounts participate in cross-cluster search?;Yes. Participating domains can belong to two different AWS accounts. /opensearch-service/faqs/;Can domains in two different AWS regions participate in cross-cluster search?;No. /opensearch-service/faqs/;How can I start using cross-cluster search?;To get started with cross-cluster search, follow the documentation here. /opensearch-service/faqs/;What is cross-cluster replication?;Cross-cluster replication is a capability that allows Amazon OpenSearch Service customers to automate copying and synchronizing indices from one cluster to another at low latency, in the same or different AWS Regions. /opensearch-service/faqs/;What are the minimum requirements for a domain to participate in cross-cluster replication?;Domains participating in cross-cluster replication need to meet the following criteria: /opensearch-service/faqs/;Can domains in two different AWS Regions participate in cross-cluster replication?;Yes. Domains in two different AWS Regions can participate in cross-cluster replication. /opensearch-service/faqs/;Does cross-cluster replication support Ultrawarm and Cold Storage?;No. The current implementation of cross-cluster replication does not support UltraWarm or Cold Storage. /opensearch-service/faqs/;What are the charges for cross-cluster replication?;You pay standard AWS data transfer charges for the data transferred in and out of Amazon OpenSearch Service. /opensearch-service/faqs/;Why should I use Trace Analytics?;Developers and IT operations teams use Trace Analytics to find and fix performance problems in their distributed applications. By adding trace data to the existing log analytics capabilities of Amazon OpenSearch Service, customers can use the same service to both isolate the source of performance problems and diagnose their root cause.
In addition, with the support for the OpenTelemetry standard, Trace Analytics supports integration with Jaeger and Zipkin SDKs, two popular open source distributed tracing systems, which allows developers to continue using these SDKs and not have to re-instrument their applications. /opensearch-service/faqs/;What data sources does Trace Analytics support?;Trace Analytics today supports the collection of trace data from application libraries and SDKs that are compatible with the open source OpenTelemetry Collector, including Jaeger, Zipkin, and X-Ray SDKs. Trace Analytics also integrates with AWS Distro for OpenTelemetry, which is a distribution of OpenTelemetry APIs, SDKs, and agents/collectors. It is a performant and secure distribution of OpenTelemetry components that has been tested for production use and is supported by AWS. Customers can use AWS Distro for OpenTelemetry to collect traces and metrics for multiple monitoring solutions, including Amazon OpenSearch Service and AWS X-Ray for trace data and Amazon CloudWatch for metrics. /opensearch-service/faqs/;Why did the name change to Amazon OpenSearch Service from Amazon Elasticsearch Service?;We announced the OpenSearch project, a community-driven, open source fork of Elasticsearch and Kibana, on April 12, 2021. We committed to making a long-term investment in OpenSearch to ensure users continue to have a secure, high-quality, fully open source search and analytics suite with a rich roadmap of new and innovative functionality. This project includes OpenSearch (derived from Elasticsearch 7.10.2) and OpenSearch Dashboards (derived from Kibana 7.10.2). We launched version 1.0 of OpenSearch on July 12, 2021. As part of our long-term commitment to OpenSearch, we added support for OpenSearch 1.0 on the managed service on September 7, 2021 and changed the name from Amazon Elasticsearch Service to Amazon OpenSearch Service. Along with OpenSearch 1.0, we continue to support legacy Elasticsearch versions until 7.10 on the service. Aside from the name change, you can rest assured that we will continue to deliver the same great experience without any impact to ongoing operations, development methodology, or business use. Learn more about OpenSearch here: https://opensearch.org/. /opensearch-service/faqs/;Do I, as a customer, have to take any action as part of this name change?;"We have strived to make this name change as seamless as possible for you. There are aspects, such as the new SDK/configuration APIs, that require your action to ensure you derive the best benefits from the service. While the existing SDK will continue to work from a compatibility perspective, any new functionality that requires new configuration APIs will only be implemented in the new SDK. Hence, we recommend that you move to the new SDK. In addition, irrespective of the new SDK, we strongly recommend that you move your existing IAM policies to use the renamed configuration APIs. As of now, your existing IAM policies will continue to work with the old API definition. However, we will move over to the new API-based permission validation and we will eventually require you to use the new APIs in your policies (specifically for the APIs where there is a name change; e.g. CreateElasticsearchDomain to CreateDomain). Please see the documentation for more details." /opensearch-service/faqs/;Do I have to move to the new SDK to upgrade to OpenSearch 1.0?;No. From a backward compatibility perspective, we will ensure that your existing setup continues to work with OpenSearch 1.0. 
However, we recommend that you eventually move to the latest SDK for a cleaner and up-to-date experience, as mentioned above. /opensearch-service/faqs/;Are there any changes to pricing with this name change?;No. There are no changes to pricing. /opensearch-service/faqs/;I'm using the Elasticsearch engine in Amazon OpenSearch Service. Why should I upgrade to the OpenSearch 1.x engine? What are the benefits for me?;Upgrading to OpenSearch 1.x ensures your search infrastructure is built on a growing and dynamic Apache-Licensed open source project and gives you access to a wealth of innovative improvements and features available in OpenSearch 1.2 (as of this writing). Features such as enterprise-grade security, alerting, data-lifecycle management, observability, ML-based anomaly detection, and more are all part of OpenSearch Service, with no additional licensing fees. /opensearch-service/faqs/;Will I have downtime if I upgrade?;We use a blue/green (BG) deployment process during upgrade. During a BG, the service adds nodes to the OpenSearch Service cluster in the new configuration and version, migrates data from the old nodes, and drops the old nodes when the data migration is complete. During a BG, search and indexing APIs are available and function normally. Although BG is designed not to interfere with query and indexing requests, some changes (especially those involving changes to security-related settings) can cause dashboards to be unavailable during the period of change. /opensearch-service/faqs/;Is AWS deprecating older versions of Elasticsearch Service?;AWS maintains 19 versions of Apache-2.0-licensed Elasticsearch. None of these versions are deprecated or planned for deprecation at this time. /opensearch-service/faqs/;Will the upgrade trigger a BG?  If not, what is the process for upgrading our nodes?;Yes, the upgrade will trigger a BG deployment process. Please review the upgrade preparation and steps here. /opensearch-service/faqs/;I want to move to Amazon OpenSearch Service 1.x to take advantage of AWS Graviton2 instances, but I am locked in with my existing reserved instances (RIs).  How can you help?;Please work with your AWS account team for information based on your specific situation with RIs. /opensearch-service/faqs/;What should I plan for before initiating an upgrade to Amazon OpenSearch Service 1.x or greater?;The OpenSearch project 1.0 is a fork of open source Elasticsearch 7.10.2. It is wire-compatible with Elasticsearch 7.10--you don’t need to change your usage. To migrate, you can upgrade your domain to version Elasticsearch 7.10 from any prior version in the 6.x and 7.x series, take a snapshot, and restore that snapshot to a domain running OpenSearch Service 1.x. Some clients or tools include version checks that may cause the client or tool to not work with OpenSearch Service. When you upgrade, enable compatibility mode to work around these version checks. /opensearch-service/faqs/;I'm running Elasticsearch version 5.x or earlier. What’s my best upgrade path?;Elasticsearch 5.x indices are not compatible with Elasticsearch 7.10 or OpenSearch 1.x. You must create a new index and load data from your source. If you are running a log analytics workload, you can evaluate whether your data retention strategy supports running in parallel while you build up a full data set on the new domain. 
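The snapshot-and-restore migration path described above can be scripted against the domain's _snapshot REST API. A minimal sketch using the requests and requests-aws4auth libraries; the domain endpoint, S3 bucket, and IAM role ARN are placeholders, and the repository-registration request must be signed with SigV4 credentials that are allowed to pass the snapshot role.

import boto3
import requests
from requests_aws4auth import AWS4Auth

region = 'us-east-1'  # placeholder region
creds = boto3.Session().get_credentials()
awsauth = AWS4Auth(creds.access_key, creds.secret_key, region, 'es',
                   session_token=creds.token)

host = 'https://my-old-domain.us-east-1.es.amazonaws.com'  # placeholder endpoint
repo = '_snapshot/migration-repo'

# Register a manual snapshot repository backed by S3 (bucket and role are placeholders).
payload = {
    'type': 's3',
    'settings': {
        'bucket': 'my-snapshot-bucket',
        'region': region,
        'role_arn': 'arn:aws:iam::123456789012:role/OpenSearchSnapshotRole',
    },
}
print(requests.put(f'{host}/{repo}', auth=awsauth, json=payload).text)

# Take a manual snapshot of all indices; restore it on the new OpenSearch domain
# by registering the same repository there and calling the _restore API.
print(requests.put(f'{host}/{repo}/pre-upgrade-snapshot', auth=awsauth).text)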
/opensearch-service/faqs/;Are there any partners that can help me with my upgrade?;Yes, please contact opensearchmigration-si-support@amazon.com to request a list of partners for your region, industry, and project complexity. AWS Partner Network (APN) partners are trained and have the experience to assist you with upgrades. /opensearch-service/faqs/;Will Amazon OpenSearch Service remain compatible with Elasticsearch in the future? What’s the plan for the future?;OpenSearch 1.0 is a fork of Elasticsearch 7.10.2. OpenSearch and Elasticsearch are compatible. If you enable compatibility mode, Elasticsearch clients are also compatible with OpenSearch 1.0. /opensearch-service/faqs/;What is Amazon OpenSearch Serverless?;Amazon OpenSearch Serverless provides: automatic provisioning and scaling to deliver consistently fast data ingestion rates and millisecond response times during changing usage patterns and application demand; support for production workloads with redundancy for AZ outages and infrastructure failures; the same data durability as Amazon S3; and pricing where you pay only for the resources consumed by your workload. /opensearch-service/faqs/;How does OpenSearch Serverless work with other AWS services?;OpenSearch Serverless has enhanced security features enabled by default. All data is encrypted at rest, with a collection-level option for you to use either a service-managed key or your own key through AWS KMS. Access to the collections is controlled through IAM, VPC security groups, and SAML 2.0. OpenSearch Serverless supports hierarchical data access policies where you can configure policies at the account, collection, and index levels. You can also configure role-based access control for your collections and indexes. /opensearch-service/faqs/;Which security features does OpenSearch Serverless support?;When system resource limits such as CPU, memory, and disk in the ingestion or search nodes are breached, or when it notices hot shards processing large amounts of read or write requests, OpenSearch Serverless horizontally scales out nodes in response to increased workload demand. Similarly, when the resource utilization falls below a certain threshold, OpenSearch Serverless will automatically and gradually scale in the resources without impacting the performance. /quicksight/resources/faqs/;What is Amazon QuickSight?;"Amazon QuickSight is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy for all employees within an organization to build visualizations, perform ad-hoc analysis, and quickly get business insights from their data, anytime, on any device. Upload CSV and Excel files; connect to SaaS applications like Salesforce; access on-premises databases like SQL Server, MySQL, and PostgreSQL; and seamlessly discover your AWS data sources such as Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon Athena, and Amazon S3. QuickSight enables organizations to scale their business analytics capabilities to hundreds of thousands of users, and delivers fast and responsive query performance by using a robust in-memory engine (SPICE)." /quicksight/resources/faqs/;How is Amazon QuickSight different from traditional Business Intelligence (BI) solutions?;Traditional BI solutions often require teams of data engineers to spend months building complex data models before generating a report. They typically lack interactive ad-hoc data exploration and visualization, limiting users to canned reports and pre-selected queries.
Traditional BI solutions also require significant up-front investment in complex and costly hardware and software, and then require customers to invest in even more infrastructure to maintain fast query performance as database sizes grow. This cost and complexity make it difficult for companies to enable analytics solutions across their organizations. Amazon QuickSight has been designed to solve these problems by bringing the scale and flexibility of the AWS Cloud to business analytics. Unlike traditional BI or data discovery solutions, getting started with Amazon QuickSight is simple and fast. When you log in, Amazon QuickSight seamlessly discovers your data sources in AWS services such as Amazon Redshift, Amazon RDS, Amazon Athena, and Amazon Simple Storage Service (Amazon S3). You can connect to any of the data sources discovered by Amazon QuickSight and get insights from this data in minutes. You can choose for Amazon QuickSight to keep the data in SPICE up-to-date as the data in the underlying sources change. SPICE supports rich data discovery and business analytics capabilities to help customers derive valuable insights from their data without worrying about provisioning or managing infrastructure. Organizations pay a low monthly fee for each Amazon QuickSight user, eliminating the cost of long-term licenses. With Amazon QuickSight, organizations can deliver rich business analytics functionality to all employees without incurring a huge cost upfront. /quicksight/resources/faqs/;What is SPICE?;"Amazon QuickSight is built with ""SPICE"" – a Super-fast, Parallel, In-memory Calculation Engine. Built from the ground up for the cloud, SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations and machine code generation to run interactive queries on large datasets and get rapid responses. SPICE supports rich calculations to help you derive valuable insights from your analysis without worrying about provisioning or managing infrastructure. Data in SPICE is persisted until it is explicitly deleted by the user. SPICE also automatically replicates data for high availability and enables QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources." /quicksight/resources/faqs/;How can I get started with Amazon QuickSight?;To get started, sign up for Amazon QuickSight; the free trial includes 4 authors free for 30 days. /quicksight/resources/faqs/;Can I use Amazon QuickSight on my mobile device?;QuickSight mobile apps (available on iOS and Android) give instant access to your data and insights for you to make decisions on the go. Browse, search and interact with your dashboards. Add dashboards to Favorites for quick and easy access. Explore your data with drill downs, filtering and more. You can also use a web browser on any mobile device to access Amazon QuickSight. /quicksight/resources/faqs/;On which browsers is Amazon QuickSight supported?;Amazon QuickSight supports the latest versions of Mozilla Firefox, Chrome, Safari, Internet Explorer version 10 and above, and Edge. /quicksight/resources/faqs/;Which data sources does Amazon QuickSight support?;You can connect to AWS data sources including Amazon RDS, Amazon Aurora, Amazon Redshift, Amazon Athena and Amazon S3. You can also upload Excel spreadsheets or flat files (CSV, TSV, CLF, and ELF), connect to on-premises databases like SQL Server, MySQL, and PostgreSQL, and import data from SaaS applications like Salesforce.
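Data source connections such as those listed above can also be registered programmatically. A minimal boto3 sketch that creates an Amazon Athena data source; the account ID, data source ID, and workgroup are placeholder assumptions.

import boto3

qs = boto3.client('quicksight', region_name='us-east-1')  # placeholder region

# Register an Amazon Athena data source in QuickSight.
qs.create_data_source(
    AwsAccountId='123456789012',        # placeholder account ID
    DataSourceId='my-athena-source',    # placeholder ID
    Name='My Athena Source',
    Type='ATHENA',
    DataSourceParameters={'AthenaParameters': {'WorkGroup': 'primary'}},
)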
/quicksight/resources/faqs/;Can I connect Amazon QuickSight to my Amazon EC2 or on-premises database?;Yes. In order to connect Amazon QuickSight to an Amazon EC2 or on-premises database, you need to add the Amazon QuickSight IP range to the authorized list in your hosted database. /quicksight/resources/faqs/;How do I upload my data files into Amazon QuickSight?;You can upload XLSX, CSV, TSV, CLF, XLF data files directly from the Amazon QuickSight website. You can also upload them to an Amazon S3 bucket and point Amazon QuickSight to the Amazon S3 object. /quicksight/resources/faqs/;How do I access my data in AWS data sources?;Amazon QuickSight seamlessly discovers your AWS data sources that are available in your account with your approval. You can immediately start browsing the data and building visualizations. You can also explicitly connect to other AWS data sources that are not in your account or in a different region by providing connection details for those sources. /quicksight/resources/faqs/;My source data is not in a clean format. How do I format and transform the data before visualizing?;Amazon QuickSight lets you prepare data that is not ready for visualization. Select the “Edit/Preview Data” button in the connection dialog. Amazon QuickSight supports various functions to format and transform your data. You can alias data fields and change data types. You can subset your data using built-in filters and perform database join operations using drag and drop. You can also create calculated fields using mathematical operations and built-in functions such as conditional statements and string, numerical, and date functions. /quicksight/resources/faqs/;How much data can I analyze with Amazon QuickSight?;With Amazon QuickSight you don’t need to worry about scale. You can seamlessly grow your data from a few hundred megabytes to many terabytes of data without managing any infrastructure. /quicksight/resources/faqs/;How does QuickSight’s integration with SageMaker work?;The first step is to connect the data source from which you want to pull data. Once you’re connected to a data source, select the “Augment with SageMaker” option. From there, you pick the model you want to use from a list of SageMaker models in your AWS account and provide the schema file, which is a JSON-formatted file that contains the input, output, and run-time settings. Review the input schema mapping with the columns in your data set. Once you’re done, you can execute this job and start running the inference. /quicksight/resources/faqs/;Does QuickSight leverage SageMaker models to perform inference on incremental data or the full data every time it runs?;QuickSight does inference on the full data every time it refreshes. /quicksight/resources/faqs/;How do I manage user access for Amazon QuickSight?;When you create a new Amazon QuickSight account, you have administrative privileges by default. If you are invited to become an Amazon QuickSight user, whoever invites you assigns you either an ADMIN or a USER role. If you have an ADMIN role, you can create and delete user accounts, and purchase annual subscriptions and SPICE capacity, in addition to using the service. /quicksight/resources/faqs/;How do I create an analysis with Amazon QuickSight?;Creating an analysis is simple. Amazon QuickSight seamlessly discovers data in popular AWS data repositories within your AWS account. Simply point Amazon QuickSight to one of the discovered data sources.
To connect to another AWS data source that is not in your AWS account or in a different region, you can provide the connection details of the source. Then, select a table and start analyzing your data. You can also upload spreadsheets and CSV files and use Amazon QuickSight to analyze your files. To create a visualization, start by selecting the data fields you want to analyze, or drag the fields directly on to the visual canvas, or a combination of both actions. Amazon QuickSight will automatically select the appropriate visualization to display based on the data you’ve selected. /quicksight/resources/faqs/;How does Amazon QuickSight select the most appropriate visualization for my data?;Amazon QuickSight has an innovative technology called AutoGraph that allows it to select the most appropriate visualizations based on the properties of the data, such as cardinality and data type. The visualization types are chosen to best reveal the data and relationships in an effective way. /quicksight/resources/faqs/;How do I create a dashboard?;Dashboards are a collection of visualizations, tables, and other visual displays arranged and visible together. With Amazon QuickSight, you can compose a dashboard within an analysis by arranging the layouts and size of visualizations and then publish the dashboard to an audience within your organization. /quicksight/resources/faqs/;What types of visualizations are supported in Amazon QuickSight?;Amazon QuickSight supports assorted visualizations that facilitate different analytical approaches: /quicksight/resources/faqs/;What is a suggested visualization? How does Amazon QuickSight generate suggestions?;Amazon QuickSight comes with a built-in suggestion engine that provides you with suggested visualizations based on the properties of the underlying datasets. Suggestions serve as possible first or next steps of an analysis and remove the time-consuming task of interrogating and understanding the schema of your data. As you work with more specific data, the suggestions will update to reflect the next steps appropriate to your current analysis. /quicksight/resources/faqs/;What are stories?;Stories are guided tours through specific views of an analysis. They are used to convey key points, a thought process, or the evolution of an analysis for collaboration. You can construct them in Amazon QuickSight by capturing and annotating specific states of the analysis. When readers of the story click on an image in the story, they are then taken into the analysis at that point, where they can explore on their own. /quicksight/resources/faqs/;What type of calculations does Amazon QuickSight enable?;"You can perform typical arithmetic and comparison functions; conditional functions such as if/then; and date, numeric, and string calculations." /quicksight/resources/faqs/;How can I get sample data to explore in QuickSight?;For your convenience, sample analyses are automatically generated when you create an account in Amazon QuickSight. The raw data can also be downloaded from the links below: /quicksight/resources/faqs/;How is data transmitted to Amazon QuickSight?;You have several options to get your data into Amazon QuickSight: file upload, connect to AWS data sources, connect to external data stores over JDBC/ODBC, or through other API-based data store connectors. /quicksight/resources/faqs/;Can I choose the AWS region to connect to hosted or on-premises databases over JDBC/ODBC?;Yes. For better performance and user interactivity, we advise you to use the AWS Region where your data is stored.
The Amazon QuickSight auto discovery feature detects data sources only within the AWS region of the Amazon QuickSight endpoint to which you are connected. For a list of the supported Amazon QuickSight AWS regions, please visit the Regional Products and Services page for all AWS global infrastructure. /quicksight/resources/faqs/;Does Amazon QuickSight support multi-factor authentication?;Yes. You can enable multi-factor authentication (MFA) for your AWS account via the AWS Management console. /quicksight/resources/faqs/;How do I connect my VPC to Amazon QuickSight?;If your VPC has been set up with public connectivity, you can add Amazon QuickSight’s IP address range to your database instances’ security group rules to enable traffic flow into your VPC and database instances. /quicksight/resources/faqs/;What is row-level security?;Row-level security (RLS) enables QuickSight dataset owners to control access to data at row granularity based on permissions associated with the user interacting with the data. With RLS, Amazon QuickSight users only need to manage a single set of data and apply appropriate row-level dataset rules to it. All associated dashboards and analyses will enforce these rules, simplifying dataset management and removing the need to maintain multiple datasets for users with different data access privileges. /quicksight/resources/faqs/;What does private VPC access in the context of Amazon QuickSight mean?;If you have data in AWS (perhaps in Amazon Redshift, Amazon Relational Database Service (RDS), or on EC2) or on-premises in Teradata or SQL Server on servers without public connectivity, this feature is for you. Private VPC (Virtual Private Cloud) Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows you to use AWS Direct Connect to create a secure, private link with your on-premises resources. /quicksight/resources/faqs/;How do I share an analysis, dashboard, or story in Amazon QuickSight?;You can share an analysis, dashboard, or story using the share icon from the QuickSight service interface. You will be able to select the recipients (email address, username or group name), permission levels, and other options before sharing the content with others. /datapipeline/faqs/;What is AWS Data Pipeline?;AWS Data Pipeline is a web service that makes it easy to schedule regular data movement and data processing activities in the AWS cloud. AWS Data Pipeline integrates with on-premise and cloud-based storage systems to allow developers to use their data when they need it, where they want it, and in the required format. AWS Data Pipeline allows you to quickly define a dependent chain of data sources, destinations, and predefined or custom data processing activities called a pipeline. Based on a schedule you define, your pipeline regularly performs processing activities such as distributed data copy, SQL transforms, MapReduce applications, or custom scripts against destinations such as Amazon S3, Amazon RDS, or Amazon DynamoDB. By executing the scheduling, retry, and failure logic for these workflows as a highly scalable and fully managed service, Data Pipeline ensures that your pipelines are robust and highly available. /datapipeline/faqs/;What can I do with AWS Data Pipeline?;Using AWS Data Pipeline, you can quickly and easily provision pipelines that remove the development and maintenance effort required to manage your daily data operations, letting you focus on generating insights from that data. 
Simply specify the data sources, schedule, and processing activities required for your data pipeline. AWS Data Pipeline handles running and monitoring your processing activities on a highly reliable, fault-tolerant infrastructure. Additionally, to further ease your development process, AWS Data Pipeline provides built-in activities for common actions such as copying data between Amazon S3 and Amazon RDS, or running a query against Amazon S3 log data. /datapipeline/faqs/;How is AWS Data Pipeline different from Amazon Simple Workflow Service?;While both services provide execution tracking, handling retries and exceptions, and running arbitrary actions, AWS Data Pipeline is specifically designed to facilitate the specific steps that are common across a majority of data-driven workflows. For example: executing activities after their input data meets specific readiness criteria, easily copying data between different data stores, and scheduling chained transforms. This highly specific focus means that Data Pipeline workflow definitions can be created rapidly, and with no code or programming knowledge. /datapipeline/faqs/;What is a data node?;A data node is a representation of your business data. For example, a data node can reference a specific Amazon S3 path. AWS Data Pipeline supports an expression language that makes it easy to reference data which is generated on a regular basis. For example, you could specify that your Amazon S3 data format is s3://example-bucket/my-logs/logdata-#{scheduledStartTime('YYYY-MM-dd-HH')}.tgz. /datapipeline/faqs/;What is an activity?;An activity is an action that AWS Data Pipeline initiates on your behalf as part of a pipeline. Example activities are EMR or Hive jobs, copies, SQL queries, or command-line scripts. /datapipeline/faqs/;What is a precondition?;A precondition is a readiness check that can be optionally associated with a data source or activity. If a data source has a precondition check, then that check must complete successfully before any activities consuming the data source are launched. If an activity has a precondition, then the precondition check must complete successfully before the activity is run. This can be useful if you are running an activity that is expensive to compute, and should not run until specific criteria are met. /datapipeline/faqs/;What is a schedule?;"Schedules define when your pipeline activities run and the frequency with which the service expects your data to be available. All schedules must have a start date and a frequency; for example, every day starting Jan 1, 2013, at 3pm. Schedules can optionally have an end date, after which time the AWS Data Pipeline service does not execute any activities. When you associate a schedule with an activity, the activity runs on that schedule. When you associate a schedule with a data source, you are telling the AWS Data Pipeline service that you expect the data to be updated on that schedule. For example, if you define an Amazon S3 data source with an hourly schedule, the service expects that the data source contains new files every hour." /datapipeline/faqs/;Does Data Pipeline supply any standard Activities?;Yes, AWS Data Pipeline provides built-in support for the following activities: /datapipeline/faqs/;Does AWS Data Pipeline supply any standard preconditions?;Yes, AWS Data Pipeline provides built-in support for the following preconditions: /datapipeline/faqs/;Can I supply my own custom activities?;Yes, you can use the ShellCommandActivity to run arbitrary Activity logic.
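A schedule, a managed compute resource, and a ShellCommandActivity like the ones described above can be wired together through the AWS Data Pipeline API. A minimal boto3 sketch; the pipeline name, IAM roles, log bucket, and command are placeholder assumptions.

import boto3

dp = boto3.client('datapipeline')

pipeline_id = dp.create_pipeline(name='hourly-echo', uniqueId='hourly-echo-token')['pipelineId']

objects = [
    # Pipeline defaults: cron-style scheduling, roles, and a log location (placeholders).
    {'id': 'Default', 'name': 'Default', 'fields': [
        {'key': 'scheduleType', 'stringValue': 'cron'},
        {'key': 'schedule', 'refValue': 'HourlySchedule'},
        {'key': 'role', 'stringValue': 'DataPipelineDefaultRole'},
        {'key': 'resourceRole', 'stringValue': 'DataPipelineDefaultResourceRole'},
        {'key': 'pipelineLogUri', 'stringValue': 's3://example-bucket/dp-logs/'},
    ]},
    # An hourly schedule object.
    {'id': 'HourlySchedule', 'name': 'HourlySchedule', 'fields': [
        {'key': 'type', 'stringValue': 'Schedule'},
        {'key': 'period', 'stringValue': '1 hour'},
        {'key': 'startDateTime', 'stringValue': '2023-01-01T00:00:00'},
    ]},
    # A Data Pipeline-managed EC2 resource to run the activity on.
    {'id': 'WorkerResource', 'name': 'WorkerResource', 'fields': [
        {'key': 'type', 'stringValue': 'Ec2Resource'},
        {'key': 'instanceType', 'stringValue': 't1.micro'},
        {'key': 'terminateAfter', 'stringValue': '30 Minutes'},
    ]},
    # A ShellCommandActivity running an arbitrary command (placeholder).
    {'id': 'EchoActivity', 'name': 'EchoActivity', 'fields': [
        {'key': 'type', 'stringValue': 'ShellCommandActivity'},
        {'key': 'command', 'stringValue': 'echo "run for #{@scheduledStartTime}"'},
        {'key': 'runsOn', 'refValue': 'WorkerResource'},
    ]},
]

dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
dp.activate_pipeline(pipelineId=pipeline_id)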
/datapipeline/faqs/;Can I supply my own custom preconditions?;Yes, you can use the ShellCommandPrecondition to run arbitrary precondition logic. /datapipeline/faqs/;Can you define multiple schedules for different activities in the same pipeline?;Yes, simply define multiple schedule objects in your pipeline definition file and associate the desired schedule to the correct activity via its schedule field. This allows you to define a pipeline in which, for example, log files are stored in Amazon S3 each hour to drive generation of an aggregate report one time per day. /datapipeline/faqs/;What happens if an activity fails?;"An activity fails if all of its activity attempts return with a failed state. By default, an activity retries three times before entering a hard failure state. You can increase the number of automatic retries to 10; however, the system does not allow indefinite retries. After an activity exhausts its attempts, it triggers any configured onFailure alarm and will not try to run again unless you manually issue a rerun command via the CLI, API, or console button." /datapipeline/faqs/;How do I add alarms to an activity?;You can define Amazon SNS alarms to trigger on activity success, failure, or delay. Create an alarm object and reference it in the onFail, onSuccess, or onLate slots of the activity object. /datapipeline/faqs/;Can I manually rerun activities that have failed?;Yes. You can rerun a set of completed or failed activities by resetting their state to SCHEDULED. This can be done by using the Rerun button in the UI or modifying their state in the command line or API. This will immediately schedule a re-check of all activity dependencies, followed by the execution of additional activity attempts. Upon subsequent failures, the Activity will perform the original number of retry attempts. /datapipeline/faqs/;On what resources are activities run?;AWS Data Pipeline activities are run on compute resources that you own. There are two types of compute resources: AWS Data Pipeline–managed and self-managed. AWS Data Pipeline–managed resources are Amazon EMR clusters or Amazon EC2 instances that the AWS Data Pipeline service launches only when they're needed. Resources that you manage are longer-running and can be any resource capable of running the AWS Data Pipeline Java-based Task Runner (on-premises hardware, a customer-managed Amazon EC2 instance, etc.). /datapipeline/faqs/;Will AWS Data Pipeline provision and terminate AWS Data Pipeline-managed compute resources for me?;Yes, compute resources will be provisioned when the first activity for a scheduled time that uses those resources is ready to run and those instances will be terminated when the final activity that uses the resources has completed successfully or failed. /datapipeline/faqs/;Can multiple compute resources be used on the same pipeline?;Yes, simply define multiple cluster objects in your definition file and associate the cluster to use for each activity via its runsOn field. This allows pipelines to combine AWS and on-premises resources, or to use a mix of instance types for their activities – for example, you may want to use a t1.micro to execute a quick script cheaply, but later on the pipeline may have an Amazon EMR job that requires the power of a cluster of larger instances. /datapipeline/faqs/;Can I execute activities on on-premises resources, or AWS resources that I manage?;Yes.
To enable running activities using on-premises resources, AWS Data Pipeline supplies a Task Runner package that can be installed on your on-premises hosts. This package continuously polls the AWS Data Pipeline service for work to perform. When it’s time to run a particular activity on your on-premises resources, for example, executing a DB stored procedure or a database dump, AWS Data Pipeline will issue the appropriate command to the Task Runner. In order to ensure that your pipeline activities are highly available, you can optionally assign multiple Task Runners to poll for a given job. This way, if one Task Runner becomes unavailable, the others will simply pick up its work. /datapipeline/faqs/;How do I install a Task Runner on my on-premises hosts?;You can install the Task Runner package on your on-premises hosts using the following steps: /datapipeline/faqs/;How can I get started with AWS Data Pipeline?;To get started with AWS Data Pipeline, simply visit the AWS Management Console and go to the AWS Data Pipeline tab. From there, you can create a pipeline using a simple graphical editor. /datapipeline/faqs/;What can I do with AWS Data Pipeline?;With AWS Data Pipeline, you can schedule and manage periodic data-processing jobs. You can use this to replace simple systems which are currently managed by brittle, cron-based solutions, or you can use it to build complex, multi-stage data processing jobs. /datapipeline/faqs/;Are there Sample Pipelines that I can use to try out AWS Data Pipeline?;Yes, there are sample pipelines in our documentation. Additionally, the console has several pipeline templates that you can use to get started. /datapipeline/faqs/;How many pipelines can I create in AWS Data Pipeline?;By default, your account can have 100 pipelines. /datapipeline/faqs/;Are there limits on what I can put inside a single pipeline?;By default, each pipeline you create can have 100 objects. /datapipeline/faqs/;Can my limits be changed?;Yes. If you would like to increase your limits, simply contact us. /datapipeline/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /glue/faqs/;What is AWS Glue?;AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all the capabilities needed for data integration, so you can start analyzing your data and putting it to use in minutes instead of months. AWS Glue provides both visual and code-based interfaces to make data integration easier. Users can easily find and access data using the AWS Glue Data Catalog. Data engineers and ETL (extract, transform, and load) developers can visually create, run, and monitor ETL workflows with a few clicks in AWS Glue Studio. Data analysts and data scientists can use AWS Glue DataBrew to visually enrich, clean, and normalize data without writing code. /glue/faqs/;What are the main components of AWS Glue?;"AWS Glue consists of a Data Catalog, which is a central metadata repository; an ETL engine that can automatically generate Scala or Python code; a flexible scheduler that handles dependency resolution, job monitoring, and retries; and AWS Glue DataBrew for cleaning and normalizing data with a visual interface.
Together, these automate much of the undifferentiated heavy lifting involved with discovering, categorizing, cleaning, enriching, and moving data, so you can spend more time analyzing your data." /glue/faqs/;When should I use AWS Glue?;You should use AWS Glue to discover properties of the data you own, transform it, and prepare it for analytics. Glue can automatically discover both structured and semi-structured data stored in your data lake on Amazon S3, data warehouse in Amazon Redshift, and various databases running on AWS. It provides a unified view of your data via the Glue Data Catalog that is available for ETL, querying and reporting using services like Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Glue automatically generates Scala or Python code for your ETL jobs that you can further customize using tools you are already familiar with. You can use AWS Glue DataBrew to visually clean up and normalize data without writing code. /glue/faqs/;What data sources does AWS Glue support?;AWS Glue natively supports data stored in Amazon Aurora, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon RDS for SQL Server, Amazon Redshift, Amazon DynamoDB, and Amazon S3, as well as MySQL, Oracle, Microsoft SQL Server, and PostgreSQL databases in your Virtual Private Cloud (Amazon VPC) running on Amazon EC2. AWS Glue also supports data streams from Amazon MSK, Amazon Kinesis Data Streams, and Apache Kafka. /glue/faqs/;How does AWS Glue relate to AWS Lake Formation?;Lake Formation leverages a shared infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, a common data catalog, and a serverless architecture. While AWS Glue is still focused on these types of functions, Lake Formation encompasses AWS Glue features and provides additional capabilities designed to help build, secure, and manage a data lake. See the AWS Lake Formation pages for more details. /glue/faqs/;What is the AWS Glue Data Catalog?;The AWS Glue Data Catalog is a central repository to store structural and operational metadata for all your data assets. For a given data set, you can store its table definition, physical location, add business-relevant attributes, as well as track how this data has changed over time. /glue/faqs/;How do I get my metadata into the AWS Glue Data Catalog?;AWS Glue provides a number of ways to populate metadata into the AWS Glue Data Catalog. Glue crawlers scan various data stores you own to automatically infer schemas and partition structure and populate the Glue Data Catalog with corresponding table definitions and statistics. You can also schedule crawlers to run periodically so that your metadata is always up-to-date and in sync with the underlying data. Alternatively, you can add and update table details manually by using the AWS Glue Console or by calling the API. You can also run Hive DDL statements via the Amazon Athena Console or a Hive client on an Amazon EMR cluster. Finally, if you already have a persistent Apache Hive Metastore, you can perform a bulk import of that metadata into the AWS Glue Data Catalog by using our import script. /glue/faqs/;What are AWS Glue crawlers?;An AWS Glue crawler connects to a data store, progresses through a prioritized list of classifiers to extract the schema of your data and other statistics, and then populates the Glue Data Catalog with this metadata. Crawlers can run periodically to detect the availability of new data as well as changes to existing data, including table definition changes.
Crawlers automatically add new tables, new partitions to existing tables, and new versions of table definitions. You can customize Glue crawlers to classify your own file types. /glue/faqs/;How do I import data from my existing Apache Hive Metastore to the AWS Glue Data Catalog?;You simply run an ETL job that reads from your Apache Hive Metastore, exports the data to an intermediate format in Amazon S3, and then imports that data into the AWS Glue Data Catalog. /glue/faqs/;Do I need to maintain my Apache Hive Metastore if I am storing my metadata in the AWS Glue Data Catalog?;No. AWS Glue Data Catalog is Apache Hive Metastore compatible. You can point to the Glue Data Catalog endpoint and use it as an Apache Hive Metastore replacement. For more information on how to configure your cluster to use AWS Glue Data Catalog as an Apache Hive Metastore, please read our documentation here. /glue/faqs/;What analytics services use the AWS Glue Data Catalog?;The metadata stored in the AWS Glue Data Catalog can be readily accessed from Glue ETL, Amazon Athena, Amazon EMR, Amazon Redshift Spectrum, and third-party services. /glue/faqs/;What is the AWS Glue Schema Registry?;AWS Glue Schema Registry, a serverless feature of AWS Glue, enables you to validate and control the evolution of streaming data using schemas registered in Apache Avro and JSON Schema data formats, at no additional charge. Through Apache-licensed serializers and deserializers, the Schema Registry integrates with Java applications developed for Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Kinesis Data Streams, Apache Flink, Amazon Kinesis Data Analytics for Apache Flink, and AWS Lambda. When data streaming applications are integrated with the Schema Registry, you can improve data quality and safeguard against unexpected changes using compatibility checks that govern schema evolution. Additionally, you can create or update AWS Glue tables and partitions using Apache Avro schemas stored within the registry. /glue/faqs/;Why should I use AWS Glue Schema Registry?;With the AWS Glue Schema Registry, you can: /glue/faqs/;What data format, client language, and integrations are supported by AWS Glue Schema Registry?;The Schema Registry supports Apache Avro and JSON Schema data formats and Java client applications. We plan to continue expanding support for other data formats and non-Java clients. The Schema Registry integrates with applications developed for Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Kinesis Data Streams, Apache Flink, Amazon Kinesis Data Analytics for Apache Flink, and AWS Lambda. /glue/faqs/;What kinds of evolution rules does AWS Glue Schema Registry support?;The following compatibility modes are available for you to manage your schema evolution: Backward, Backward All, Forward, Forward All, Full, Full All, None, and Disabled. Visit the Schema Registry user documentation to learn more about compatibility rules. /glue/faqs/;How does AWS Glue Schema Registry maintain high availability for my applications?;The Schema Registry storage and control plane is designed for high availability and is backed by the AWS Glue SLA, and the serializers and deserializers leverage best-practice caching techniques to maximize schema availability within clients. /glue/faqs/;Is AWS Glue Schema Registry open-source?;AWS Glue Schema Registry storage is an AWS service, while the serializers and deserializers are Apache-licensed open-source components.
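Crawlers like those described above can be created and scheduled through the AWS Glue API as well as the console. A minimal boto3 sketch; the crawler name, IAM role, database, and S3 path are placeholder assumptions.

import boto3

glue = boto3.client('glue', region_name='us-east-1')  # placeholder region

# Create a crawler that scans an S3 prefix and writes table definitions
# into a Glue Data Catalog database, running hourly.
glue.create_crawler(
    Name='my-logs-crawler',                      # placeholder name
    Role='AWSGlueServiceRole-MyCrawler',         # placeholder IAM role
    DatabaseName='my_catalog_db',                # placeholder database
    Targets={'S3Targets': [{'Path': 's3://example-bucket/logs/'}]},
    Schedule='cron(0 * * * ? *)',
)

# Run it immediately instead of waiting for the schedule.
glue.start_crawler(Name='my-logs-crawler')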
/glue/faqs/;Does AWS Glue Schema Registry provide encryption at rest and in-transit?;Yes, your clients communicate with the Schema Registry via API calls which encrypt data in-transit using TLS encryption over HTTPS. Schemas stored in the Schema Registry are always encrypted at rest using a service-managed KMS key. /glue/faqs/;How can I privately connect to AWS Glue Schema Registry?;You can use AWS PrivateLink to connect your data producer’s VPC to AWS Glue by defining an interface VPC endpoint for AWS Glue. When you use a VPC interface endpoint, communication between your VPC and AWS Glue is conducted entirely within the AWS network. For more information, please visit the user documentation. /glue/faqs/;How can I monitor my AWS Glue Schema Registry usage?;AWS CloudWatch metrics are available as part of CloudWatch’s free tier. You can access these metrics in the CloudWatch Console. Visit the AWS Glue Schema Registry user documentation for more information. /glue/faqs/;Does AWS Glue Schema Registry provide tools to manage user authorization?;Yes, the Schema Registry supports both resource-level permissions and identity-based IAM policies. /glue/faqs/;How do I migrate from an existing schema registry to the AWS Glue Schema Registry?;Steps to migrate from a third-party schema registry to AWS Glue Schema Registry are available in the user documentation. /glue/faqs/;Does AWS Glue have a no-code interface for visual ETL?;Yes. AWS Glue Studio offers a graphical interface for authoring Glue jobs to process your data. After you define the flow of your data sources, transformations and targets in the visual interface, AWS Glue studio will generate Apache Spark code on your behalf. /glue/faqs/;What programming language can I use to write my ETL code for AWS Glue?;You can use either Scala or Python. /glue/faqs/;How can I customize the ETL code generated by AWS Glue?;AWS Glue’s ETL script recommendation system generates Scala or Python code. It leverages Glue’s custom ETL library to simplify access to data sources as well as manage job execution. You can find more details about the library in our documentation. You can write ETL code using AWS Glue’s custom library or write arbitrary code in Scala or Python by using inline editing via the AWS Glue Console script editor, downloading the auto-generated code, and editing it in your own IDE. You can also start with one of the many samples hosted in our Github repository and customize that code. /glue/faqs/;How can I develop my ETL code using my own IDE?;You can create and connect to development endpoints that offer ways to connect your notebooks and IDEs. /glue/faqs/;How can I build end-to-end ETL workflow using multiple jobs in AWS Glue?;In addition to the ETL library and code generation, AWS Glue provides a robust set of orchestration features that allow you to manage dependencies between multiple jobs to build end-to-end ETL workflows. AWS Glue ETL jobs can either be triggered on a schedule or on a job completion event. Multiple jobs can be triggered in parallel or sequentially by triggering them on a job completion event. You can also trigger one or more Glue jobs from an external source such as an AWS Lambda function. /glue/faqs/;How does AWS Glue monitor dependencies?;AWS Glue manages dependencies between two or more jobs or dependencies on external events using triggers. Triggers can watch one or more jobs as well as invoke one or more jobs. 
You can either have a scheduled trigger that invokes jobs periodically, an on-demand trigger, or a job completion trigger. /glue/faqs/;How does AWS Glue handle ETL errors?;AWS Glue monitors job event metrics and errors, and pushes all notifications to Amazon CloudWatch. With Amazon CloudWatch, you can configure a host of actions that can be triggered based on specific notifications from AWS Glue. For example, if you get an error or a success notification from Glue, you can trigger an AWS Lambda function. Glue also provides default retry behavior that will retry all failures three times before sending out an error notification. /glue/faqs/;Can I run my existing ETL jobs with AWS Glue?;Yes. You can run your existing Scala or Python code on AWS Glue. Simply upload the code to Amazon S3 and create one or more jobs that use that code. You can reuse the same code across multiple jobs by pointing them to the same code location on Amazon S3. /glue/faqs/;How can I use AWS Glue to ETL streaming data?;AWS Glue supports ETL on streams from Amazon Kinesis Data Streams, Apache Kafka, and Amazon MSK. Add the stream to the Glue Data Catalog and then choose it as the data source when setting up your AWS Glue job. /glue/faqs/;Do I have to use both AWS Glue Data Catalog and Glue ETL to use the service?;No. While we do believe that using both the AWS Glue Data Catalog and ETL provides an end-to-end ETL experience, you can use either one of them independently without using the other. /glue/faqs/;When should I use AWS Glue Streaming and when should I use Amazon Kinesis Data Analytics?;Both AWS Glue and Amazon Kinesis Data Analytics can be used to process streaming data. AWS Glue is recommended when your use cases are primarily ETL and when you want to run jobs on a serverless Apache Spark-based platform. Amazon Kinesis Data Analytics is recommended when your use cases are primarily analytics and when you want to run jobs on a serverless Apache Flink-based platform. /glue/faqs/;When should I use AWS Glue and when should I use Amazon Kinesis Data Firehose?;Both AWS Glue and Amazon Kinesis Data Firehose can be used for streaming ETL. AWS Glue is recommended for complex ETL, including joining streams, and partitioning the output in Amazon S3 based on the data content. Amazon Kinesis Data Firehose is recommended when your use cases focus on data delivery and preparing data to be processed after it is delivered. /glue/faqs/;Can I see a presentation on using AWS Glue (and AWS Lake Formation) to find matches and deduplicate records?;"Yes, the full recording of the AWS Online Tech Talk, ""Fuzzy Matching and Deduplicating Data with ML Transforms for AWS Lake Formation"" is available here." /glue/faqs/;How can I get started with AWS Glue Data Quality?;To get started, go to Data Quality in the Data Catalog and select a table. Then choose the Data Quality tab to get started. Alternatively, you can set up data quality rules within your pipelines by adding a Data Quality transform on AWS Glue Studio. You can also use APIs to set up data quality rules and run them. /glue/faqs/;How does AWS Glue Data Quality generate recommendations?;AWS Glue Data Quality uses Deequ, an Amazon developed open-source framework that many Amazon teams use to manage the quality of Amazon internal datasets at petabyte scale. One Amazon team uses Deequ to check dataset quality in their 60 PB data lake. Deequ uses Apache Spark to gather data statistics, such as averages, correlations, patterns, and other advanced statistics. 
It then uses these statistics to identify the right set of checks or rules to validate data quality. /glue/faqs/;How can I edit the recommended rules or add new rules?;You can view and edit recommended rules in the Data Catalog. If you are using other AWS services, you can programmatically access your recommendations using the AWS Glue Data Quality API. You can also add new rules in the Data Catalog. /glue/faqs/;How does AWS Glue Data Quality verify that my rules are relevant when data changes?;You can schedule the recommendation process to get new recommendations based on recent data. AWS Glue Data Quality will provide new recommendations based on recent data patterns. /glue/faqs/;What built-in actions are available on AWS Glue Data Quality?;You can use actions to respond to a data quality issue. In the Data Catalog, you can write the metrics to Amazon CloudWatch and set up alerts in CloudWatch to notify you when scores go below a threshold. On AWS Glue Studio, you can fail a job when quality deteriorates, preventing bad data from moving into data lakes. /glue/faqs/;How can I evaluate my data’s quality?;After you create data quality rules in the Data Catalog, you can create a data quality task and run it immediately or schedule it to run at certain intervals. Data quality rules on your pipelines evaluate your data quality as data is brought into your data lake through your pipelines. /glue/faqs/;Where can I view AWS Glue Data Quality scores?;You can make confident data-driven decisions using data quality scores. You can view data quality scores on the Data Quality tab of your table from the Data Catalog. You can view your data pipeline scores on AWS Glue Studio by opening an AWS Glue Studio job and choosing Data Quality. You can configure your data quality tasks to write results to an Amazon Simple Storage Service (S3) bucket. You can then query this data using Amazon Athena or Amazon QuickSight. /glue/faqs/;What is the difference between data quality rules on AWS Glue DataBrew, AWS Glue Data Catalog, and AWS Glue Studio?;Business analysts and data analysts use DataBrew to transform data without writing any code. Data stewards and data engineers use Data Catalog to manage metadata. Data engineers use AWS Glue Studio to author scalable data integration pipelines. These user types must manage data quality in their workflows. Also, data engineers need more technical data quality rules compared to business analysts who write functional rules. Therefore, data quality features are made available in each of these experiences to meet unique user requirements. /glue/faqs/;What is AWS Glue DataBrew?;AWS Glue DataBrew is a visual data preparation tool that makes it easy for data analysts and data scientists to prepare data with an interactive, point-and-click visual interface without writing code. With Glue DataBrew, you can easily visualize, clean, and normalize terabytes, and even petabytes, of data directly from your data lake, data warehouses, and databases, including Amazon S3, Amazon Redshift, Amazon Aurora, and Amazon RDS. AWS Glue DataBrew is generally available today in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo). /glue/faqs/;What types of transformations are supported in AWS Glue DataBrew?;You can choose from over 250 built-in transformations to combine, pivot, and transpose the data without writing code.
AWS Glue DataBrew also automatically recommends transformations such as filtering anomalies, correcting invalid, incorrectly classified, or duplicate data, normalizing data to standard date and time values, or generating aggregates for analyses. For complex transformations, such as converting words to a common base or root word, Glue DataBrew provides transformations that use advanced machine learning techniques such as Natural Language Processing (NLP). You can group multiple transformations together, save them as recipes, and apply the recipes directly to the new incoming data. /glue/faqs/;Can I try AWS Glue DataBrew for free?;Yes. Sign up for an AWS Free Tier account, then visit the AWS Glue DataBrew Management Console, and get started instantly for free. If you are a first-time user of Glue DataBrew, the first 40 interactive sessions are free. Visit the AWS Glue Pricing page to learn more. /glue/faqs/;Do I need to use AWS Glue Data Catalog or AWS Lake Formation to use AWS Glue DataBrew?;No. You can use AWS Glue DataBrew without using either the AWS Glue Data Catalog or AWS Lake Formation. However, if you use either the AWS Glue Data Catalog or AWS Lake Formation, DataBrew users can select the data sets available to them from their centralized data catalog. /glue/faqs/;Can I retain a record of all changes made to my data?;Yes. You can visually track all the changes made to your data in the AWS Glue DataBrew Management Console. The visual view makes it easy to trace the changes and relationships made to the datasets, projects and recipes, and all other associated jobs. In addition, Glue DataBrew keeps all account activities as logs in AWS CloudTrail. /glue/faqs/;What is Glue Flex?;AWS Glue Flex is a flexible execution job class that allows you to reduce the cost of your non-urgent data integration workloads (e.g., pre-production jobs, testing, data loads, etc.) by up to 35%. Glue has two job execution classes: standard and flexible. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources. The flexible execution class is appropriate for non-urgent jobs whose start and completion times may vary. AWS Glue Flex can reduce the cost of your non-time-sensitive workloads (e.g., nightly batch ETL jobs, weekend jobs, one-time bulk data ingestion jobs, etc.). /glue/faqs/;How do I get started with AWS Glue Flex flexible execution class jobs?;The flexible execution class is available for Glue Spark jobs. To use the flexible execution class, you simply change the default setting of the execution class parameter from “STANDARD” to “FLEX”. You can do this via AWS Glue Studio or the CLI. Visit the AWS Glue user documentation for more information. /glue/faqs/;What types of data integration and ETL workloads are not appropriate for AWS Glue Flex flexible execution class?;AWS Glue Flex flexible execution class is not appropriate for time-sensitive workloads that require consistent job start and run times, or for jobs that must complete execution by a specific time. AWS Glue Flex is also not recommended for long-running data integration workloads because they are more likely to get interrupted, resulting in frequent cancellations. /glue/faqs/;How often should I expect jobs running with AWS Glue Flex flexible execution class to be interrupted?;The availability and interruption frequency of AWS Glue Flex depends on several factors, including the Region and Availability Zone (AZ), time of day, and day of the week.
Resource availability determines whether Glue Flex jobs will start at all. While the interruption rate can be between 5-10% during peak hours, we expect the interruption rate of Glue Flex jobs to be about 5%, and the failure rate of Glue Flex jobs due to interruption to be under 5%. /glue/faqs/;Is the flexible execution class always available?;Yes, you can always choose the flexible execution class to run your Glue jobs. However, the ability of AWS Glue to execute these jobs is based on the availability of non-dedicated AWS capacity and the number of workers selected for your job. It is possible that, during peak times, Glue may not have adequate capacity for your job. In that case, your job will not start. You can specify a timeout value after which Glue will cancel the job. The longer the timeout value, the greater the chance that your job will be executed. /glue/faqs/;What happens if an AWS Glue Flex job is interrupted during execution?;If a Glue Flex job is interrupted because there are no longer sufficient workers to complete the job based on the number of workers specified, the job will fail. Glue will retry failed jobs up to the specified maximum number of retries on the job definition before cancelling the job. You should not use the flexible execution class for any job that has a downstream dependency on other systems or processes. /glue/faqs/;What types of AWS Glue jobs are supported by the flexible execution class?;The flexible execution class supports only Glue Spark jobs. Python shell and streaming jobs are not supported. AWS Glue Flex is supported by Glue version 3.0 and later. The flexible execution class does not currently support streaming workloads. /glue/faqs/;What is AWS Glue for Ray?;AWS Glue for Ray is an engine option that data engineers can use to process large datasets using Python and popular Python libraries. AWS Glue for Ray combines the AWS Glue serverless data integration service with Ray (ray.io), a popular new open-source framework that helps scale Python workloads. You pay only for the resources that you use while running code and don’t need to configure or tune any resources. /glue/faqs/;What is Ray?;Ray (ray.io) is an open-source distributed compute framework that scales Python applications from a laptop to a cluster consisting of hundreds of compute nodes. It provides simplified primitive types for building and running distributed applications. You can parallelize single-machine code with a few additional lines of code. You can also build complex applications using a straightforward programming model (Ray Core) and a collection of high-level libraries and tools. /glue/faqs/;How do I start using AWS Glue for Ray?;You can create and run Ray jobs by using the existing AWS Glue jobs, command line interfaces (CLIs), and APIs, and selecting the Ray engine through notebooks (Amazon SageMaker or a local notebook) or by using AWS Glue Studio. When a Ray job is ready, you can run it manually or on a schedule. /glue/faqs/;What infrastructure do I need to manage to support AWS Glue for Ray users?;AWS Glue for Ray is fully serverless, so there is no infrastructure to manage. However, administrators can manage how much infrastructure is provisioned for users by setting defaults and limits for the size of AWS Glue for Ray clusters on a per-account, per-user, and per-role basis. They can also set usage limits that will automatically initiate alerts and stop code from running when usage thresholds are exceeded. /glue/faqs/;When should I use AWS Glue vs. 
AWS Data Pipeline?;AWS Glue provides a managed ETL service that runs on a serverless Apache Spark environment. This allows you to focus on your ETL job and not worry about configuring and managing the underlying compute resources. AWS Glue takes a data-first approach and allows you to focus on the data properties and data manipulation to transform the data to a form where you can derive business insights. It provides an integrated data catalog that makes metadata available for ETL as well as querying via Amazon Athena and Amazon Redshift Spectrum. /glue/faqs/;When should I use AWS Glue vs AWS Database Migration Service?;AWS Database Migration Service (DMS) helps you migrate databases to AWS easily and securely. For use cases that require a database migration from on-premises to AWS or database replication between on-premises sources and sources on AWS, we recommend you use AWS DMS. Once your data is in AWS, you can use AWS Glue to move, combine, replicate, and transform data from your data source into another database or data warehouse, such as Amazon Redshift. /glue/faqs/;When should I use AWS Glue vs AWS Batch?;AWS Batch enables you to easily and efficiently run any batch computing job on AWS regardless of the nature of the job. AWS Batch creates and manages the compute resources in your AWS account, giving you full control and visibility into the resources being used. AWS Glue is a fully-managed ETL service that provides a serverless Apache Spark environment to run your ETL jobs. For your ETL use cases, we recommend you explore using AWS Glue. For other batch-oriented use cases, including some ETL use cases, AWS Batch might be a better fit. /glue/faqs/;How am I charged for AWS Glue?;You will pay a simple monthly fee, above the AWS Glue Data Catalog free tier, for storing and accessing the metadata in the AWS Glue Data Catalog. You will pay an hourly rate, billed per second, for the crawler run with a 10-minute minimum. If you choose to use a development endpoint to interactively develop your ETL code, you will pay an hourly rate, billed per second, for the time your development endpoint is provisioned, with a 10-minute minimum. Additionally, you will pay an hourly rate, billed per second, for the ETL job with either a 1-minute minimum or 10-minute minimum based on the Glue version you select. For more details, please refer to our pricing page. /glue/faqs/;When does billing for my AWS Glue jobs begin and end?;Billing commences as soon as the job is scheduled for execution and continues until the entire job completes. With AWS Glue, you only pay for the time for which your job runs and not for the environment provisioning or shutdown time. /glue/faqs/;How does AWS Glue keep my data secure?;We provide server-side encryption for data at rest and SSL for data in motion. /glue/faqs/;What are the service limits associated with AWS Glue?;Please refer to our documentation to learn more about service limits. /glue/faqs/;What regions is AWS Glue in?;Please refer to the AWS Region Table for details of AWS Glue service availability by region. /glue/faqs/;How many DPUs (Data Processing Units) are allocated to the development endpoint?;A development endpoint is provisioned with 5 DPUs by default. You can configure a development endpoint with a minimum of 2 DPUs and a maximum of 5 DPUs. /glue/faqs/;How do I scale the size and performance of my AWS Glue ETL jobs?;You can simply specify the number of DPUs (Data Processing Units) you want to allocate to your ETL job. 
A Glue ETL job requires a minimum of 2 DPUs. By default, AWS Glue allocates 10 DPUs to each ETL job. /glue/faqs/;How do I monitor the execution of my AWS Glue jobs?;AWS Glue provides the status of each job and pushes all notifications to Amazon CloudWatch. You can set up SNS notifications via CloudWatch actions to be informed of job failures or completions. /glue/faqs/;What does the AWS Glue SLA guarantee?;Our AWS Glue SLA guarantees a Monthly Uptime Percentage of at least 99.9% for AWS Glue. /glue/faqs/;How do I know if I qualify for an SLA Service Credit?;You are eligible for an SLA credit for AWS Glue under the AWS Glue SLA if more than one Availability Zone in which you are running a task within the same region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle. /lake-formation/faqs/;What is a data lake?;A data lake is a scalable central repository of large quantities and varieties of data, both structured and unstructured. Data lakes let you manage the full lifecycle of your data. The first step of building a data lake is ingesting and cataloging data from a variety of sources. The data is then enriched, combined, and cleaned before analysis. This makes it easy to discover and analyze the data with direct queries, visualization, and machine learning (ML). Data lakes complement traditional data warehouses, providing more flexibility, cost-effectiveness, and scalability for ingestion, storage, transformation, and analysis of your data. The traditional challenges around the construction and maintenance of data warehouses and limitations in the types of analysis can be overcome using data lakes. /lake-formation/faqs/;What is AWS Lake Formation?;Lake Formation is an integrated data lake service that makes it easy for you to ingest, clean, catalog, transform, and secure your data and make it available for analysis and ML. Lake Formation gives you a central console where you can discover data sources, set up transformation jobs to move data to an Amazon Simple Storage Service (S3) data lake, remove duplicates and match records, catalog data for access by analytic tools, configure data access and security policies, and audit and control access from AWS analytic and ML services. Lake Formation automatically manages access to the registered data in Amazon S3 through services including AWS Glue, Amazon Athena, Amazon Redshift, Amazon QuickSight, and Amazon EMR using Zeppelin notebooks with Apache Spark to ensure compliance with your defined policies. If you’ve set up transformation jobs spanning AWS services, Lake Formation configures the flows, centralizes their orchestration, and lets you monitor the jobs. With Lake Formation, you can configure and manage your data lake without manually integrating multiple underlying AWS services. /lake-formation/faqs/;Why should I use Lake Formation to build my data lake?;Lake Formation makes it easy to build, secure, and manage your AWS data lake. Lake Formation integrates with underlying AWS security, storage, analysis, and ML services and automatically configures them to comply with your centrally defined access policies. It also gives you a single console to monitor your jobs and data transformation and analytic workflows. /lake-formation/faqs/;Can I see a presentation on AWS Lake Formation?;"Yes. You can watch the full recording of the ""Intro to AWS Lake Formation"" session from re:Invent." 
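To make the capacity and execution-class settings above concrete, the following is a minimal sketch (not an official AWS example) of creating a Glue Spark job with an explicit worker count and the flexible execution class through the AWS SDK for Python. The job name, role ARN, and script location are placeholders invented for illustration.

```python
import boto3

# Sketch: define a Glue Spark job with explicit capacity and the FLEX
# execution class, then start a run. Names and ARNs below are hypothetical.
glue = boto3.client("glue")

glue.create_job(
    Name="nightly-etl",                                   # hypothetical job name
    Role="arn:aws:iam::123456789012:role/GlueJobRole",    # placeholder role ARN
    Command={
        "Name": "glueetl",                                # Spark ETL job type
        "ScriptLocation": "s3://example-bucket/scripts/nightly_etl.py",
    },
    GlueVersion="3.0",                # Flex requires Glue version 3.0 or later
    WorkerType="G.1X",                # each G.1X worker maps to 1 DPU
    NumberOfWorkers=10,               # comparable to the 10-DPU default noted above
    ExecutionClass="FLEX",            # switch from the default "STANDARD"
    Timeout=120,                      # minutes; Flex runs are cancelled after the timeout
    MaxRetries=1,                     # retries interrupted runs up to this count
)

# Individual runs can also set the execution class at start time.
run = glue.start_job_run(JobName="nightly-etl", ExecutionClass="FLEX")
print(run["JobRunId"])
```

A longer timeout and a non-zero retry count give a Flex run more opportunity to complete if capacity is temporarily unavailable, which mirrors the guidance in the answers above.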
/lake-formation/faqs/;What kind of problems does the FindMatches ML Transform solve?;FindMatches generally solves record linkage and data deduplication problems. Deduplication is necessary when you’re trying to identify records in a database that are conceptually the same but for which you have separate records. This problem is trivial if duplicate records can be identified by a unique key (for instance, if products can be uniquely identified by a UPC Code), but it becomes very challenging when you have to do a “fuzzy match.” /lake-formation/faqs/;How does Lake Formation relate to AWS Glue?;Lake Formation uses a shared infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, blueprints to create workflows for data ingest, the same data catalog, and a serverless architecture. Although AWS Glue focuses on these types of functions, Lake Formation encompasses all AWS Glue features and provides additional capabilities designed to help build, secure, and manage a data lake. See the AWS Glue features page for more details. /lake-formation/faqs/;Can I use third party business intelligence tools with Lake Formation?;Yes. You can use third-party business applications, such as Tableau and Looker, to connect to your AWS data sources through services such as Athena or Redshift. Access to data is managed by the underlying data catalog, so regardless of which application you use, you’re assured that access to your data is governed and controlled. /iam/faqs/;What is AWS Identity and Access Management (IAM)?;IAM provides fine-grained access control across all of AWS. With IAM, you can control access to services and resources under specific conditions. Use IAM policies to manage permissions for your workforce and systems to ensure least privilege. IAM is offered at no additional charge. For more information, see What is IAM? /iam/faqs/;What are least-privilege permissions?;When you set permissions with IAM policies, grant only the permissions required to perform a task. This practice is known as granting least privilege. You can apply least-privilege permissions in IAM by defining the actions that can be taken on specific resources under specific conditions. For more information, see Access management for AWS resources. /iam/faqs/;What are IAM roles and how do they work?; You should use IAM roles to grant access to your AWS accounts by relying on short-term credentials, a security best practice. Authorized identities, which can be AWS services or users from your identity provider, can assume roles to make AWS requests. To grant permissions to a role, attach an IAM policy to it. For more information, see Common scenarios for roles. /iam/faqs/;Why should I use IAM roles?; IAM users are identities with long-term credentials. You might be using IAM users for workforce users. In this case, AWS recommends using an identity provider and federating into AWS by assuming roles. You also can use roles to grant cross-account access to services and features such as AWS Lambda functions. In some scenarios, you might require IAM users with access keys that have long-term credentials with access to your AWS account. For these scenarios, AWS recommends using IAM access last used information to rotate credentials often and remove credentials that are not being used. For more information, see Overview of AWS identity management: Users. /iam/faqs/;What are IAM users and should I still be using them?; IAM policies define permissions for the entities you attach them to. 
For example, to grant access to an IAM role, attach a policy to the role. The permissions defined in the policy determine whether requests are allowed or denied. You also can attach policies to some resources, such as Amazon S3 buckets, to grant direct, cross-account access. And you can attach policies to an AWS organization or organizational unit to restrict access across multiple accounts. AWS evaluates these policies when an IAM role makes a request. For more information, see Identity-based policies. /iam/faqs/;How do I grant access to services and resources by using IAM?; To assign permissions to a role or resource, create a policy, which is a JavaScript Object Notation (JSON) document that defines permissions. This document includes permissions statements that grant or deny access to specific service actions, resources, and conditions. After you create a policy, you can attach it to one or more AWS roles to grant permissions to your AWS account. To grant direct, cross-account access to resources, such as Amazon S3 buckets, use resource-based policies. Create your policies in the IAM console or via AWS APIs or the AWS CLI. For more information, see Creating IAM policies. /iam/faqs/;How do I create IAM policies?; AWS managed policies are created and administered by AWS and cover common use cases. To get started, you can grant broader permissions by using the AWS managed policies that are available in your AWS account and common across all AWS accounts. Then, as you refine your requirements, you can reduce permissions by defining customer managed policies specific to your use cases with the goal of achieving least-privilege permissions. For more information, see AWS managed policies. /iam/faqs/;What are AWS managed policies and when should I use them?; To grant only the permissions required to perform tasks, you can create customer managed policies that are specific to your use cases and resources. Use customer managed policies to continue refining permissions for your specific requirements. For more information, see Customer managed policies. /iam/faqs/;What are customer managed policies and when should I use them?; Inline policies are embedded in and inherent to specific IAM roles. Use inline policies if you want to maintain a strict one-to-one relationship between a policy and the identity to which it is applied. For example, you can grant administrative permissions to ensure they are not attached to other roles. For more information, see Inline policies. /iam/faqs/;What are inline policies and when should I use them?; Resource-based policies are permissions policies that are attached to resources. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, VPC endpoints, and AWS Key Management Service encryption keys. For a list of services that support resource-based policies, see AWS services that work with IAM. Use resource-based policies to grant direct, cross-account access. With resource-based policies, you can define who has access to a resource and which actions they can perform with it. For more information, see Identity-based policies and resource-based policies. /iam/faqs/;What are resource-based policies and when should I use them?; RBAC provides a way for you to assign permissions based on a person’s job function, known outside of AWS as a role. IAM provides RBAC by defining IAM roles with permissions that align with job functions. You then can grant individuals access to assume these roles to perform specific job functions. 
With RBAC, you can audit access by looking at each IAM role and its attached permissions. For more information, see Comparing ABAC to the traditional RBAC model. /iam/faqs/;What is role-based access control (RBAC)?; As a best practice, grant access only to the specific service actions and resources required to perform each task. This is known as granting least privilege. When employees add new resources, you must update policies to allow access to those resources. /iam/faqs/;How do I grant access with RBAC?; ABAC is an authorization strategy that defines permissions based on attributes. In AWS, these attributes are called tags, and you can define them on AWS resources, IAM roles, and in role sessions. With ABAC, you define a set of permissions based on the value of a tag. You can grant fine-grained permissions to specific resources by requiring the tags on the role or session to match the tags on the resource. For example, you can author a policy that grants developers access to resources tagged with the job title “developers.” ABAC is helpful in environments that are growing rapidly by granting permissions to resources as they are created with specific tags. For more information, see Attribute-Based Access Control for AWS. /iam/faqs/;What is attribute-based access control (ABAC)?; To grant access by using ABAC, first define the tag keys and values you want to use for access control. Then, ensure your IAM role has the appropriate tag keys and values. If multiple identities use this role, you also can define session tag keys and values. Next, ensure that your resources have the appropriate tag keys and values. You also can require users to create resources with appropriate tags and restrict access to modify them. After your tags are in place, define a policy that grants access to specific actions and resource types, but only if the role or session tags match the resource tags. For a detailed tutorial that demonstrates how to use ABAC in AWS, see IAM tutorial: Define permissions to access AWS resources based on tags. /iam/faqs/;How do I restrict access by using IAM?;With AWS Identity and Access Management (IAM), all access is denied by default and requires a policy that grants access. As you manage permissions at scale, you might want to implement permissions guardrails and restrict access across your accounts. To restrict access, specify a Deny statement in any policy. If a Deny statement applies to an access request, it always prevails over an Allow statement. For example, if you allow access to all actions in AWS but deny access to IAM, any request to IAM is denied. You can include a Deny statement in any type of policy, including identity-based, resource-based, and service control policies with AWS Organizations. For more information, see Controlling access with AWS Identity and Access Management. /iam/faqs/;How do I work toward least-privilege permissions?; Achieving least privilege is a continuous cycle to grant the right fine-grained permissions as your requirements evolve. IAM Access Analyzer helps you streamline permissions management in each step of this cycle. Policy generation with IAM Access Analyzer generates a fine-grained policy based on the access activity captured in your logs. This means that after you build and run an application, you can generate policies that grant only the required permissions to operate the application. Policy validation with IAM Access Analyzer uses more than 100 policy checks to guide you to author and validate secure and functional policies. 
You can use these checks while creating new policies or to validate existing policies. Public and cross-account findings with IAM Access Analyzer help you verify and refine access allowed by your resource policies from outside your AWS organization or account. For more information, see Using IAM Access Analyzer. /iam/faqs/;What is IAM Access Analyzer?; You might have IAM users, roles, and permissions that you no longer require in your AWS account. We recommend that you remove them with the goal of achieving least-privilege access. For IAM users, you can review password and access key last used information. For roles, you can review role last used information. This information is available through the IAM console, APIs, and SDKs. Last used information helps you identify users and roles that are no longer in use and safe to remove. You also can refine permissions by reviewing service and last accessed information to identify unused permissions. For more information, see Refining permissions in AWS using last accessed information. /iam/faqs/;How do I remove unused permissions?; The IAM policy simulator evaluates policies you choose and determines the effective permissions for each of the actions you specify. Use the policy simulator to test and troubleshoot identity-based and resource-based policies, IAM permissions boundaries, and SCPs. For more information, see Testing IAM policies with the IAM policy simulator. /iam/identity-center/faqs/;What are the benefits of IAM Identity Center?; IAM Identity Center eliminates the administrative complexity of federating and managing permissions separately for each AWS account. It allows you to set up AWS applications from a single interface, and to assign access to your cloud applications from a single place. IAM Identity Center also helps improve access visibility by integrating with AWS CloudTrail and providing a central place for you to audit single sign-on access to AWS accounts and SAML-enabled cloud applications, such as Microsoft 365, Salesforce, and Box. /iam/identity-center/faqs/;What problems does IAM Identity Center solve?; IAM Identity Center is our recommended front door into AWS. It should be your primary tool to manage the AWS access of your workforce users. It allows you to manage your identities in your preferred identity source, connect them once for use in AWS, allows you to define fine-grained permissions and apply them consistently across accounts. As the number of your accounts scales, IAM Identity Center gives you the option to use it as a single place to manage user access to all your cloud applications. /iam/identity-center/faqs/;Why should I use IAM Identity Center?; You can use IAM Identity Center to quickly and easily assign your employees access to AWS accounts within AWS Organizations, business cloud applications (such as Salesforce, Microsoft 365, and Box), and custom applications that support Security Assertion Markup Language (SAML) 2.0. Employees can sign in with their existing corporate credentials or credentials they configure in IAM Identity Center to access their business applications from a single user portal. IAM Identity Center also allows you to audit users’ access to cloud services by using AWS CloudTrail. 
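As a concrete illustration of the policy-authoring and least-privilege workflow described above, here is a minimal sketch using the AWS SDK for Python: it creates a customer managed policy that allows only the S3 read actions needed on one bucket, attaches it to a role, and reads the role's last-used information. The bucket, policy, and role names are hypothetical placeholders.

```python
import json
import boto3

# Sketch: create a narrowly scoped customer managed policy and attach it to a
# role. All resource and entity names below are placeholders for illustration.
iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportsBucketOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

policy = iam.create_policy(
    PolicyName="ReportsReadOnly",                 # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)

# Attach the policy to the role that the application assumes.
iam.attach_role_policy(
    RoleName="ReportingAppRole",                  # hypothetical role name
    PolicyArn=policy["Policy"]["Arn"],
)

# Role last-used information (discussed above) helps confirm a role is still
# needed before keeping or refining its permissions.
print(iam.get_role(RoleName="ReportingAppRole")["Role"].get("RoleLastUsed", {}))
```

The same policy document could be validated with IAM Access Analyzer policy checks or tested with the policy simulator before it is attached, in line with the guidance above.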
/iam/identity-center/faqs/;What can I do with IAM Identity Center?; IAM Identity Center is for administrators who manage multiple AWS accounts and business applications, want to centralize user access management to these cloud services, and want to provide employees a single location to access these accounts and applications without them having to remember yet another password. /iam/identity-center/faqs/;How much does IAM Identity Center cost?; See the AWS Region Table for IAM Identity Center availability by Region. /iam/identity-center/faqs/;Can I connect more than one identity source to IAM Identity Center?;No. At any given time, you can have only one directory or one SAML 2.0 identity provider connected to IAM Identity Center. But, you can change the identity source that is connected to a different one. /iam/identity-center/faqs/;What SAML 2.0 IdPs can I use with IAM Identity Center?;You can connect IAM Identity Center to most SAML 2.0 IdPs, such as Okta Universal Directory or Azure Active Directory. See the IAM Identity Center User Guide to learn more. /iam/identity-center/faqs/;How can I provision identities from my existing IdPs into IAM Identity Center?;Identities from your existing IdP must be provisioned into IAM Identity Center before you can assign permissions. You can synchronize user and group information from Okta Universal Directory, Azure AD, OneLogin, and PingFederate automatically using the System for Cross-domain Identity Management (SCIM) standard. For other IdPs, you can provision users from your IdP using the IAM Identity Center console. See the IAM Identity Center User Guide to learn more. /iam/identity-center/faqs/;Can I automate identity synchronization into IAM Identity Center?;Yes. If you use Okta Universal Directory, Azure AD, OneLogin, or PingFederate, you can use SCIM to synchronize user and group information from your IdP to IAM Identity Center automatically. See the IAM Identity Center User Guide to learn more. /iam/identity-center/faqs/;How do I connect IAM Identity Center to my Microsoft Active Directory?;You can connect IAM Identity Center to your on-premises Active Directory (AD) or to an AWS Managed Microsoft AD directory using AWS Directory Service. See the IAM Identity Center User Guide to learn more. /iam/identity-center/faqs/;I manage my users and groups in Active Directory on-premises. How can I leverage these users and groups in IAM Identity Center?;You have two options for connecting Active Directory–hosted on-premises to IAM Identity Center: (1) use AD Connector, or (2) use an AWS Managed Microsoft AD trust relationship. AD Connector simply connects your existing on-premises Active Directory to AWS. AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory without caching any information in the cloud. To connect an on-premises directory using AD Connector, see the AWS Directory Service Administration Guide. AWS Managed Microsoft AD makes it easy to set up and run Microsoft Active Directory in AWS. It can be used to set up a forest trust relationship between your on-premises directory and AWS Managed Microsoft AD. To set up a trust relationship, see the AWS Directory Service Administration Guide. /iam/identity-center/faqs/;Can I use my Amazon Cognito User Pools as the identity source in IAM Identity Center?;"Amazon Cognito is a service that helps you manage identities for your customer facing applications; it is not a supported identity source in IAM Identity Center. 
You can create and manage your workforce identities in IAM Identity Center or in your external identity source including Microsoft Active Directory, Okta Universal Directory, Azure Active Directory (Azure AD), or another supported IdP." /iam/identity-center/faqs/;Does IAM Identity Center support the browser command line and mobile interfaces?;Yes, you can use IAM Identity Center to control access to the AWS Management Console and CLI v2. IAM Identity Center enables your users to access the CLI and AWS Management Console through a single sign-on experience. The AWS Mobile Console app also supports IAM Identity Center so you get a consistent sign-in experience across browser, mobile, and command line interfaces. /iam/identity-center/faqs/;Which cloud applications can I connect to IAM Identity Center?;You can connect the following applications to IAM Identity Center: /iam/identity-center/faqs/;Which AWS accounts can I connect to IAM Identity Center?; AWS CLI Credentials fetched through IAM Identity Center are valid for 60 minutes. You can get a fresh set of credentials as often as needed. /iam/identity-center/faqs/;How do I set up IAM Identity Center to business applications, such as Salesforce?; Yes. If your application supports SAML 2.0, you can configure your application as a custom SAML 2.0 application. From the IAM Identity Center console, navigate to the applications pane, choose Configure new application, and choose Custom SAML 2.0 application. Follow the instructions to configure the application. Your application is now configured and you may assign access to it. Choose the groups or users that you want to provide with access to the application, and choose Assign Access to complete the process. /iam/identity-center/faqs/;My company uses business applications that are not in IAM Identity Center's preintegrated application list. Can I still use IAM Identity Center?; No. IAM Identity Center supports single sign-on to business applications through web browsers only. /cloud-directory/faqs/;What is Amazon Cloud Directory?;Amazon Cloud Directory is a cloud-native, highly scalable, high-performance, multi-tenant directory service that provides web-based directories to make it easy for you to organize and manage all your application resources such as users, groups, locations, devices, and policies, and the rich relationships between them. Cloud Directory is a foundational building block for developers to create directory-based solutions easily and without having to worry about deployment, global scale, availability, and performance. /cloud-directory/faqs/;What are the important characteristics of Amazon Cloud Directory?;Important characteristics include: /cloud-directory/faqs/;What are core use cases for Cloud Directory?;Customers can use Cloud Directory to build applications such as IoT device registries, social networks, network configurations, and user directories. Each of these use cases typically needs to organize data hierarchically, perform high-volume and low-latency lookups, and scale to hundreds of millions of objects with global availability. /cloud-directory/faqs/;What kind of customers can use Cloud Directory?;Customers of all sizes can use Amazon Cloud Directory to build directory-based applications easily. /cloud-directory/faqs/;When should I use Amazon Neptune and Amazon Cloud Directory?;Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. 
The core of Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Neptune supports popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. /cloud-directory/faqs/;How is Cloud Directory different than traditional directories?;Amazon Cloud Directory is a foundational service for developers to build cloud-native directories for hundreds of millions of objects and relationships. It provides the necessary APIs for you to create a directory with a schema, add objects and relationships, and attach policies to those objects and relationships. /cloud-directory/faqs/;When should I use Cloud Directory versus AWS Directory Service for Microsoft Active Directory (Enterprise Edition) or Amazon Cognito User Pools?;AWS Directory Service for Microsoft Active Directory (Enterprise Edition), or AWS Microsoft AD, is designed to support Windows-based workloads that require Microsoft Active Directory. AWS Microsoft AD is intended for enterprise IT use cases and applications that depend on Microsoft Active Directory. /cloud-directory/faqs/;What are the key terms and concepts that I need to be aware of to use Amazon Cloud Directory?;To use Amazon Cloud Directory, you need to know the following key terms: /cloud-directory/faqs/;What is a directory?;A directory defines the scope for the data store (like a table in Amazon DynamoDB), completely isolating it from all other directories in the service. It also defines the transaction scope, query scope, and the like. A directory also represents the root object for a customer’s tree and can have multiple directory objects as its children. Customers must apply schemas at the directory level. /cloud-directory/faqs/;What is a schema?;A schema defines facets, attributes, and constraints allowed within a directory. This includes defining: /cloud-directory/faqs/;What is a facet?;A facet is a collection of attributes and constraints. A single or multiple facets when combined help define the objects in a directory. For example, Person and Device can be facets that define corporate employees with the associations of multiple devices. /cloud-directory/faqs/;What is an object?;An object represents a structured data entity in a directory. An object in a directory is intended to capture metadata about a physical or logical entity, usually for the purpose of information discovery and enforcing policies. For example, users, devices, and applications are all types of objects. An object’s structure and type information are expressed using a collection of facets. /cloud-directory/faqs/;What is an attribute?;An attribute is a user-defined unit of metadata associated with an object. For example, the user object can have an attribute called email-address. Attributes are always associated with an object. /cloud-directory/faqs/;What is a hierarchy?;A hierarchy is a view in which groups and objects are organized in parent-child relationships similar to a file system in which folders have files and subfolders beneath them. Amazon Cloud Directory supports organizing objects into multiple hierarchies. /cloud-directory/faqs/;What is a policy?;A policy is a specialized object type with attributes that define the type of policy and policy document. A policy can be attached to objects or the root of a hierarchy. 
By default, objects inherit policies from their parents. Amazon Cloud Directory does not interpret policies. /cloud-directory/faqs/;How do I provision a new directory in Amazon Cloud Directory?;You can provision a new directory in Amazon Cloud Directory with the following steps: /cloud-directory/faqs/;How do I create and manage schemas?;Amazon Cloud Directory provides an SDK and CLI to create, read, and update schemas. Cloud Directory also supports uploading a compliant JSON file to create a schema. You can also create and manage schemas using the Cloud Directory console. /cloud-directory/faqs/;Does Amazon Cloud Directory provide any sample schemas?;Yes, currently Amazon Cloud Directory provides the following sample schemas: /cloud-directory/faqs/;What are eventually consistent and strongly consistent read operations in Cloud Directory?;Amazon Cloud Directory is a distributed directory store. This means that data is distributed to multiple servers in different Availability Zones. /cognito/faqs/;What is Amazon Cognito?;In addition, Amazon Cognito enables you to synchronize data across a user’s devices so that their app experience remains consistent when they switch between devices or upgrade to a new device. Your app can save data locally on users’ devices allowing your applications to work even when the devices are offline and then automatically synchronize the data when the device is back online. /cognito/faqs/;Who should use Amazon Cognito?; You can easily get started by visiting the AWS Console. If you do not have an Amazon Web Services account, you can create an account when you sign in to the console. Once you have created a user pool for user management or an identity pool for federated identities or sync operations, you can download and integrate the AWS Mobile SDK with your app. Alternatively you can call the Cognito server-side APIs directly, instead of using the SDK. See our developer guide for more information. /cognito/faqs/;How do I start using Amazon Cognito?; Yes. Cognito exposes server-side APIs. You can create your own custom interface to Cognito by calling these APIs directly. The server-side APIs are described in the Developer Guide. /cognito/faqs/;Does Amazon Cognito expose server-side APIs?; Support for Cognito is included in the optional AWS Mobile SDK, which is available for iOS, Android, Unity, and Kindle Fire. Cognito is also available in the AWS SDK for JavaScript. Cognito Your User Pools is currently supported in the AWS Mobile SDKs for iOS and Android and in the JavaScript AWS SDK for Cognito. Visit our resource page to download the SDKs. /cognito/faqs/;Which platforms does Amazon Cognito support?; No. Cognito exposes its control and data APIs as web services. You can implement your own client library calling the server-side APIs directly. /cognito/faqs/;What is a User Pool?;A User Pool is your user directory that you can configure for your web and mobile apps. A User Pool securely stores your users’ profile attributes. You can create and manage a User Pool using the AWS console, AWS CLI, or AWS SDK. /cognito/faqs/;What user profile information is supported by Cognito Identity?;Developers can use either standard OpenID Connect-based user profile attributes (such as user name, phone number, address, time zone, etc.) or customize to add app-specific user attributes. 
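Since the answers above note that a user pool can be created with the AWS SDK and extended with app-specific attributes, the following is a minimal sketch (not an official example) of doing so with the AWS SDK for Python. The pool name and the custom attribute name are placeholders, and the password-policy values shown are only illustrative.

```python
import boto3

# Sketch: create a Cognito user pool with a password policy, email
# verification at sign-up, and one app-specific custom attribute.
# Pool and attribute names below are hypothetical.
cognito = boto3.client("cognito-idp")

pool = cognito.create_user_pool(
    PoolName="example-app-users",                 # hypothetical pool name
    Policies={
        "PasswordPolicy": {
            "MinimumLength": 12,
            "RequireUppercase": True,
            "RequireLowercase": True,
            "RequireNumbers": True,
            "RequireSymbols": False,
        }
    },
    AutoVerifiedAttributes=["email"],             # send a verification code during sign-up
    UsernameAttributes=["email"],                 # allow sign-in with an email address
    Schema=[
        {
            "Name": "department",                 # surfaced as "custom:department"
            "AttributeDataType": "String",
            "Mutable": True,
        }
    ],
)
print(pool["UserPool"]["Id"])
```

Password policy, verification, and sign-in aliasing options such as these are discussed in the questions that follow.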
/cognito/faqs/;Can I enable my application’s users to sign up or sign in with an email address or phone number?;Yes, you can use the aliasing feature to enable your users to sign up or sign in with an email address and a password or a phone number and a password. To learn more, visit our docs. /cognito/faqs/;Can I set up password policies?;Yes, you can set up password policies, such as strength of password and character type requirements, when setting up or configuring your user pool. /cognito/faqs/;Can I verify the email addresses and phone numbers of my application’s users?;Yes, with Cognito Identity you can require your users’ email addresses and phone numbers to be verified prior to providing them access to your application. During sign-up, a verification code will be sent to the user’s phone number or email address, and the user must input the verification code to complete sign-up and become confirmed. /cognito/faqs/;Does Cognito Identity support SMS-based multi-factor authentication (MFA)?;Yes, you can enable the end users of your application to sign in with SMS-based MFA. With SMS-based MFA enabled, your users will be prompted for their password (the first factor—what they know), and for a security code that can only be received on their mobile phone via SMS (the second factor—what they have). /cognito/faqs/;Is it possible to customize user sign-up and sign-in workflows?;Yes, you can customize sign-up and sign-in by adding app-specific logic to the user sign-up and sign-in flows using AWS Lambda. For example, you can create AWS Lambda functions to identify fraud or perform additional validations on user data. You are able to trigger developer-provided Lambda functions at pre-registration, at post-confirmation, at pre-authentication, during authentication to customize the challenges, and at post-authentication. You can also use Lambda functions to customize messages sent as part of email or phone number verification and multi-factor authentication. /cognito/faqs/;Can I remember the devices associated with my application's users in a Cognito user pool?;Yes, you can opt to remember devices used to access your application, and associate these remembered devices with your application's users in a Cognito user pool. You can also opt to use remembered devices to suppress second factor challenges for your users when you have set up multi-factor authentication. /cognito/faqs/;How can I migrate my existing users into an Amazon Cognito user pool?;You can use our import tool to migrate your existing users into an Amazon Cognito user pool. User attribute values are imported from a .csv file, which can be uploaded through the console, our APIs, or CLI. When imported users first sign in, they confirm their account and create a new password with a code sent to their email address or phone. There is no additional cost for using the import tool. To learn more, see the import tool documentation. /cognito/faqs/;Can I use Cognito Identity to federate identities and secure access to AWS resources?; You can use Amazon, Facebook, Twitter, Google, and any other OpenID Connect-compatible identity provider. /cognito/faqs/;Which public identity providers can I use with Amazon Cognito Identity?; Identity pools are the containers that Cognito Identity uses to keep your apps’ federated identities organized. An identity pool associates federated identities from social identity providers with a unique, user-specific identifier. Identity Pools do not store any user profiles. 
An identity pool can be associated with one or many apps. If you use two different identity pools for two apps, then the same end user will have a different unique identifier in each Identity Pool. /cognito/faqs/;What is an Identity Pool?; Your mobile app authenticates with an Identity Provider (IdP) using the provider’s SDK. Once the end user is authenticated with the IdP, the OAuth or OpenID Connect token or the SAML assertion returned from the IdP is passed by your app to Cognito Identity, which returns a new Cognito ID for the user and a set of temporary, limited-privilege AWS credentials. /cognito/faqs/;How does the login flow work with public identity providers?; Cognito Identity can integrate with your existing authentication system. With a simple API call you can retrieve a Cognito ID for your end users based on your own unique identifier for your users. Once you have retrieved the Cognito ID and OpenID Token that Cognito Identity provides, you can use the Cognito Identity client SDK to access AWS resources and synchronize user data. Cognito Identity is a fully managed identity provider to make it easier for you to implement user sign-up and sign-in for your mobile and web apps. /cognito/faqs/;Can I register and authenticate my own users?; Cognito Identity assigns your users a set of temporary, limited privilege credentials to access your AWS resources so you do not have to use your AWS account credentials. The permissions for each user are controlled through AWS IAM roles that you create. You can define rules to choose the IAM role for each user, or if you are using groups in a Cognito user pool, you can assign IAM roles based on groups. Cognito Identity also allows you to define a separate IAM role with limited permissions for guest users who are not authenticated. In addition, you can use the unique identifier that Cognito generates for your users to control access to specific resources. For example, you can create a policy for an S3 bucket that only allows each user access to their own folder within the bucket. /cognito/faqs/;When using public identity providers, does Amazon Cognito Identity store users’ credentials?;No, your app communicates directly with the supported public identity provider (Amazon, Facebook, Twitter, Google, or an OpenID Connect-compliant provider) to authenticate users. Cognito Identity does not receive or store user credentials. Cognito Identity uses the token from the identity provider to obtain a unique identifier for the user and then hashes it using a one-way hash so that the same user can be recognized again in the future without storing the actual user identifier. /cognito/faqs/;Does Cognito Identity receive or store confidential information about my users from the identity providers?; No. Cognito Identity supports login through Amazon, Facebook, Twitter, and Google, as well as providing support for unauthenticated users. With Cognito Identity you can support federated authentication, profile data sync store and AWS access token distribution without writing any backend code. /cognito/faqs/;Do I still need my own backend authentication systems with Cognito Identity?; Cognito Identity supports the creation and token vending process for unauthenticated users as well as authenticated users. This removes the friction of an additional login screen in your app, but still enables you to use temporary, limited privilege credentials to access AWS resources. 
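The login flow described above can be summarized with a minimal sketch (not an official example) using the AWS SDK for Python: after the app authenticates with a public identity provider, it exchanges the provider's token for a Cognito identity and temporary AWS credentials. The identity pool ID, provider key, and token below are placeholders.

```python
import boto3

# Sketch of the public-IdP login flow: trade an identity provider token for a
# Cognito identity and temporary, limited-privilege AWS credentials.
# Identity pool ID, provider key, and token are hypothetical placeholders.
identity = boto3.client("cognito-identity", region_name="us-east-1")

logins = {"accounts.google.com": "<ID token returned by the identity provider>"}

# GetId returns the unique Cognito identity for this user. Cache this value
# rather than calling GetId repeatedly (see the note on unauthenticated
# identities in the next question).
identity_id = identity.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",
    Logins=logins,
)["IdentityId"]

# Exchange the identity and provider token for temporary credentials scoped by
# the IAM role configured for authenticated users in the identity pool.
creds = identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins=logins,
)["Credentials"]

print(creds["AccessKeyId"], creds["Expiration"])
```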
/cognito/faqs/;What if I don’t want to force my users to log in?; Unauthenticated users are users who do not authenticate with any identity provider, but instead access your app as a guest. You can define a separate IAM role for these users to provide limited permissions to access your backend resources. /cognito/faqs/;What are unauthenticated users?; Yes. Cognito Identity supports separate identities on a single device, such as a family iPad. Each identity is treated separately, and you have complete control over how your app logs users in and out and how local and remote app data is stored. /cognito/faqs/;Does Cognito Identity support separate identities for different users on the same device?; You can programmatically create a data set associated with Cognito Identity and start saving data in the form of key/value pairs. The data is stored both locally on the device and in the Cognito sync store. Cognito can also sync this data across all of the end user’s devices. /cognito/faqs/;How do I store data associated with Cognito Identity?; The number of identities in the Cognito Identity console shows you how many identities were created via the Cognito Identity APIs. For Authenticated Identities (those logging in with a login provider such as Facebook or an OpenID Connect provider), each call to Cognito Identity’s GetId API will only ever create a single identity for each user. However, for Unauthenticated identities, each time the client in an app calls the GetId API, a new identity is generated. Therefore, if your app calls GetId for unauthenticated identities multiple times for a single user it will appear that a single user has multiple identities. So it is important that you cache the response from GetId when using unauthenticated identities and not call it multiple times per user. /cognito/faqs/;Does the number of identities in the Cognito Identity console tell me how many users are using my app?;The Mobile SDK provides the logic to cache the Cognito Identity automatically so you don't have to worry about this. If you're looking for a complete analytics solution for your app, including the ability to track unique users, please look at Amazon Mobile Analytics. /cognito/faqs/;What is the Amazon Cognito sync store?; No. The optional AWS Mobile SDK saves your data to an SQLite database on the local device; this way, the data is always accessible to your app. The data is pushed to the Amazon Cognito sync store by calling the synchronize() method and, if push synchronization is enabled, all other devices linked to an identity are notified of the data change in the sync store via Amazon SNS. /cognito/faqs/;Is data saved directly to the Amazon Cognito sync store?; Data associated with an Amazon Cognito identity is organized as key/value pairs. A key is a label, e.g., “MusicVolume”, and a value, e.g., “11”. Key/value pairs are grouped and categorized using data sets. Data sets are a logical partition of key/value pairs and the most granular entity used by Amazon Cognito to perform sync operations. /cognito/faqs/;How is data stored in the Amazon Cognito sync store?; Each user information store can have a maximum size of 20MB. Each data set within the user information store can contain up to 1MB of data. Within a data set you can have up to 1024 keys. /cognito/faqs/;What is the maximum size of a user information store within the Amazon Cognito sync store?; Both keys and values within a data set are alphanumeric strings. 
There is no limit to the length of the strings other than that the total amount of data in a data set cannot exceed 1MB. Binary data can be stored as a base64-encoded string as a value provided it does not exceed the 1MB limit. /cognito/faqs/;What kind of data can I store in a data set?; Limiting the data set size to 1MB increases the chances of a synchronization task completing successfully even when bandwidth is limited without lots of retries that consume battery life and data plans. /cognito/faqs/;Why are data sets limited to 1MB?; No, a user identity and information store is tied to a specific AWS account. If there are multiple apps from different publishers on a particular device that use Amazon Cognito, each app will use the information store created by each publisher. /cognito/faqs/;Are user identities and user information stores shared across developers?; With Cognito Streams, you can push sync store data to a Kinesis stream in your AWS account. You can then consume this stream and store the data in a way that makes it easy for you to analyze such as an Amazon Redshift database, an RDS instance you own, or even an S3 file. We have published a sample Kinesis consumer application to show how to store the updated data in Amazon Redshift. /cognito/faqs/;How can I analyze and query the data stored in the Cognito Sync store?; By streaming the data to Kinesis, you can receive all of the history of changes to your datasets in real-time. This means you receive all the changes an end user makes to a dataset and gives you the flexibility to store this data in a tool of your choice. /cognito/faqs/;Why should I use a Kinesis stream instead of a database export?; When you enable the Kinesis stream feature you will be able to start a bulk publish. This process asynchronously sends all of the data currently stored in your Cognito sync store to the Kinesis stream you selected. /cognito/faqs/;What if I already have data stored in Cognito?; Cognito pushes the data to a Kinesis stream you own. There is no difference in Cognito’s per-synchronization price if this feature is enabled. You will be charged Kinesis’ standard rates for your shards. /cognito/faqs/;What is the price of this feature?; Amazon Cognito Events allows developers to run an AWS Lambda function in response to important events in Cognito. The Sync Trigger event is an event that occurs when any dataset is synchronized. Developers can write an AWS Lambda function to intercept the synchronization event. The function can evaluate the changes to the underlying Dataset and manipulate the data before it is stored in the cloud and synchronized back to the user's other devices. Alternatively, the AWS Lambda function could fail the sync operation so that the data is not synchronized to the user's other devices. /cognito/faqs/;Can I validate data before it is saved?; You can programmatically trigger the sync of data sets between client devices and the Amazon Cognito sync store by using the synchronize() method in the AWS Mobile SDK. The synchronize() method reads the latest version of the data available in the Amazon Cognito sync store and compares it to the local, cached copy. After comparison, the synchronize() method writes the latest updates as necessary to the local data store and the Amazon Cognito sync store. By default Amazon Cognito maintains the last-written version of the data. You can override this behavior and resolve data conflicts programmatically. 
In addition, push synchronization allows you to use Amazon Cognito to send a silent push notification to all devices associated with an identity to notify them that new data is available. /cognito/faqs/;How is data synchronized with Amazon Cognito?; Amazon Cognito uses the Amazon Simple Notification Service (SNS) to send silent push notifications to devices. A silent push notification is a push message that is received by your application on a user's device that will not be seen by the user. /cognito/faqs/;What is a silent push notification?; To enable push synchronization, you need to declare a platform application using the Amazon SNS page in the AWS Management Console. Then, from the identity pool page in the Amazon Cognito page of the AWS Management Console, you can link the SNS platform application to your Cognito identity pool. Amazon Cognito automatically utilizes the SNS platform application to notify devices of changes. /cognito/faqs/;How do I use push synchronization?; By default, Amazon Cognito maintains the last-written version of the data. You can override this behavior by choosing to respond to a callback from the AWS Mobile SDK, which will contain both versions of the data. Your app can then decide which version of the data (the local one or the one in the Amazon Cognito sync store) to keep and save to the Amazon Cognito sync store. /cognito/faqs/;How much does Cognito Identity cost?;If you are using Cognito Identity to create a User Pool, you pay based on your monthly active users (MAUs) only. A user is counted as an MAU if, within a calendar month, there is an identity operation related to that user, such as sign-up, sign-in, token refresh, password change, or an update to a user account attribute. You are not charged for subsequent sessions or for inactive users within that calendar month. Separate charges apply for optional use of SMS messaging as described below. /cognito/faqs/;How much does Cognito Sync cost?;As part of the AWS Free Tier, eligible AWS customers receive 10 GB of cloud sync store and 1,000,000 sync operations per month for the first 12 months. Outside the Free Tier, Amazon Cognito costs $0.15 for each 10,000 sync operations and $0.15 per GB of sync store per month. /cognito/faqs/;What is a sync operation?; A user is considered active and counted as an MAU when there is an operation (e.g., sign-in, token refresh, sign-up, or password change) associated with the user during the billing month. Therefore, you are not charged for subsequent operations during the billing month or for inactive users. Typically, your total number of users as well as your number of operations will be significantly larger than your total number of MAUs. /cognito/faqs/;What are Monthly Active Users (MAUs)?; Use of SMS messaging to verify phone numbers, to send codes for forgotten or reset passwords, or for multi-factor authentication is charged separately. See the Worldwide SMS Pricing page for more information. /cognito/faqs/;What does it cost to use SMS messages with Cognito?; Yes. As part of the AWS Free Tier, Cognito offers 10GB of sync store and 1,000,000 sync operations in a month for up to the first 12 months of usage. Your user pool for Cognito Identity is free for the first 50,000 MAUs, and we offer volume-based tiers thereafter. The Federated Identities feature for authenticating users and generating unique identifiers is always free with Cognito Identity. /cognito/faqs/;Is Amazon Cognito part of the AWS Free Tier?; No. You decide when to call the synchronize() method. 
Every write or read from the device is to the local SQLite store. This way, you are in complete control of your costs. /cognito/faqs/;Does every write or read from the app count as a sync operation?;What does push synchronization cost? Cognito utilizes Amazon SNS to send silent push notifications. There is no additional charge for using Cognito for push synchronization, but normal Amazon SNS rates will apply for notifications sent to devices. /guardduty/faqs/;What is Amazon GuardDuty?;GuardDuty is an intelligent threat detection service that continuously monitors your AWS accounts, Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Kubernetes Service (EKS) clusters, Amazon Aurora login activity, and data stored in Amazon Simple Storage Service (S3) for malicious activity. If potential malicious activity, such as anomalous behavior, credential exfiltration, or command and control infrastructure (C2) communication, is detected, GuardDuty generates detailed security findings that can be used for security visibility and assisting in remediation. Additionally, the Amazon GuardDuty Malware Protection feature helps to detect malicious files on Amazon Elastic Block Store (EBS) volumes attached to EC2 instances and container workloads. /guardduty/faqs/;What are the key benefits of GuardDuty?;GuardDuty makes it easier to continuously monitor your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty operates completely independently from your resources, so there are no performance or availability impacts to your workloads. The service is fully managed with integrated threat intelligence, machine learning (ML) anomaly detection, and malware scanning. GuardDuty delivers detailed and actionable alerts that are designed to be integrated with existing event management and workflow systems. There are no upfront costs and you pay only for the events analyzed, with no additional software to deploy or threat intelligence feed subscriptions required. /guardduty/faqs/;How much does GuardDuty cost?;GuardDuty prices are based on the volume of analyzed service logs, virtual CPUs (vCPUs) or Aurora Serverless v2 instance Aurora capacity units (ACUs) for Amazon RDS event analysis, the number and size of Amazon EKS workloads being monitored at runtime, and the volume of data scanned for malware. Analyzed service logs are filtered for cost-optimization, and directly integrated with GuardDuty, which means you don’t have to activate or pay for them separately. /guardduty/faqs/;Is there a free trial of GuardDuty?;Yes, any account new to GuardDuty can try the service for 30 days at no cost. You have access to the full feature set and detections during the free trial. During the trial period, you can view the post-trial costs estimate on the GuardDuty console usage page. If you are a GuardDuty administrator, you will see the estimated costs for your member accounts. After 30 days, you can view actual costs of this feature in the AWS Billing console. /guardduty/faqs/;What are the differences between GuardDuty and Amazon Macie?;"GuardDuty provides broad security monitoring of your AWS accounts, workloads, and data to help identify threats, such as attacker reconnaissance; instance, account, bucket, or Amazon EKS cluster compromises; and malware. Macie is a fully managed sensitive data discovery service that uses ML and pattern matching to discover your sensitive data in S3." /guardduty/faqs/;Is GuardDuty a regional or global service?;GuardDuty is a regional service. 
Even when multiple accounts are enabled and multiple Regions are used, the GuardDuty security findings remain in the same Regions where the underlying data was generated. This ensures all data analyzed is regionally based and doesn’t cross AWS regional boundaries. However, you can choose to aggregate security findings produced by GuardDuty across Regions using Amazon EventBridge or pushing findings to your data store (like S3) and then aggregating findings as you see fit. You can also send GuardDuty findings to AWS Security Hub and use its cross-Region aggregation capability. /guardduty/faqs/;Which Regions does GuardDuty support?;GuardDuty regional availability is listed in the AWS Regional Services List. /guardduty/faqs/;Which partners work with GuardDuty?;Many technology partners have integrated and built on GuardDuty. There are also consulting, system integrator, and managed security service providers with expertise about GuardDuty. For details, see the Amazon GuardDuty Partners page. /guardduty/faqs/;Does GuardDuty help address payment card industry data security standard (PCI DSS) requirements?;Foregenix published a white paper providing a detailed assessment of GuardDuty effectiveness for assisting in meeting requirements, like PCI DSS requirement 11.4, which requires intrusion detection techniques at critical points in the network. /guardduty/faqs/;How do I enable GuardDuty?;You can set up and deploy GuardDuty with a few clicks in the AWS Management Console. Once enabled, GuardDuty immediately starts analyzing continuous streams of account and network activity in near real time and at scale. There are no additional security software, sensors, or network appliances to deploy or manage. Threat intelligence is pre-integrated into the service and is continuously updated and maintained. /guardduty/faqs/;Can I manage multiple accounts with GuardDuty?;Yes, GuardDuty has a multiple account management feature, allowing you to associate and manage multiple AWS accounts from a single administrator account. When used, all security findings are aggregated to the administrator account for review and remediation. Amazon EventBridge events are also aggregated to the GuardDuty administrator account when using this configuration. Additionally, GuardDuty is integrated with AWS Organizations, allowing you to delegate an administrator account for GuardDuty for your organization. This delegated administrator (DA) account is a centralized account that consolidates all findings and can configure all member accounts. /guardduty/faqs/;Which data sources does GuardDuty analyze?;GuardDuty analyzes CloudTrail management event logs, CloudTrail S3 data event logs, VPC Flow Logs, DNS query logs, and Amazon EKS audit logs. GuardDuty can also scan EBS volume data for possible malware when GuardDuty Malware Protection is enabled and identifies suspicious behavior indicative of malicious software in EC2 instance or container workloads. The service is optimized to consume large data volumes for near real-time processing of security detections. GuardDuty gives you access to built-in detection techniques developed and optimized for the cloud, which are maintained and continuously improved upon by GuardDuty engineering. GuardDuty can also monitor Amazon Aurora login events and runtime activity for Amazon EKS. /guardduty/faqs/;How quickly does GuardDuty start working?;Once enabled, GuardDuty starts analyzing for malicious or unauthorized activity. 
The timeframe to begin receiving findings depends on the activity level in your account. GuardDuty does not look at historical data, only activity that starts after it is enabled. If GuardDuty identifies any potential threats, you will receive a finding in the GuardDuty console. /guardduty/faqs/;Do I have to enable CloudTrail, VPC Flow Logs, DNS query logs, or Amazon EKS audit logs for GuardDuty to work?;No, GuardDuty pulls independent data streams directly from CloudTrail, VPC Flow Logs, DNS query logs, and Amazon EKS. You don’t have to manage S3 bucket policies or modify the way you collect and store logs. GuardDuty permissions are managed as service-linked roles. You can disable GuardDuty at any time, which will remove all GuardDuty permissions. This makes it easier for you to enable the service, as it avoids complex configuration. The service-linked roles also remove the chance that an AWS Identity and Access Management (IAM) permission misconfiguration or S3 bucket policy change will affect service operation. Lastly, the service-linked roles make GuardDuty extremely efficient at consuming high volumes of data in near real time with minimal to no impact on the performance and availability of your account or workloads. /guardduty/faqs/;Is there any performance or availability impact to enabling GuardDuty on my account?;When you enable GuardDuty for the first time, it operates completely independently of your AWS resources. If you configure GuardDuty EKS Runtime Monitoring to automatically deploy the GuardDuty security agent, this could result in additional resource utilization, and will also create VPC endpoints in VPCs used to run Amazon EKS clusters. /guardduty/faqs/;Does GuardDuty manage or keep my logs?;No, GuardDuty does not manage or retain your logs. All data that GuardDuty consumes is analyzed in near real time and discarded thereafter. This allows GuardDuty to be highly efficient and cost effective, and to reduce the risk of data remanence. For log delivery and retention, you should use AWS logging and monitoring services directly, which provide full-featured delivery and retention options. /guardduty/faqs/;How can I prevent GuardDuty from looking at my logs and data sources?;You can prevent GuardDuty from analyzing your data sources at any time in the general settings by choosing to suspend the service. This will immediately stop the service from analyzing data, but it will not delete your existing findings or configurations. You can also choose to disable the service in the general settings. This will delete all remaining data, including your existing findings and configurations, before relinquishing the service permissions and resetting the service. You can also selectively disable capabilities like GuardDuty S3 Protection or GuardDuty EKS Protection through the Management Console or via the AWS CLI. /guardduty/faqs/;What can GuardDuty detect?;GuardDuty gives you access to built-in detection techniques developed and optimized for the cloud. The detection algorithms are maintained and continually improved upon by GuardDuty engineers. The primary detection categories include the following: /guardduty/faqs/;What is GuardDuty threat intelligence?;GuardDuty threat intelligence is made up of IP addresses and domains known to be used by attackers. GuardDuty threat intelligence is provided by AWS and third-party providers, such as Proofpoint and CrowdStrike. These threat intelligence feeds are pre-integrated and continuously updated in GuardDuty at no additional cost. 
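To make the enable/suspend/selective-disable workflow above concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the Region is a placeholder and the DataSources shape follows the public GuardDuty API, so treat it as an illustration rather than the only way to configure the service.

import boto3

# Minimal sketch: enable GuardDuty in one Region, then selectively turn
# S3 Protection off for that detector while leaving the rest enabled.
guardduty = boto3.client("guardduty", region_name="us-east-1")

# Create (enable) a detector for this account and Region.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Selectively disable S3 Protection only.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"S3Logs": {"Enable": False}},
)

# Suspending analysis entirely (findings and configuration are kept) is a
# separate call: guardduty.update_detector(DetectorId=detector_id, Enable=False)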
/guardduty/faqs/;Can I supply my own threat intelligence?;Yes, GuardDuty allows you to upload your own threat intelligence or trusted IP address list. When this feature is used, these lists are only applied to your account and not shared with other customers. /guardduty/faqs/;How are security findings delivered?;When a potential threat is detected, GuardDuty delivers a detailed security finding to the GuardDuty console and EventBridge. This makes alerts more actionable and more easily integrated into existing event management or workflow systems. The findings include the category, resource affected, and metadata associated with the resource, such as a severity level. /guardduty/faqs/;What is the format of GuardDuty findings?;GuardDuty findings come in a common JavaScript Object Notation (JSON) format, which is also used by Macie and Amazon Inspector. This makes it easier for customers and partners to consume security findings from all three services and incorporate them into broader event management, workflow, or security solutions. /guardduty/faqs/;How long are security findings made available in GuardDuty?;Security findings are retained and made available through the GuardDuty console and APIs for 90 days. After 90 days, the findings are discarded. To retain findings for longer than 90 days, you can enable EventBridge to automatically push findings to an S3 bucket in your account or another data store for long-term retention. /guardduty/faqs/;Can I aggregate GuardDuty findings?;Yes, you can choose to aggregate security findings produced by GuardDuty across Regions using EventBridge or by pushing findings to your data store (like S3) and then aggregating findings as you see fit. You can also send GuardDuty findings to Security Hub and use its cross-Region aggregation capability. /guardduty/faqs/;Can I take automated preventative actions using GuardDuty?;With GuardDuty, EventBridge, and AWS Lambda, you have the flexibility to set up automated remediation actions based on a security finding. For example, you can create a Lambda function to modify your AWS security group rules based on security findings. If you receive a GuardDuty finding indicating one of your EC2 instances is being probed by a known malicious IP, you can address it through an EventBridge rule, initiating a Lambda function to automatically modify your security group rules and restrict access on that port. /guardduty/faqs/;How are GuardDuty detections developed and managed?;GuardDuty has a team focused on detection engineering, management, and iteration. This produces a steady cadence of new detections in the service, as well as continual iteration on existing detections. Several feedback mechanisms are built into the service, such as the thumbs-up and thumbs-down in each security finding found in the GuardDuty user interface (UI). This allows you to provide feedback that might be incorporated into future iterations of GuardDuty detections. /guardduty/faqs/;Can I write custom detections in Amazon GuardDuty?;No, GuardDuty removes the heavy lifting and complexity of developing and maintaining your own custom rule sets. New detections are continually added based on customer feedback, along with research from AWS security engineers and the GuardDuty engineering team. However, customer-configured customizations include adding your own threat lists and trusted IP address list. 
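The automated-remediation pattern above (GuardDuty finding, routed by an EventBridge rule, handled by a Lambda function) can be wired up programmatically; a minimal boto3 sketch follows, in which the rule name and Lambda function ARN are hypothetical and the Lambda invoke permission setup is omitted for brevity.

import boto3, json

events = boto3.client("events", region_name="us-east-1")

# Match every GuardDuty finding event published to EventBridge.
events.put_rule(
    Name="guardduty-findings-to-lambda",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

# Send matching events to a hypothetical remediation Lambda that could,
# for example, tighten security group rules for the affected instance.
events.put_targets(
    Rule="guardduty-findings-to-lambda",
    Targets=[{
        "Id": "remediation-fn",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:gd-remediate",
    }],
)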
/guardduty/faqs/;How can I get started with S3 Protection if I am currently using GuardDuty?;For current GuardDuty accounts, S3 Protection can be activated in the console on the S3 Protection page, or through the API. This will start a 30-day no-cost trial of the GuardDuty S3 Protection feature. /guardduty/faqs/;Is there a free trial of GuardDuty S3 Protection?;Yes, there is a 30-day free trial. Each account in each Region gets a 30-day no-cost trial of GuardDuty that includes the S3 Protection feature. Accounts that already have GuardDuty enabled will also get a 30-day free trial of the S3 Protection feature when they first activate it. /guardduty/faqs/;If I am a new user to GuardDuty, is S3 Protection enabled by default for my accounts?;Yes. Any new accounts that enable GuardDuty through the console or API will also have S3 Protection turned on by default. New GuardDuty accounts created using the AWS Organizations auto-enable feature will not have S3 Protection turned on by default unless the Auto-enable for S3 option is turned on. /guardduty/faqs/;Can I use GuardDuty S3 Protection without enabling the full GuardDuty service (including the analysis of VPC Flow Logs, DNS query logs, and CloudTrail management events)?;No, the GuardDuty service must be enabled in order to use S3 Protection. Current GuardDuty accounts have the option to enable S3 Protection, and new GuardDuty accounts will have the feature by default once the GuardDuty service is enabled. /guardduty/faqs/;Does GuardDuty monitor all buckets in my account to help protect my S3 deployment?;Yes, S3 Protection monitors all S3 buckets in your environment by default. /guardduty/faqs/;Do I need to turn on CloudTrail S3 data event logging for S3 Protection?;No, GuardDuty has direct access to CloudTrail S3 data event logs. You are not required to enable S3 data event logging in CloudTrail, and therefore will not incur the associated costs. Note that GuardDuty does not store the logs and only uses them for its analysis. /guardduty/faqs/;How does GuardDuty EKS Protection work?;GuardDuty EKS Protection is a GuardDuty feature that monitors Amazon EKS cluster control plane activity by analyzing Amazon EKS audit logs. GuardDuty is integrated with Amazon EKS, giving it direct access to Amazon EKS audit logs without requiring you to turn on or store these logs. These audit logs are security-relevant chronological records documenting the sequence of actions performed on the Amazon EKS control plane. These Amazon EKS audit logs give GuardDuty the visibility needed to conduct continuous monitoring of Amazon EKS API activity and apply proven threat intelligence and anomaly detection to identify malicious activity or configuration changes that might expose your Amazon EKS cluster to unauthorized access. When threats are identified, GuardDuty generates security findings that include the threat type, a severity level, and container-level detail (such as pod ID, container image ID, and associated tags). /guardduty/faqs/;What types of threats can GuardDuty EKS Protection detect on my Amazon EKS workloads?;GuardDuty EKS Protection can detect threats related to user and application activity captured in Amazon EKS audit logs. Amazon EKS threat detections include Amazon EKS clusters that are accessed by known malicious actors or from Tor nodes, API operations performed by anonymous users that might indicate a misconfiguration, and misconfigurations that can result in unauthorized access to Amazon EKS clusters. 
Also, using ML models, GuardDuty can identify patterns consistent with privilege-escalation techniques, such as a suspicious launch of a container with root-level access to the underlying EC2 host. See Amazon GuardDuty Finding types for a complete list of all new detections. /guardduty/faqs/;Do I need to turn on Amazon EKS audit logs?;"No, GuardDuty has direct access to Amazon EKS audit logs. Note that GuardDuty only uses these logs for analysis; it doesn’t store them, nor do you need to enable or pay for these Amazon EKS audit logs to be shared with GuardDuty. To optimize for costs, GuardDuty applies intelligent filters to only consume a subset of the audit logs that are relevant for security threat detection." /guardduty/faqs/;Is there a free trial of GuardDuty EKS Protection?;Yes, there is a 30-day free trial. Each new GuardDuty account in each Region receives a 30-day free trial of GuardDuty, including the GuardDuty EKS Protection feature. Existing GuardDuty accounts receive a 30-day trial of GuardDuty EKS Protection at no additional charge. During the trial period, you can view the post-trial costs estimate on the GuardDuty console usage page. If you are a GuardDuty administrator, you will see the estimated costs for your member accounts. After 30 days, you can view actual costs of this feature in the AWS Billing console. /guardduty/faqs/;How can I get started with GuardDuty EKS Protection if I am currently using GuardDuty?;GuardDuty EKS Protection must be turned on for each individual account. You can activate the feature for your accounts with a single action in the GuardDuty console from the GuardDuty EKS Protection console page. If you are operating in a GuardDuty multi-account configuration, you can activate GuardDuty EKS Protection across your entire organization from the GuardDuty administrator account GuardDuty EKS Protection page. This will activate continuous monitoring for Amazon EKS in all individual member accounts. For GuardDuty accounts created using the AWS Organizations auto-activate feature, you must explicitly turn on Auto-activate for Amazon EKS. Once activated for an account, all existing and future Amazon EKS clusters in the account will be monitored for threats without any configuration on your Amazon EKS clusters. /guardduty/faqs/;Is GuardDuty EKS Protection enabled by default for my accounts if I am a new GuardDuty user?;Yes, any new account that turns on GuardDuty through the console or API will also have GuardDuty EKS Protection turned on by default. New GuardDuty accounts created using the AWS Organizations Auto-activate feature will not have GuardDuty EKS Protection turned on by default unless the Auto-activate for Amazon EKS option is turned on. /guardduty/faqs/;How do I disable GuardDuty EKS Protection?;You can disable the feature in the console or by using the API. In the GuardDuty console, you can disable GuardDuty EKS Protection for your accounts on the GuardDuty EKS Protection console page. If you have a GuardDuty administrator account, you can also disable this feature for your member accounts. /guardduty/faqs/;If I disable GuardDuty EKS Protection, how do I enable it again?;If you previously disabled GuardDuty EKS Protection, you can re-enable the feature in the console or by using the API. In the GuardDuty console, you can enable GuardDuty EKS Protection for your accounts on the GuardDuty EKS Protection console page. 
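A minimal boto3 sketch of turning on EKS Protection from a delegated administrator account and auto-enabling it for member accounts, as described above; the Region is a placeholder and the organization-configuration shape follows the public GuardDuty API, so treat it as an illustration.

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Run from the delegated administrator account; look up the detector first.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Turn on EKS Protection (Kubernetes audit log monitoring) for this account.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"Kubernetes": {"AuditLogs": {"Enable": True}}},
)

# Auto-enable it for all existing and future member accounts in the organization.
guardduty.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnable=True,
    DataSources={"Kubernetes": {"AuditLogs": {"AutoEnable": True}}},
)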
/guardduty/faqs/;Do I have to enable GuardDuty EKS Protection on each AWS account and Amazon EKS cluster individually?;GuardDuty EKS Protection must be enabled for each individual account. If you are operating in a GuardDuty multi-account configuration, you can enable threat detection for Amazon EKS across your entire organization with a single click on the GuardDuty administrator account GuardDuty EKS Protection console page. This will enable threat detection for Amazon EKS in all individual member accounts. Once enabled for an account, all existing and future Amazon EKS clusters in the account will be monitored for threats, and no manual configuration is required on your Amazon EKS clusters. /guardduty/faqs/;Will I be charged if I don’t use Amazon EKS and I enable GuardDuty EKS Protection in GuardDuty?;You will not incur any GuardDuty EKS Protection charges if you aren’t using Amazon EKS and you have GuardDuty EKS Protection enabled. However, when you start using Amazon EKS, GuardDuty will automatically monitor your clusters and generate findings for identified issues, and you will be charged for this monitoring. /guardduty/faqs/;Can I enable GuardDuty EKS Protection without enabling the full GuardDuty service (including the analysis of VPC Flow Logs, DNS query logs, and CloudTrail management events)?;No, the GuardDuty service must be enabled for GuardDuty EKS Protection to be available. /guardduty/faqs/;Does GuardDuty EKS Protection monitor Amazon EKS audit logs for Amazon EKS deployments on AWS Fargate?;Yes, GuardDuty EKS Protection monitors Amazon EKS audit logs from both Amazon EKS clusters deployed on EC2 instances and Amazon EKS clusters deployed on Fargate. /guardduty/faqs/;Does GuardDuty monitor non-managed Amazon EKS on EC2 or Amazon EKS Anywhere?;Currently, this capability only supports Amazon EKS deployments running on EC2 instances in your account or on Fargate. /guardduty/faqs/;Will using GuardDuty EKS Protection impact the performance or cost of running containers on Amazon EKS?;No, GuardDuty EKS Protection is designed to not have any performance, availability, or cost implications to Amazon EKS workload deployments. /guardduty/faqs/;Do I have to enable GuardDuty EKS Protection in each AWS Region individually?;Yes, GuardDuty is a regional service, and thus GuardDuty EKS Protection must be enabled in each AWS Region separately. /guardduty/faqs/;How does GuardDuty EKS Runtime Monitoring work?;GuardDuty EKS Runtime Monitoring uses a fully managed Amazon EKS add-on that adds visibility into the runtime activity of individual Kubernetes containers running on Amazon EKS, such as file access, process execution, and network connections. The add-on can be activated automatically, directly from GuardDuty, for all existing and new Amazon EKS clusters in an account, or manually from Amazon EKS for an individual cluster. The add-on automatically deploys a GuardDuty security agent as a DaemonSet that collects runtime events from all pods running on the node and delivers them to GuardDuty for security analytics processing. This allows GuardDuty to identify specific containers within your Amazon EKS clusters that are potentially compromised, and detect attempts to escalate privileges from an individual container to the underlying Amazon EC2 host and the broader AWS environment. When GuardDuty detects a potential threat, a security finding is generated that includes metadata context such as container, Kubernetes pod, and process details. 
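For the runtime agent described above, a hedged boto3 sketch follows; the add-on name "aws-guardduty-agent", the cluster name, and the Features/AdditionalConfiguration shape are assumptions drawn from public documentation, not guaranteed to match every account configuration.

import boto3

# Option 1 (manual, per cluster): install the managed EKS add-on that runs
# the GuardDuty security agent as a DaemonSet. "my-cluster" is a placeholder.
eks = boto3.client("eks", region_name="us-east-1")
eks.create_addon(clusterName="my-cluster", addonName="aws-guardduty-agent")

# Option 2 (automatic, account-wide): let GuardDuty manage agent deployment.
guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]
guardduty.update_detector(
    DetectorId=detector_id,
    Features=[{
        "Name": "EKS_RUNTIME_MONITORING",
        "Status": "ENABLED",
        "AdditionalConfiguration": [
            {"Name": "EKS_ADDON_MANAGEMENT", "Status": "ENABLED"},
        ],
    }],
)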
/guardduty/faqs/;How can I get started with EKS Runtime Monitoring if I am currently using GuardDuty?;For current GuardDuty accounts, the feature can be activated from the GuardDuty console on the EKS Runtime Monitoring page, or through the API. Learn more about GuardDuty EKS Runtime Monitoring. /guardduty/faqs/;If I am a new user to GuardDuty, is EKS Runtime Monitoring turned on by default for my accounts?;No. GuardDuty EKS Runtime Monitoring is the only protection plan that is not enabled by default when you turn on GuardDuty for the first time. The feature can be activated from the GuardDuty console on the EKS Runtime Monitoring page, or through the API. New GuardDuty accounts created using the AWS Organizations auto-enable feature will not have EKS Runtime Monitoring turned on by default unless the auto-enable for Amazon EKS option is turned on. /guardduty/faqs/;Can I use GuardDuty EKS Runtime Monitoring without activating the full GuardDuty service?;No, the GuardDuty service must be enabled in order to use GuardDuty EKS Runtime Monitoring. /guardduty/faqs/;Is GuardDuty EKS Runtime Monitoring available in all Regions where GuardDuty is currently available?;For a full list of Regions where EKS Runtime Monitoring is available, visit Region-specific feature availability. /guardduty/faqs/;Do I have to activate GuardDuty EKS Runtime Monitoring on each AWS account and Amazon EKS cluster individually?;GuardDuty EKS Runtime Monitoring must be enabled for each individual account. If you are operating in a GuardDuty multi-account configuration, you can turn on threat detection for Amazon EKS across your entire organization with a single click on the GuardDuty administrator account GuardDuty EKS Runtime Monitoring console page. This will activate runtime monitoring for Amazon EKS in all individual member accounts. Once activated for an account, all existing and future Amazon EKS clusters in the account will be monitored for runtime threats, and no manual configuration is required on your Amazon EKS clusters. /guardduty/faqs/;Will I be charged if I don’t use Amazon EKS and I turn on GuardDuty EKS Runtime Monitoring in GuardDuty?;You will not incur any GuardDuty EKS Runtime Monitoring charges if you aren’t using Amazon EKS and you have GuardDuty EKS Runtime Monitoring turned on. However, when you start using Amazon EKS, GuardDuty will automatically monitor your clusters and generate findings for identified issues, and you will be charged for this monitoring. /guardduty/faqs/;How does Amazon GuardDuty Malware Protection work?;GuardDuty begins a malware detection scan when it identifies suspicious behavior indicative of malicious software in EC2 instance or container workloads. It scans a replica EBS volume that GuardDuty generates based on the snapshot of your EBS volume for trojans, worms, crypto miners, rootkits, bots, and more. GuardDuty Malware Protection generates contextualized findings that can help validate the source of the suspicious behavior. These findings can also be routed to the proper administrators and can initiate automated remediation. /guardduty/faqs/;Which GuardDuty EC2 finding types will initiate a malware scan?;GuardDuty EC2 findings that will initiate a malware scan are listed here. /guardduty/faqs/;Which resources and file types can GuardDuty Malware Protection scan?;Malware Protection supports detection of malicious files by scanning EBS volumes attached to EC2 instances. It can scan any file present on the volume, and the supported file system types can be found here. 
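A minimal boto3 sketch of enabling the agentless EBS malware scanning described above and then reviewing recent scan results; the Region is a placeholder and the DataSources shape follows the public GuardDuty API, so treat it as an illustration.

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Turn on malware scanning of EBS volumes for EC2/container findings.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={
        "MalwareProtection": {
            "ScanEc2InstanceWithFindings": {"EbsVolumes": True}
        }
    },
)

# Review the most recent malware scans and their results programmatically.
for scan in guardduty.describe_malware_scans(DetectorId=detector_id)["Scans"]:
    print(scan["ScanId"], scan["ScanStatus"], scan.get("ScanResultDetails"))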
/guardduty/faqs/;Which types of threats can GuardDuty Malware Protection detect?;Malware Protection scans for threats such as trojans, worms, crypto miners, rootkits, and bots that might be used to compromise workloads, repurpose resources for malicious use, and gain unauthorized access to data. /guardduty/faqs/;Do I need to turn on logging for GuardDuty Malware Protection to work?;Service logging does not need to be enabled for GuardDuty or the Malware Protection feature to work. The Malware Protection feature is part of GuardDuty, which is an AWS service that uses intelligence from integrated internal and external sources. /guardduty/faqs/;How does GuardDuty Malware Protection accomplish scanning without agents?;Instead of using security agents, GuardDuty Malware Protection will create and scan a replica based on the snapshot of EBS volumes attached to the potentially infected EC2 instance or container workload in your account. The permissions you granted to GuardDuty via a service-linked role allow the service to create an encrypted volume replica in GuardDuty’s service account from that snapshot that remains in your account. GuardDuty Malware Protection will then scan the volume replica for malware. /guardduty/faqs/;Is there a free trial of GuardDuty Malware Protection?;Yes, each new GuardDuty account in each Region receives a 30-day free trial of GuardDuty, including the Malware Protection feature. Existing GuardDuty accounts receive a 30-day trial of Malware Protection at no additional charge the first time it is enabled in an account. During the trial period, you can view the post-trial costs estimate on the GuardDuty console usage page. If you are a GuardDuty administrator, you will see the estimated costs for your member accounts. After 30 days, you can view actual costs of this feature in the AWS Billing console. /guardduty/faqs/;If I am currently using GuardDuty, how can I get started with GuardDuty Malware Protection?;You can enable Malware Protection in the GuardDuty console by going to the Malware Protection page or using the API. If you are operating in a GuardDuty multi-account configuration, you can enable the feature across your entire organization in the GuardDuty administrator account’s Malware Protection console page. This will enable monitoring for malware in all individual member accounts. For GuardDuty accounts created using the AWS Organizations auto-enable feature, you need to explicitly enable the auto-enable for Malware Protection option. /guardduty/faqs/;If I am a new user to GuardDuty, is Malware Protection enabled by default for my accounts?;Yes, any new account that enables GuardDuty using the console or API will also have GuardDuty Malware Protection enabled by default. For new GuardDuty accounts created using the AWS Organizations auto-enable feature, you need to explicitly enable the auto-enable for Malware Protection option. /guardduty/faqs/;How do I disable GuardDuty Malware Protection?;You can disable the feature in the console or using the API. You will see an option to disable Malware Protection for your accounts in the GuardDuty console, on the Malware Protection console page. If you have a GuardDuty administrator account, you can also disable Malware Protection for your member accounts. /guardduty/faqs/;If I disable GuardDuty Malware Protection, how do I enable it again?;If Malware Protection was disabled, you can enable the feature in the console or using the API. 
You can enable Malware Protection for your accounts in the GuardDuty console, on the Malware Protection console page. /guardduty/faqs/;If no GuardDuty malware scans are performed during a billing period, will there be any charges?;No, there will be no charges for Malware Protection if there are no scans for malware during a billing period. You can view costs of this feature in the AWS Billing console. /guardduty/faqs/;Does GuardDuty Malware Protection support multi-account management?;Yes, GuardDuty has a multiple account management feature, allowing you to associate and manage multiple AWS accounts from a single administrator account. GuardDuty has multi-account management through AWS Organizations integration. This integration helps security and compliance teams ensure full coverage of GuardDuty, including Malware Protection, across all accounts in an organization. /guardduty/faqs/;Do I need to make any configuration changes, deploy any software, or modify my AWS deployments?;No. Once the feature is enabled, GuardDuty Malware Protection will initiate a malware scan in response to relevant EC2 findings. You don’t have to deploy any agents, there are no log sources to enable, and there are no other configuration changes to make. /guardduty/faqs/;Will using GuardDuty Malware Protection impact the performance of running my workloads?;GuardDuty Malware Protection is designed to not affect the performance of your workloads. For example, EBS volume snapshots created for malware analysis can only be generated once in a 24-hour period, and GuardDuty Malware Protection retains the encrypted replicas and snapshots for a few minutes after it completes a scan. Further, GuardDuty Malware Protection uses GuardDuty compute resources for malware scanning instead of customer compute resources. /guardduty/faqs/;Do I have to enable GuardDuty Malware Protection in each AWS Region individually?;Yes, GuardDuty is a regional service, and Malware Protection has to be enabled in each AWS Region separately. /guardduty/faqs/;How does GuardDuty Malware Protection use encryption?;GuardDuty Malware Protection scans a replica based on the snapshot of EBS volumes attached to the potentially infected EC2 instance or container workload in your account. If your EBS volumes are encrypted with a customer managed key, you have the option to share your AWS Key Management Service (KMS) key with GuardDuty and the service uses the same key to encrypt the replica EBS volume. For unencrypted EBS volumes, GuardDuty uses its own key to encrypt the replica EBS volume. /guardduty/faqs/;Will the EBS volume replica be analyzed in the same Region as the original volume?;Yes, all replica EBS volume data (and the snapshot the replica volume is based on) stays in the same Region as the original EBS volume. /guardduty/faqs/;How can I estimate and control spend on GuardDuty Malware Protection?;Each new GuardDuty account, in each Region, receives a 30-day free trial of GuardDuty, including the Malware Protection feature. Existing GuardDuty accounts receive a 30-day trial of Malware Protection at no additional charge the first time it is enabled in an account. During the trial period, you can view the post-trial costs estimate on the GuardDuty console usage page. If you are a GuardDuty administrator, you will see the estimated costs for your member accounts. After 30 days, you can view actual costs of this feature in the AWS Billing console. 
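For estimating Malware Protection and other GuardDuty spend programmatically, a hedged boto3 sketch follows; the usage data-source names are assumptions taken from the public GetUsageStatistics API documentation and may differ as the service evolves.

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Approximate usage cost per data source for this detector.
stats = guardduty.get_usage_statistics(
    DetectorId=detector_id,
    UsageStatisticType="SUM_BY_DATA_SOURCE",
    UsageCriteria={"DataSources": [
        "FLOW_LOGS", "CLOUD_TRAIL", "DNS_LOGS", "S3_LOGS", "EC2_MALWARE_SCAN",
    ]},
)
for item in stats["UsageStatistics"]["SumByDataSource"]:
    print(item["DataSource"], item["Total"]["Amount"], item["Total"]["Unit"])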
/guardduty/faqs/;Can I keep the snapshots taken by GuardDuty Malware Protection?;Yes, there is a setting where you can enable snapshot retention when a Malware Protection scan detects malware. You can enable this setting from the GuardDuty console, on the Settings page. By default, snapshots are deleted a few minutes after a scan completes, and after 24 hours if the scan did not complete. /guardduty/faqs/;By default, what is the maximum length of time a replica EBS volume will be retained?;GuardDuty Malware Protection will retain each replica EBS volume it generates and scans for up to 24 hours. By default, replica EBS volumes are deleted a few minutes after GuardDuty Malware Protection completes a scan. In some instances, however, GuardDuty Malware Protection may need to retain a replica EBS volume for longer than 24 hours if a service outage or connection problem interferes with its malware scan. When this occurs, GuardDuty Malware Protection will retain the replica EBS volume for up to seven days to give the service time to triage and address the outage or connection problem. GuardDuty Malware Protection will delete the replica EBS volume after the outage or failure is addressed or once the extended retention period lapses. /guardduty/faqs/;Will multiple GuardDuty findings for a single EC2 instance or container workload that indicate possible malware initiate multiple malware scans?;No, GuardDuty only scans a replica based on the snapshot of EBS volumes attached to the potentially infected EC2 instance or container workload once every 24 hours. Even if GuardDuty generates multiple findings that qualify to initiate a malware scan, it will not initiate additional scans if it has been less than 24 hours since a prior scan. If GuardDuty generates a qualified finding after 24 hours from the last malware scan, GuardDuty Malware Protection will initiate a new malware scan for that workload. /guardduty/faqs/;If I disable GuardDuty, do I also have to disable the Malware Protection feature?;No, disabling the GuardDuty service also disables the Malware Protection feature. /guardduty/faqs/;How does GuardDuty RDS Protection work?;GuardDuty RDS Protection can be turned on with a single action in the GuardDuty console, with no agents to manually deploy, no data sources to activate, and no permissions to configure. Using tailored ML models, GuardDuty RDS Protection begins by analyzing and profiling login attempts to existing and new Amazon Aurora databases. When suspicious behaviors or attempts by known malicious actors are identified, GuardDuty issues actionable security findings to the GuardDuty and Amazon Relational Database Service (RDS) consoles, AWS Security Hub, and Amazon EventBridge, allowing for integration with existing security event management or workflow systems. Learn more about how GuardDuty RDS Protection uses RDS login activity monitoring. /guardduty/faqs/;How can I get started with threat detection for Aurora databases if I am currently using GuardDuty?;For current GuardDuty accounts, the feature can be activated from the GuardDuty console on the RDS Protection page, or through the API. Learn more about GuardDuty RDS Protection. /guardduty/faqs/;If I am a new user to GuardDuty, is threat detection for Aurora databases enabled by default for my accounts?;Yes. Any new accounts that activate GuardDuty through the console or API will also have RDS Protection turned on by default. 
New GuardDuty accounts created using the AWS Organizations auto-enable feature will not have RDS Protection turned on by default unless the auto-enable for RDS option is turned on. /guardduty/faqs/;Can I use GuardDuty RDS Protection without activating the full GuardDuty service (including the analysis of Amazon Virtual Private Cloud (VPC) Flow Logs, DNS query logs, and AWS CloudTrail management events)?;No, the GuardDuty service must be enabled in order to use GuardDuty RDS Protection. /guardduty/faqs/;What Amazon Aurora version(s) does GuardDuty RDS Protection support?;Please see the list of supported Amazon Aurora database versions. /guardduty/faqs/;Will using GuardDuty RDS Protection impact the performance or cost of running Aurora databases?;No, GuardDuty threat detection for Aurora databases is designed to not have performance, availability, or cost implications to your Amazon Aurora databases. /inspector/faqs/;What is Amazon Inspector?;Amazon Inspector is an automated vulnerability management service that continually scans Amazon Elastic Compute Cloud (EC2), AWS Lambda functions, and container workloads for software vulnerabilities and unintended network exposure. /inspector/faqs/;What are the key benefits of Amazon Inspector?;Amazon Inspector removes the operational overhead associated with deploying and configuring a vulnerability management solution by allowing you to deploy Amazon Inspector across all accounts with a single step. Additional benefits include: /inspector/faqs/;How is Amazon Inspector different from Amazon Inspector Classic?;Amazon Inspector has been rearchitected and rebuilt to create a new vulnerability management service. Here are the key enhancements over Amazon Inspector Classic: /inspector/faqs/;Can I use Amazon Inspector and Amazon Inspector Classic simultaneously in the same account?;Yes, you can use both simultaneously in the same account. /inspector/faqs/;How do I migrate from Amazon Inspector Classic to the new Amazon Inspector?;You can deactivate Amazon Inspector Classic by simply deleting all assessment templates in your account. To access findings for existing assessment runs, you can download them as reports or export them using the Amazon Inspector API. You can activate the new Amazon Inspector with a few clicks in the AWS Management Console, or by using the new Amazon Inspector APIs. You can find the detailed migration steps in the Amazon Inspector Classic User Guide. /inspector/faqs/;What is the pricing for Amazon Inspector?;See the Amazon Inspector pricing page for full pricing details. /inspector/faqs/;Is there a free trial for Amazon Inspector?;All accounts new to Amazon Inspector are eligible for a 15-day free trial to evaluate the service and estimate its cost. During the trial, all eligible Amazon EC2 instances, AWS Lambda functions, and container images pushed to Amazon ECR are continually scanned at no cost. You can also review estimated spend in the Amazon Inspector console. /inspector/faqs/;In what Regions is Amazon Inspector available?;Amazon Inspector is available globally. Specific availability by Region is listed here. /inspector/faqs/;How do I get started?;You can activate Amazon Inspector for your entire organization or an individual account with a few clicks in the AWS Management Console. 
Once activated, Amazon Inspector automatically discovers running Amazon EC2 instances, Lambda functions, and Amazon ECR repositories and immediately starts continually scanning workloads for software vulnerabilities and unintended network exposure. If you’re new to Amazon Inspector, there’s a 15-day free trial as well. /inspector/faqs/;What is an Amazon Inspector finding?;An Amazon Inspector finding is a potential security vulnerability. For example, when Amazon Inspector detects software vulnerabilities or open network paths to your compute resources, it creates security findings. /inspector/faqs/;How do I delegate an administrator for the Amazon Inspector service?;The AWS Organizations Management account can assign a DA account for Amazon Inspector in the Amazon Inspector console or by using Amazon Inspector APIs. /inspector/faqs/;Do I have to activate specific scanning types (that is, Amazon EC2 scanning, Lambda functions scanning, or Amazon ECR container image scanning)?;If you’re starting Amazon Inspector for the first time, all scanning types, including EC2 scanning, Lambda scanning, and ECR container image scanning are activated by default. However, you can deactivate any or all of these across all accounts in your organization. Existing users can activate new features in the Amazon Inspector console or by using Amazon Inspector APIs. /inspector/faqs/;Do I need any agents to use Amazon Inspector?;It depends on which resources you’re scanning. AWS Systems Manager Agent (SSM Agent) is required for vulnerability scanning of Amazon EC2 instances. No agents are required for network reachability of Amazon EC2 instances and vulnerability scanning of container images, or for vulnerability scanning of Lambda functions. /inspector/faqs/;;To successfully scan Amazon EC2 instances for software vulnerabilities, Amazon Inspector requires that these instances are managed by AWS Systems Manager and the SSM Agent. See Systems Manager prerequisites in the AWS Systems Manager User Guide for instructions to activate and configure Systems Manager. For information about managed instances, see the Managed Instances section in the AWS Systems Manager User Guide. /inspector/faqs/;Can I exclude some Amazon EC2 instances from scanning?;No. Once Amazon Inspector is activated for Amazon EC2 scanning, all EC2 instances with SSM Agent installed and configured in an account are continually scanned. /inspector/faqs/;How do I know which Amazon ECR repositories are configured for scanning? And how do I manage which repositories should be configured for scanning?;Amazon Inspector supports the configuration of inclusion rules to select which ECR repositories are scanned. Inclusion rules can be created and managed under the registry settings page within the ECR console or using ECR APIs. The ECR repositories that match the inclusion rules are configured for scanning. Detailed scanning status of repositories is available in both the ECR and Amazon Inspector consoles. /inspector/faqs/;How do I know if my resources are being actively scanned?;The Environmental Coverage panel in the Amazon Inspector dashboard shows the metrics for accounts, Amazon EC2 instances, Lambda functions, and ECR repositories being actively scanned by Amazon Inspector. Each instance and image has a scanning status: Scanning or Not Scanning. Scanning means the resource is continually being scanned in near real time. 
A status of Not Scanning could mean the initial scan has not been performed yet, the OS is unsupported, or something else is preventing the scan. /inspector/faqs/;How often are the automated rescans performed?;All scans are automatically performed based on events. All workloads are initially scanned upon discovery and subsequently rescanned. /inspector/faqs/;How long are container images continually rescanned with Amazon Inspector?;Container images residing in Amazon ECR repositories that are configured for continual scanning are scanned for the duration configured in the Inspector console or APIs. Available configurations are Lifetime (by default), 180 days, or 30 days. /inspector/faqs/;Can I exclude my resources from being scanned?;For Amazon EC2 instances: No. Amazon Inspector automatically discovers all EC2 instances within an account and continually scans all instances with the Amazon SSM Agent configured. For container images residing in Amazon ECR: Yes. Although you can select which Amazon ECR repositories are configured for scanning, all images within a repository will be scanned. You can create inclusion rules to select which repositories should be scanned. For Lambda functions: Yes, a Lambda function can be excluded from scanning by adding a resource tag with the key 'InspectorExclusion' and the value 'LambdaStandardScanning'. /inspector/faqs/;How do I use Amazon Inspector to assess my Lambda functions for security vulnerabilities?;In a multi-account structure, you can activate Amazon Inspector for Lambda vulnerability assessments for all your accounts within the AWS Organization from the Amazon Inspector console or APIs through the Delegated Administrator (DA) account, while other member accounts can activate Amazon Inspector for their own account if the central security team hasn’t already activated it for them. Accounts that are not a part of the AWS Organization can activate Amazon Inspector for their individual account through the Amazon Inspector console or APIs. /inspector/faqs/;If a Lambda function has multiple versions, which version will Amazon Inspector assess?;Amazon Inspector will continually monitor and assess only the $LATEST version. Automated rescans will continue only for the latest version, so new findings will be generated only for the latest version. In the console, you will be able to see the findings from any version by selecting the version from the dropdown. /inspector/faqs/;How does changing the SSM inventory collection frequency from the default 30 minutes to 12 hours impact the continual scanning by Amazon Inspector?;Changing the default SSM inventory collection frequency can have an impact on the continual nature of scanning. Amazon Inspector relies on SSM Agent to collect the application inventory to generate findings. If the application inventory collection interval is increased from the default of 30 minutes, that will delay the detection of changes to the application inventory, and new findings might be delayed. /inspector/faqs/;What is an Inspector risk score?;The Inspector risk score is a highly contextualized score that is generated for each finding by correlating common vulnerabilities and exposures (CVE) information with network reachability results, exploitability data, and social media trends. This makes it easier for you to prioritize findings and focus on the most critical findings and vulnerable resources. 
You can see how the Inspector risk score was calculated and which factors influenced the score in the Inspector Score tab within the Findings Details side panel. /inspector/faqs/;How do suppression rules work?;Amazon Inspector allows you to suppress findings based on the customized criteria you define. You can create suppression rules for findings that are considered acceptable by your organization. /inspector/faqs/;How can I export my findings, and what do they include?;You can generate reports in multiple formats (CSV or JSON) with a few clicks in the Amazon Inspector console or through the Amazon Inspector APIs. You can download a full report with all findings, or generate and download a customized report based on the view filters set in the console. /inspector/faqs/;Which operating systems does Amazon Inspector support?;You can find the list of operating systems (OS) supported here. /inspector/faqs/;Which programming language packages does Amazon Inspector support for container image scanning?;You can find the list of programming language packages supported here. /inspector/faqs/;Will Amazon Inspector work with instances that use Network Address Translation (NAT)?;Yes. Instances that use NAT are automatically supported by Amazon Inspector. /inspector/faqs/;I use a proxy for my instances. Will Amazon Inspector work with these instances?;Yes. See how to configure SSM Agent to use a proxy for more information. /inspector/faqs/;Can Amazon Inspector be integrated with other AWS services for logging and notifications?;Amazon Inspector integrates with Amazon EventBridge to provide notification for events such as a new finding, change of state of a finding, or creation of a suppression rule. Amazon Inspector also integrates with AWS CloudTrail for call logging. /inspector/faqs/;Does Amazon Inspector offer “CIS Operating System Security Configuration Benchmarks” scans?;No. While Amazon Inspector does not currently support CIS scans, this capability will be added in the future. However, you can continue to use the CIS scan rules package offered in Amazon Inspector Classic. /inspector/faqs/;Does Amazon Inspector work with AWS Partner solutions?;Yes. See Amazon Inspector Partners for more information. /inspector/faqs/;Can I deactivate Amazon Inspector?;Yes. You can deactivate all scanning types (Amazon EC2 scanning, Amazon ECR container image scanning, and Lambda function scanning) by deactivating the Amazon Inspector service, or you can deactivate each scanning type individually for an account. /inspector/faqs/;Can I suspend Amazon Inspector?;No. Amazon Inspector does not support a suspended state. /certificate-manager/faqs/;What is an SSL/TLS certificate?;SSL/TLS certificates allow web browsers to identify and establish encrypted network connections to web sites using the Secure Sockets Layer/Transport Layer Security (SSL/TLS) protocol. Certificates are used within a cryptographic system known as a public key infrastructure (PKI). PKI provides a way for one party to establish the identity of another party using certificates if they both trust a third party known as a certificate authority. You can visit the Concepts topic in the ACM User Guide for additional information and definitions. /certificate-manager/faqs/;What are private certificates?;Private certificates identify resources within an organization, such as applications, services, devices, and users. 
In establishing a secure encrypted communications channel, each endpoint uses a certificate and cryptographic techniques to prove its identity to the other endpoint. Internal API endpoints, web servers, VPN users, IoT devices, and many other applications use private certificates to establish encrypted communication channels that are necessary for their secure operation. /certificate-manager/faqs/;What is the difference between public and private certificates?;Both public and private certificates help customers identify resources on networks and secure communication between these resources. Public certificates identify resources on the public Internet, whereas private certificates do the same for private networks. One key difference is that applications and browsers trust public certificates automatically by default, whereas an administrator must explicitly configure applications to trust private certificates. Public CAs, the entities that issue public certificates, must follow strict rules, provide operational visibility, and meet security standards imposed by the browser and operating system vendors that decide which CAs their browsers and operating systems trust automatically. Private CAs are managed by private organizations, and private CA administrators can make their own rules for issuing private certificates, including practices for issuing certificates and what information a certificate can include. /certificate-manager/faqs/;What are the benefits of using AWS Certificate Manager (ACM)?;ACM makes it easier to enable SSL/TLS for a website or application on the AWS platform. ACM eliminates many of the manual processes previously associated with using and managing SSL/TLS certificates. ACM can also help you avoid downtime due to misconfigured, revoked, or expired certificates by managing renewals. You get SSL/TLS protection and easy certificate management. Enabling SSL/TLS for Internet-facing sites can help improve the search rankings for your site and help you meet regulatory compliance requirements for encrypting data in transit. /certificate-manager/faqs/;What types of certificates can I manage with ACM?;ACM enables you to manage the lifecycle of your public and private certificates. ACM’s capabilities depend on whether the certificate is public or private, how you obtain the certificate, and where you deploy it. /certificate-manager/faqs/;How can I get started with ACM?;To get started with ACM, navigate to Certificate Manager in the AWS Management Console and use the wizard to request an SSL/TLS certificate. If you have already created a Private CA, you can choose whether you want a public or private certificate, and then enter the name of your site. You can also request a certificate using the AWS CLI or API. After the certificate is issued, you can use it with other AWS services that are integrated with ACM. For each integrated service, you simply select the SSL/TLS certificate you want from a drop-down list in the AWS Management Console. Alternatively, you can execute an AWS CLI command or call an AWS API to associate the certificate with your resource. The integrated service then deploys the certificate to the resource you selected. For more information about requesting and using certificates provided by ACM, learn more in the ACM User Guide. In addition to using private certificates with ACM-integrated services, you can also export private certificates for use on EC2 instances, on ECS containers, or anywhere. 
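As an illustration of requesting a public certificate outside the console, here is a minimal boto3 sketch using DNS validation; example.com is a placeholder domain, and the validation CNAME may take a short time to appear in the describe_certificate response.

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Request a public certificate covering the apex and the www name, validated via DNS.
arn = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["www.example.com"],
    ValidationMethod="DNS",
)["CertificateArn"]

# ACM returns the CNAME record you must publish to prove control of the domain.
cert = acm.describe_certificate(CertificateArn=arn)["Certificate"]
for option in cert["DomainValidationOptions"]:
    record = option.get("ResourceRecord", {})
    print(option["DomainName"], record.get("Name"), record.get("Value"))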
/certificate-manager/faqs/;With which AWS services can I use ACM certificates?;You can use public and private ACM certificates with the following AWS services: • Elastic Load Balancing – Refer to the Elastic Load Balancing documentation • Amazon CloudFront – Refer to the CloudFront documentation • Amazon API Gateway – Refer to the API Gateway documentation • AWS CloudFormation – Support is currently limited to ACM-issued public and private certificates. Refer to the AWS CloudFormation documentation • AWS Elastic Beanstalk – Refer to the AWS Elastic Beanstalk documentation • AWS Nitro Enclaves – Refer to the AWS Nitro Enclaves documentation /certificate-manager/faqs/;In what Regions is ACM available?;Please visit the AWS Global Infrastructure pages to see the current Region availability for AWS services. To use an ACM certificate with Amazon CloudFront, you must request or import the certificate in the US East (N. Virginia) Region. ACM certificates in this Region that are associated with a CloudFront distribution are distributed to all the geographic locations configured for that distribution. /certificate-manager/faqs/;What types of certificates does ACM manage?;ACM manages public, private, and imported certificates. Learn more about ACM's capabilities in the Issuing and Managing Certificates documentation. /certificate-manager/faqs/;Can ACM provide certificates with multiple domain names?;Yes. Each certificate must include at least one domain name, and you can add additional names to the certificate if you want to. For example, you can add the name “www.example.net” to a certificate for “www.example.com” if users can reach your site by either name. You must own or control all of the names included in your certificate request. /certificate-manager/faqs/;What is a wildcard domain name?;A wildcard domain name matches any first level subdomain or hostname in a domain. A first-level subdomain is a single domain name label that does not contain a period (dot). For example, you can use the name *.example.com to protect www.example.com, images.example.com, and any other host name or first-level subdomain that ends with .example.com. Learn more in the ACM User Guide. /certificate-manager/faqs/;Can ACM provide certificates with wildcard domain names?;Yes. /certificate-manager/faqs/;Does ACM provide certificates outside of SSL/TLS?;No. /certificate-manager/faqs/;Can I use ACM certificates for code signing or email encryption?;No. /certificate-manager/faqs/;Does ACM provide certificates used to sign and encrypt email (S/MIME certificates)?;No. /certificate-manager/faqs/;What is the validity period for ACM certificates?;Certificates issued through ACM are valid for 13 months (395 days). If you issue private certificates directly from a private CA and manage the keys and certificates without using ACM for certificate management, you can choose any validity period, including an absolute end date or a relative time that is days, months, or years from the present time. /certificate-manager/faqs/;What algorithms do ACM-issued certificates use?;By default, certificates issued in ACM use RSA keys with a 2048-bit modulus and SHA-256. Additionally, you can request Elliptic Curve Digital Signature Algorithm (ECDSA) certificates with either P-256 or P-384. Learn more about algorithms in the ACM User Guide. /certificate-manager/faqs/;How do I revoke a certificate?;You can request ACM to revoke a public certificate by visiting the AWS Support Center and creating a case. 
To revoke a private certificate issued by your AWS Private CA, refer to the AWS Private CA User Guide. /certificate-manager/faqs/;Can I use the same ACM certificate in more than one AWS Region?;No. ACM certificates must be in the same Region as the resource where they are being used. The only exception is Amazon CloudFront, a global service that requires certificates in the US East (N. Virginia) Region. ACM certificates in this Region that are associated with a CloudFront distribution are distributed to all the geographic locations configured for that distribution. /certificate-manager/faqs/;Can I provision a certificate with ACM if I already have a certificate from another provider for the same domain name?;Yes. /certificate-manager/faqs/;Can I use certificates on Amazon EC2 instances or on my own servers?;You can use private certificates issued with Private CA with EC2 instances, containers, and on your own servers. At this time, public ACM certificates can be used only with specific AWS services, including AWS Nitro Enclaves. See ACM service integrations. /certificate-manager/faqs/;Does ACM allow local language characters in domain names, otherwise known as Internationalized Domain Names (IDNs)?;"ACM does not allow Unicode encoded local language characters; however, ACM allows ASCII-encoded local language characters for domain names." /certificate-manager/faqs/;Which domain name label formats does ACM allow?;ACM allows only UTF-8 encoded ASCII, including labels containing “xn--”, commonly known as Punycode for domain names. ACM does not accept Unicode input (u-labels) for domain names. /certificate-manager/faqs/;Can I import a third-party certificate and use it with AWS services?;Yes. If you want to use a third-party certificate with Amazon CloudFront, Elastic Load Balancing, or Amazon API Gateway, you may import it into ACM using the AWS Management Console, AWS CLI, or ACM APIs. ACM does not manage the renewal process for imported certificates. You can use the AWS Management Console to monitor the expiration dates of imported certificates and import a new third-party certificate to replace an expiring one. /certificate-manager/faqs/;What are public certificates?;Both public and private certificates help customers identify resources on networks and secure communication between these resources. Public certificates identify resources on the Internet. /certificate-manager/faqs/;What type of public certificates does ACM provide?;ACM provides Domain Validated (DV) public certificates for use with websites and applications that terminate SSL/TLS. For more details about ACM certificates, see Certificate Characteristics. /certificate-manager/faqs/;Are ACM public certificates trusted by browsers, operating systems, and mobile devices?;ACM public certificates are trusted by most modern browsers, operating systems, and mobile devices. ACM-provided certificates have 99% browser and operating system ubiquity, including Windows XP SP3 and Java 6 and later. /certificate-manager/faqs/;How can I confirm that my browser trusts ACM public certificates?;Browsers that trust ACM certificates display a lock icon and do not issue certificate warnings when connected to sites that use ACM certificates over SSL/TLS, for example using HTTPS. /certificate-manager/faqs/;Does ACM provide public Organizational Validation (OV) or Extended Validation (EV) certificates?;No. 
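A minimal boto3 sketch of the third-party certificate import described above; the PEM file names are placeholders, and renewal of imported certificates remains your responsibility.

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Import a third-party certificate so it can be used with CloudFront, Elastic
# Load Balancing, or API Gateway. File names below are placeholders.
with open("certificate.pem", "rb") as cert, \
     open("private-key.pem", "rb") as key, \
     open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )
print(response["CertificateArn"])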
/certificate-manager/faqs/;Where does Amazon describe its policies and practices for issuing public certificates?;They are described in the Amazon Trust Services Certificate Policies and Amazon Trust Services Certification Practices Statement documents. Refer to the Amazon Trust Services repository for the latest versions. /certificate-manager/faqs/;Will a certificate for www.example.com also work for example.com?;No. If you want your site to be referenced by both domain names (www.example.com and example.com), you must request a certificate that includes both names. /certificate-manager/faqs/;How can ACM help my organization meet my compliance requirements?;Using ACM helps you comply with regulatory requirements by making it easy to facilitate secure connections, a common requirement across many compliance programs such as PCI, FedRAMP, and HIPAA. For specific information about compliance, please refer to http://aws.amazon.com/compliance. /certificate-manager/faqs/;Does ACM have a service level agreement (SLA)?;No, ACM does not have an SLA. /certificate-manager/faqs/;Does ACM provide a secure site seal or trust logo that I can display on my web site?;No. If you would like to use a site seal, you can obtain one from a third-party vendor. We recommend choosing a vendor that evaluates and asserts the security of your site, or your business practices, or both. /certificate-manager/faqs/;Does Amazon allow its trademarks or logo to be used as a certificate badge, site seal, or trust logo?;No. Seals and badges of this type can be copied to sites that do not use the ACM service, and used inappropriately to establish trust under false pretenses. To protect our customers and the reputation of Amazon, we do not allow our logo to be used in this manner. /certificate-manager/faqs/;How can I provision a public certificate from ACM?;You can use the AWS Management Console, AWS CLI, or ACM APIs/SDKs. To use the AWS Management Console, navigate to the Certificate Manager, choose Request a certificate, select Request a public certificate, enter the domain name for your site, and follow the instructions on the screen to complete your request. You can add additional domain names to your request if users can reach your site by other names. Before ACM can issue a certificate, it validates that you own or control the domain names in your certificate request. You can choose DNS validation or email validation when requesting a certificate. With DNS validation, you write a record to the public DNS configuration for your domain to establish that you own or control the domain. After you use DNS validation once to establish control of your domain, you can obtain additional certificates and have ACM renew existing certificates for the domain as long as the record remains in place and the certificate remains in use. You do not have to validate control of the domain again. If you choose email validation instead of DNS validation, emails are sent to the domain owner requesting approval to issue the certificate. After validating that you own or control each domain name in your request, the certificate is issued and ready to be provisioned with other AWS services, such as Elastic Load Balancing or Amazon CloudFront. Refer to the ACM Documentation for details. /certificate-manager/faqs/;Why does ACM validate domain ownership for public certificates?;Certificates are used to establish the identity of your site and secure connections between browsers and applications and your site.
To issue a publicly trusted certificate, Amazon must validate that the certificate requester has control over the domain name in the certificate request. /certificate-manager/faqs/;How does ACM validate domain ownership before issuing a public certificate for a domain?;Prior to issuing a certificate, ACM validates that you own or control the domain names in your certificate request. You can choose DNS validation or email validation when requesting a certificate. With DNS validation, you can validate domain ownership by adding a CNAME record to your DNS configuration. Refer to DNS validation for further details. If you do not have the ability to write records to the public DNS configuration for your domain, you can use email validation instead of DNS validation. With email validation, ACM sends emails to the registered domain owner, and the owner or an authorized representative can approve issuance for each domain name in the certificate request. Refer to Email validation for further details. /certificate-manager/faqs/;Which validation method should I use for my public certificate: DNS or email?;We recommend that you use DNS validation if you have the ability to change the DNS configuration for your domain. Customers who are unable to receive validation emails from ACM and those using a domain registrar that does not publish domain owner email contact information in WHOIS should use DNS validation. If you cannot modify your DNS configuration, you should use email validation. /certificate-manager/faqs/;Can I convert an existing public certificate from email validation to DNS validation?;No, but you can request a new, free certificate from ACM and choose DNS validation for the new one. /certificate-manager/faqs/;How long does it take for a public certificate to be issued?;The time to issue a certificate after all of the domain names in a certificate request have been validated may be several hours or longer. /certificate-manager/faqs/;What happens when I request a public certificate?;ACM attempts to validate ownership or control of each domain name in your certificate request, according to the validation method you chose, DNS or email, when making the request. The status of the certificate request is Pending validation while ACM attempts to validate that you own or control the domain. Refer to the DNS validation and Email validation sections below for more information about the validation process. After all of the domain names in the certificate request are validated, the time to issue certificates may be several hours or longer. When the certificate is issued, the status of the certificate request changes to Issued and you can start using it with other AWS services that are integrated with ACM. /certificate-manager/faqs/;Does ACM check DNS Certificate Authority Authorization (CAA) records before issuing public certificates?;Yes. DNS Certificate Authority Authorization (CAA) records allow domain owners to specify which certificate authorities are authorized to issue certificates for their domain. When you request an ACM Certificate, AWS Certificate Manager looks for a CAA record in the DNS zone configuration for your domain. If a CAA record is not present, then Amazon can issue a certificate for your domain. Most customers fall into this category. /certificate-manager/faqs/;Does ACM support any other methods for validating a domain?;Not at this time. /certificate-manager/faqs/;What is DNS validation?;With DNS validation, you can validate your ownership of a domain by adding a CNAME record to your DNS configuration.
DNS validation makes it easy for you to establish that you own a domain when requesting public SSL/TLS certificates from ACM. /certificate-manager/faqs/;What are the benefits of DNS validation?;DNS validation makes it easy to validate that you own or control a domain so that you can obtain an SSL/TLS certificate. With DNS validation, you simply write a CNAME record to your DNS configuration to establish control of your domain name. To simplify the DNS validation process, the ACM management console can configure DNS records for you if you manage your DNS records with Amazon Route 53. This makes it easy to establish control of your domain name with a few mouse clicks. Once the CNAME record is configured, ACM automatically renews certificates that are in use (associated with other AWS resources) as long as the DNS validation record remains in place. Renewals are fully automatic and touchless. /certificate-manager/faqs/;Who should use DNS validation?;Anyone who requests a certificate through ACM and has the ability to change the DNS configuration for the domain they are requesting should consider using DNS validation. /certificate-manager/faqs/;Does ACM still support email validation?;Yes. ACM continues to support email validation for customers who can’t change their DNS configuration. /certificate-manager/faqs/;What records do I need to add to my DNS configuration to validate a domain?;You must add a CNAME record for the domain you want to validate. For example, to validate the name www.example.com, you add a CNAME record to the zone for example.com. The record you add contains a unique token that ACM generates specifically for your domain and your AWS account. You can obtain the two parts of the CNAME record (name and label) from ACM. For further instructions, refer to the ACM User Guide. /certificate-manager/faqs/;How can I add or modify DNS records for my domain?;For more information about how to add or modify DNS records, check with your DNS provider. The Amazon Route 53 DNS documentation provides further information for customers who use Amazon Route 53 DNS. /certificate-manager/faqs/;Can ACM simplify DNS validation for Amazon Route 53 DNS customers?;Yes. For customers who are using Amazon Route 53 DNS to manage DNS records, the ACM console can add records to your DNS configuration for you when you request a certificate. Your Route 53 DNS hosted zone for your domain must be configured in the same AWS account as the one you are making the request from, and you must have sufficient permissions to make a change to your Amazon Route 53 configuration. For further instructions, refer to the ACM User Guide. /certificate-manager/faqs/;Does DNS validation require me to use a specific DNS provider?;No. You can use DNS validation with any DNS provider as long as the provider allows you to add a CNAME record to your DNS configuration. /certificate-manager/faqs/;How many DNS records do I need if I want more than one certificate for the same domain?;One. You can obtain multiple certificates for the same domain name in the same AWS account using one CNAME record. For example, if you make 2 certificate requests from the same AWS account for the same domain name, you need only 1 DNS CNAME record. /certificate-manager/faqs/;Can I validate multiple domain names with the same CNAME record?;No. Each domain name must have a unique CNAME record. /certificate-manager/faqs/;Can I validate a wildcard domain name using DNS validation?;Yes.
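The provisioning and DNS validation FAQs above describe requesting a public certificate and obtaining the CNAME record that ACM asks you to create. A minimal boto3 sketch of that flow is below; the domain names and Region are placeholders, and the ResourceRecord may take a short time to appear after the request.

```python
# Hedged sketch: request a DNS-validated public certificate and read back the
# validation CNAME that ACM generates for each domain name in the request.
import boto3

acm = boto3.client("acm", region_name="us-east-1")

arn = acm.request_certificate(
    DomainName="www.example.com",                 # placeholder domain
    SubjectAlternativeNames=["example.com"],      # optional additional names
    ValidationMethod="DNS",
)["CertificateArn"]

# The CNAME name/value pair appears under DomainValidationOptions once ACM
# has generated it; poll or re-run shortly after the request if it is missing.
details = acm.describe_certificate(CertificateArn=arn)["Certificate"]
for option in details.get("DomainValidationOptions", []):
    record = option.get("ResourceRecord")
    if record:
        print(option["DomainName"], record["Name"], record["Type"], record["Value"])
```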
/certificate-manager/faqs/;How does ACM construct CNAME records?;DNS CNAME records have two components: a name and a label. The name component of an ACM-generated CNAME is constructed from an underscore character (_) followed by a token, which is a unique string that is tied to your AWS account and your domain name. ACM pre-pends the underscore and token to your domain name to construct the name component. ACM constructs the label from an underscore character pre-pended to a different token which is also tied to your AWS account and your domain name. ACM pre-pends the underscore and token to a DNS domain name used by AWS for validations: acm-validations.aws. The following examples show the formatting of CNAMEs for www.example.com, subdomain.example.com, and *.example.com. /certificate-manager/faqs/;Can I validate all subdomains of a domain using one CNAME record?;No. Each domain name, including host names and subdomain names, must be validated separately, each with a unique CNAME record. /certificate-manager/faqs/;Why does ACM use CNAME records for DNS validation instead of TXT records?;Using a CNAME record allows ACM to renew certificates for as long as the CNAME record exists. The CNAME record directs to a TXT record in an AWS domain (acm-validations.aws) that ACM can update as needed to validate or re-validate a domain name, without any action from you. /certificate-manager/faqs/;Does DNS validation work across AWS Regions?;Yes. You can create one DNS CNAME record and use it to obtain certificates in the same AWS account in any AWS Region where ACM is offered. Configure the CNAME record once and you can get certificates issued and renewed from ACM for that name without creating another record. /certificate-manager/faqs/;Can I choose different validation methods in the same certificate?;No. Each certificate can have only one validation method. /certificate-manager/faqs/;How do I renew a certificate validated with DNS validation?;ACM automatically renews certificates that are in use (associated with other AWS resources) as long as the DNS validation record remains in place. /certificate-manager/faqs/;Can I revoke permission to issue certificates for my domain?;Yes. Simply remove the CNAME record. ACM does not issue or renew certificates for your domain using DNS validation after you remove the CNAME record and the change is distributed through DNS. The propagation time to remove the record depends on your DNS provider. /certificate-manager/faqs/;What happens if I remove the CNAME record?;ACM cannot issue or renew certificates for your domain using DNS validation if you remove the CNAME record. /certificate-manager/faqs/;What is email validation?;With email validation, an approval request email is sent to the registered domain owner for each domain name in the certificate request. The domain owner or an authorized representative (approver) can approve the certificate request by following the instructions in the email. The instructions direct the approver to navigate to the approval website and click the link in the email or paste the link from the email into a browser to navigate to the approval web site. The approver confirms the information associated with the certificate request, such as the domain name, certificate ID (ARN), and the AWS account ID initiating the request, and approves the request if the information is accurate.
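The CNAME-construction and Route 53 FAQs above explain that the validation record is an underscore-prefixed token pointing at acm-validations.aws, and that Route 53 customers can have the record created for them. A hedged sketch of creating that record yourself via the Route 53 API is below; the hosted zone ID, record name, and record value are placeholders standing in for the values returned by DescribeCertificate.

```python
# Hedged sketch: UPSERT the ACM validation CNAME into a Route 53 hosted zone.
# Z0000000EXAMPLE and the _token... names are illustrative placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "ACM DNS validation record",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_exampletoken.www.example.com.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "_exampletoken.acm-validations.aws."}],
            },
        }],
    },
)
```

Leaving this record in place is what lets ACM renew the certificate automatically, as the renewal FAQ above notes.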
/certificate-manager/faqs/;When I request a certificate and choose email validation, to which email addresses is the certificate approval request sent?;"When you request a certificate using email validation, a WHOIS lookup for each domain name in the certificate request is used to retrieve contact information for the domain. Email is sent to the domain registrant, administrative contact, and technical contact listed for the domain. Email is also sent to five special email addresses, which are formed by prepending admin@, administrator@, hostmaster@, webmaster@ and postmaster@ to the domain name you’re requesting. For example, if you request a certificate for server.example.com, email is sent to the domain registrant, technical contact, and administrative contact using contact information returned by a WHOIS query for the example.com domain, plus admin@server.example.com, administrator@server.example.com, hostmaster@server.example.com, postmaster@server.example.com, and webmaster@server.example.com. The five special email addresses are constructed differently for domain names that begin with ""www"" or wildcard names beginning with an asterisk (*). ACM removes the leading ""www"" or asterisk and email is sent to the administrative addresses formed by pre-pending admin@, administrator@, hostmaster@, postmaster@, and webmaster@ to the remaining portion of the domain name. For example, if you request a certificate for www.example.com, email is sent to the WHOIS contacts, as described previously, plus admin@example.com rather than admin@www.example.com. The remaining four special email addresses are similarly formed." /certificate-manager/faqs/;Can I configure the email addresses to which the certificate approval request is sent?;No, but you can configure the base domain name to which you want the validation email to be sent. The base domain name must be a superdomain of the domain name in the certificate request. For example, if you want to request a certificate for server.domain.example.com but want to direct the approval email to admin@domain.example.com, you can do so using the AWS CLI or API. See ACM CLI Reference and ACM API Reference for further details. /certificate-manager/faqs/;Can I use domains that have proxy contact information (such as Privacy Guard or WhoisGuard)?;"Yes; however, email delivery may be delayed as a result of the proxy. Email sent through a proxy may end up in your spam folder. Refer to the ACM User Guide for troubleshooting suggestions." /certificate-manager/faqs/;Can ACM validate my identity using the technical contact for my AWS account?;No. Procedures and policies for validating the domain owner’s identity are very strict, and determined by the CA/Browser Forum which sets policy standards for publicly trusted certificate authorities. To learn more, please refer to the latest Amazon Trust Services Certification Practices Statement in the Amazon Trust Services Repository. /certificate-manager/faqs/;What should I do if I did not receive the approval email?;Refer to the ACM User Guide for troubleshooting suggestions. /certificate-manager/faqs/;How are the private keys of ACM-provided certificates managed?;A key pair is created for each certificate provided by ACM. ACM is designed to protect and manage the private keys used with SSL/TLS certificates. Strong encryption and key management best practices are used when protecting and storing private keys. /certificate-manager/faqs/;Does ACM copy certificates across AWS Regions?;No. 
The private key of each ACM certificate is stored in the Region in which you request the certificate. For example, when you obtain a new certificate in the US East (N. Virginia) Region, ACM stores the private key in the N. Virginia Region. ACM certificates are only copied across Regions if the certificate is associated with a CloudFront distribution. In that case, CloudFront distributes the ACM certificate to the geographic locations configured for your distribution. /certificate-manager/faqs/;What is ACM managed renewal and deployment?;ACM managed renewal and deployment manages the process of renewing SSL/TLS ACM certificates and deploying certificates after they are renewed. /certificate-manager/faqs/;What are the benefits of using ACM managed renewal and deployment?;ACM can manage renewal and deployment of SSL/TLS certificates for you. ACM makes configuring and maintaining SSL/TLS for a secure web service or application more operationally sound than potentially error-prone manual processes. Managed renewal and deployment can help you avoid downtime due to expired certificates. ACM operates as a service that is integrated with other AWS services. This means you can centrally manage and deploy certificates on the AWS platform by using the AWS management console, AWS CLI, or APIs. With Private CA, you can create private certificates and you can export them. ACM renews exported certificates, allowing your client side automation code to download and deploy them. /certificate-manager/faqs/;Which ACM certificates can be renewed and deployed automatically?;Public Certificates /certificate-manager/faqs/;Will I be notified before my certificate is renewed and the new certificate is deployed?;No. ACM may renew or rekey the certificate and replace the old one without prior notice. /certificate-manager/faqs/;Can ACM renew public certificates containing bare domains, such as “example.com” (also known as zone apex or naked domains)?;If you chose DNS validation in your certificate request for a public certificate, then ACM can renew your certificate without any further action from you, as long as the certificate is in use (associated with other AWS resources) and your CNAME record remains in place. /certificate-manager/faqs/;Does my site drop existing connections when ACM deploys the renewed certificate?;No, connections established after the new certificate is deployed use the new certificate, and existing connections are not affected. /certificate-manager/faqs/;Can I use the same certificate with multiple Elastic Load Balancing load balancers and multiple CloudFront distributions?;Yes. /certificate-manager/faqs/;Can I use public certificates for internal Elastic Load Balancing load balancers with no public internet access?;Yes, but you can also consider using AWS Private CA to issue private certificates that ACM can renew without validation. See Managed Renewal and Deployment for details about how ACM handles renewals for public certificates that are not reachable from the Internet and private certificates. /certificate-manager/faqs/;Can I audit the use of certificate private keys?;Yes. Using AWS CloudTrail you can review logs that tell you when the private key for the certificate was used. /certificate-manager/faqs/;What logging information is available from AWS CloudTrail?;You can identify which users and accounts called AWS APIs for services that support AWS CloudTrail, the source IP address the calls were made from, and when the calls occurred.
For example, you can identify which user made an API call to associate a certificate provided by ACM with an Elastic Load Balancer and when the Elastic Load Balancing service decrypted the key with a KMS API call. /certificate-manager/faqs/;How will I be charged and billed for my use of ACM certificates?;"Public and private certificates provisioned through AWS Certificate Manager for use with ACM-integrated services, such as Elastic Load Balancing, Amazon CloudFront, and Amazon API Gateway, are free. You pay for the AWS resources you create to run your application. AWS Private CA has pay-as-you-go pricing; visit the AWS Private CA Pricing page for more details and examples." /certificate-manager/faqs/;Where can I find information about AWS Private CA?;Please see the AWS Private CA FAQs for questions about using AWS Private CA. /cloudhsm/faqs/;What is AWS CloudHSM?;The AWS CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud. AWS and AWS Marketplace partners offer a variety of solutions for protecting sensitive data within the AWS platform, but for some applications and data subject to contractual or regulatory mandates for managing cryptographic keys, additional protection may be necessary. CloudHSM complements existing data protection solutions and allows you to protect your encryption keys within HSMs that are designed and validated to government standards for secure key management. CloudHSM allows you to securely generate, store, and manage cryptographic keys used for data encryption in a way that keys are accessible only by you. /cloudhsm/faqs/;What is a Hardware Security Module (HSM)?;A Hardware Security Module (HSM) provides secure key storage and cryptographic operations within a tamper-resistant hardware device. HSMs are designed to securely store cryptographic key material and use the key material without exposing it outside the cryptographic boundary of the hardware. /cloudhsm/faqs/;What can I do with CloudHSM?;You can use the CloudHSM service to support a variety of use cases and applications, such as database encryption, Digital Rights Management (DRM), Public Key Infrastructure (PKI), authentication and authorization, document signing, and transaction processing. /cloudhsm/faqs/;How does CloudHSM work?;When you use the AWS CloudHSM service, you create a CloudHSM Cluster. Clusters can contain multiple HSMs, spread across multiple Availability Zones in a region. HSMs in a cluster are automatically synchronized and load-balanced. You receive dedicated, single-tenant access to each HSM in your cluster. Each HSM appears as a network resource in your Amazon Virtual Private Cloud (VPC). Adding and removing HSMs from your Cluster is a single call to the AWS CloudHSM API (or on the command line using the AWS CLI). After creating and initializing a CloudHSM Cluster, you can configure a client on your EC2 instance that allows your applications to use the cluster over a secure, authenticated network connection. /cloudhsm/faqs/;I don’t currently have a VPC. Can I still use AWS CloudHSM?;No. To protect and isolate your AWS CloudHSM from other Amazon customers, CloudHSM must be provisioned inside an Amazon VPC. Creating a VPC is easy. Please see the VPC Getting Started Guide for more information.
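The "How does CloudHSM work?" FAQ above notes that adding and removing HSMs is a single API call. The hedged boto3 sketch below creates a cluster and adds one HSM; the subnet IDs, HSM type, and Availability Zone are placeholders, and the subnets must belong to the VPC where you want the cluster to live.

```python
# Hedged sketch: create a CloudHSM cluster and add an HSM to it.
import boto3

cloudhsm = boto3.client("cloudhsmv2")

cluster = cloudhsm.create_cluster(
    HsmType="hsm1.medium",                                   # placeholder type
    SubnetIds=["subnet-0123456789abcdef0",                   # placeholder subnets
               "subnet-0fedcba9876543210"],
)["Cluster"]

# Adding an HSM (or removing one with delete_hsm) is a single call.
hsm = cloudhsm.create_hsm(
    ClusterId=cluster["ClusterId"],
    AvailabilityZone="us-east-1a",                            # placeholder AZ
)["Hsm"]

print(cluster["ClusterId"], hsm["HsmId"])
```

After the cluster is initialized, the CloudHSM Client on your EC2 instance handles the secure, authenticated connection described in the FAQ; the API calls here only manage the cluster itself.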
/cloudhsm/faqs/;Does my application need to reside in the same VPC as the CloudHSM Cluster?;No, but the server or instance on which your application and the HSM client are running must have network (IP) reachability to all HSMs in the cluster. You can establish network connectivity from your application to the HSM in many ways, including operating your application in the same VPC, with VPC peering, with a VPN connection, or with Direct Connect. Please see the VPC Peering Guide and VPC User Guide for more details. /cloudhsm/faqs/;Does CloudHSM work with on-premises HSMs?;Yes. While CloudHSM does not interoperate directly with on-premises HSMs, you can securely transfer exportable keys between CloudHSM and most commercial HSMs using one of several supported RSA key wrap methods. /cloudhsm/faqs/;How can my application use CloudHSM?;We have integrated and tested CloudHSM with a number of third-party software solutions such as Oracle Database 11g and 12c and Web servers including Apache and Nginx for SSL offload. Please see the CloudHSM User Guide for more information. /cloudhsm/faqs/;Can I use CloudHSM to store keys or encrypt data used by other AWS services?;Yes. You can do all encryption in your CloudHSM-integrated application. In this case, AWS services such as Amazon S3 or Amazon Elastic Block Store (EBS) would only see your data encrypted. /cloudhsm/faqs/;Can other AWS services use CloudHSM to store and manage keys?;AWS services integrate with AWS Key Management Service, which in turn is integrated with AWS CloudHSM through the KMS custom key store feature. If you want to use the server-side encryption offered by many AWS services (such as EBS, S3, or Amazon RDS), you can do so by configuring a custom key store in AWS KMS. /cloudhsm/faqs/;How do I get started with CloudHSM?;You can provision a CloudHSM Cluster in the CloudHSM Console, or with a few API calls through the AWS SDK or API. To learn more, please see the CloudHSM User Guide for information about getting started, the CloudHSM Documentation for information about the CloudHSM API, or the Tools for Amazon Web Services page for more information about the SDK. /cloudhsm/faqs/;How do I terminate CloudHSM service?;You can use the CloudHSM console, API, or SDK to delete your HSMs and stop using the service. Please refer to the CloudHSM User Guide for further instructions. /cloudhsm/faqs/;How will I be charged and billed for my use of the AWS CloudHSM service?;You will be charged an hourly fee for each hour (or partial hour) that an HSM is provisioned to a CloudHSM Cluster. A cluster with no HSMs in it is not billed, nor are you billed for our automatic storage of encrypted backups. For more information, please visit the CloudHSM pricing page. Note that network data transfers to and from your HSMs are charged separately. For more information please review data transfer pricing for EC2. /cloudhsm/faqs/;Is there a Free Tier for the CloudHSM service?;No, there is no free tier available for CloudHSM. /cloudhsm/faqs/;Do charges vary depending on how many users or keys I create on my HSM?;No, the hourly fee, which varies by region, does not depend on how much you use your HSM. /cloudhsm/faqs/;Do you offer reserved instance pricing for CloudHSM?;No, we do not offer reserved instance pricing for CloudHSM. /cloudhsm/faqs/;Are there any prerequisites for using CloudHSM?;Yes. In order to start using CloudHSM there are a few prerequisites, including a Virtual Private Cloud (VPC) in the region where you want CloudHSM service. 
Refer to the CloudHSM User Guide for more details. /cloudhsm/faqs/;Do I need to manage the firmware on my HSM?;No. AWS manages the firmware on the hardware. Firmware is maintained by a third-party, and every firmware version must be evaluated by NIST for FIPS 140-2 Level 3 compliance. Only firmware that has been cryptographically signed by the FIPS key (which AWS does not have access to) can be installed. /cloudhsm/faqs/;How many HSMs should I have in my CloudHSM Cluster?;AWS strongly recommends that you use at least two HSMs in two different Availability Zones for any production workload. For mission-critical workloads, we recommend at least three HSMs in at least two separate AZs. The CloudHSM client will automatically handle any HSM failures and load balance across two or more HSMs transparently to your application. /cloudhsm/faqs/;Who is responsible for key durability?;AWS takes automatic encrypted backups of your CloudHSM Cluster on a daily basis, and additional backups when cluster lifecycle events occur (such as adding or removing an HSM). For the 24-hour period between backups, you are solely responsible for the durability of key material created or imported to your cluster. We strongly recommend ensuring that any keys created are synchronized to at least two HSMs in two different Availability Zones to ensure the durability of your keys. See the CloudHSM User Guide for more detail on verifying key synchronization. /cloudhsm/faqs/;How do I set up a high availability (HA) configuration?;High availability is provided automatically when you have at least two HSMs in your CloudHSM Cluster. No additional configuration is required. In the event an HSM in your cluster fails, it will be replaced automatically, and all clients will be updated to reflect the new configuration without interrupting any processing. Additional HSMs can be added to the cluster via the AWS API or SDK, increasing availability without interrupting your application. /cloudhsm/faqs/;How many HSMs can be contained in a CloudHSM Cluster?;A single CloudHSM Cluster can contain up to 28 HSMs, subject to account service limits. You can learn more about service limits and how to request a limit increase in our online documentation. /cloudhsm/faqs/;Can I back up the contents of a CloudHSM?;Your CloudHSM Cluster is backed up on a daily basis by AWS. Keys can also be exported (“wrapped”) out of your cluster and stored on-premises as long as they were not generated as “non-exportable”. /cloudhsm/faqs/;Is there an SLA for CloudHSM?;Yes, you can find the service level agreement (SLA) for AWS CloudHSM here. /cloudhsm/faqs/;Do I share my CloudHSM with other AWS customers?;No. As part of the service you receive single-tenant access to the HSM. Underlying hardware may be shared with other customers, but the HSM is accessible only to you. /cloudhsm/faqs/;How does AWS manage the HSM without having access to my encryption keys?;Separation of duties and role-based access control is inherent in the design of CloudHSM. AWS has a limited credential to the HSM that permits us to monitor and maintain the health and availability of the HSM, take encrypted backups, and to extract and publish audit logs to your CloudWatch Logs. AWS has no access to any keys or data inside your CloudHSM cluster and cannot perform any operations other than those allowed for an HSM appliance user. /cloudhsm/faqs/;Can I monitor my HSM?;Yes. CloudHSM publishes multiple CloudWatch metrics for CloudHSM Clusters and for individual HSMs.
You can use the AWS CloudWatch Console, API or SDK to obtain or alarm on these metrics. /cloudhsm/faqs/;What is the ‘entropy source’ (source of randomness) for CloudHSM?;Each HSM has a FIPS-validated Deterministic Random Bit Generator (DRBG) that is seeded by a True Random Number Generator (TRNG) within the HSM hardware module that conforms to SP800-90B. This is a high-quality entropy source capable of producing 20Mb/sec of entropy per HSM. /cloudhsm/faqs/;What happens if someone tampers with the HSM hardware?;CloudHSM has both physical and logical tamper detection and response mechanisms that trigger key deletion (zeroization) of the hardware. The hardware is designed to detect tampering if its physical barrier is breached. HSMs are also protected against brute-force login attacks. After a fixed number of unsuccessful attempts to access an HSM with Crypto Officer (CO) credentials, the HSM will lock the CO out. Similarly, after a fixed number of unsuccessful attempts to access an HSM with Crypto User (CU) credentials, the user will be locked and must be unlocked by a CO. /cloudhsm/faqs/;What happens in case of failure?;Amazon monitors and maintains the HSM and network for availability and error conditions. If an HSM fails or loses network connectivity, the HSM will be automatically replaced. You can check the health of an individual HSM using the CloudHSM API, SDK, or CLI Tools, and you can check the overall health of the service at any time using the AWS Service Health Dashboard. /cloudhsm/faqs/;Could I lose my keys if a single HSM fails?;If your CloudHSM cluster only has a single HSM, yes it is possible to lose keys that were created since the most recent daily backup. CloudHSM clusters with two or more HSMs, ideally in separate Availability Zones, will not lose keys if a single HSM fails. See our best practices for more information. /cloudhsm/faqs/;Can Amazon recover my keys if I lose my credentials to my HSM?;No. Amazon does not have access to your keys or credentials and therefore has no way to recover your keys if you lose your credentials. /cloudhsm/faqs/;How do I know that I can trust CloudHSM?;CloudHSM is built on hardware that is validated at Federal Information Processing Standard (FIPS) 140-2 Level 3. You can find information about the FIPS 140-2 Security Profile for the hardware used by CloudHSM, and the firmware it runs, at our compliance page. /cloudhsm/faqs/;Does the CloudHSM service support FIPS 140-2 Level 3?;Yes, CloudHSM provides FIPS 140-2 Level 3 validated HSMs. You can follow the procedure in the CloudHSM User Guide under Verify the Authenticity of Your HSM to confirm that you have an authentic HSM on the same model hardware specified in the NIST Security Policy described in the previous question. /cloudhsm/faqs/;How do I operate a CloudHSM in FIPS 140-2 mode?;CloudHSM is always in FIPS 140-2 mode. This can be verified by using the CLI tools as documented in the CloudHSM User Guide and running the getHsmInfo command, which will indicate the FIPS mode status. /cloudhsm/faqs/;Can I get a history of all CloudHSM API calls made from my account?;Yes. AWS CloudTrail records AWS API calls for your account. The AWS API call history produced by CloudTrail lets you perform security analysis, resource change tracking, and compliance auditing. Learn more about CloudTrail at the CloudTrail home page, and turn it on via CloudTrail's AWS Management Console. /cloudhsm/faqs/;Which events are not logged in CloudTrail?;CloudTrail does not include any of the HSM device or access logs. 
These are provided directly to your AWS account via CloudWatch Logs. See the CloudHSM User Guide for more details. /cloudhsm/faqs/;Which AWS compliance initiatives include CloudHSM?;Please refer to the AWS Compliance site for more information about which compliance programs cover CloudHSM. Unlike other AWS services, compliance requirements regarding CloudHSM are often met directly by the FIPS 140-2 Level 3 validation of the hardware itself, rather than as part of a separate audit program. /cloudhsm/faqs/;Why is FIPS 140-2 Level 3 important?;FIPS 140-2 Level 3 is a requirement of certain use cases, including document signing, payments, or operating as a public Certificate Authority for SSL certificates. /cloudhsm/faqs/;How can I request compliance reports that include CloudHSM in scope?;To see what compliance reports are in scope for CloudHSM, review the data on AWS Services in Scope by Compliance Program. To create free, self-service, on-demand compliance reports, use AWS Artifact. /cloudhsm/faqs/;How many crypto operations per second can CloudHSM perform?;The performance of the individual HSMs varies based on the specific workload. The table below shows approximate single-HSM performance for several common cryptographic algorithms. You can create additional HSMs in each CloudHSM cluster in order to achieve increased performance. Performance can vary based on exact configuration and data sizes, so we encourage load testing your application with CloudHSM to determine exact scaling needs. /cloudhsm/faqs/;How many keys can be stored on a CloudHSM cluster?;A CloudHSM cluster can store approximately 3,300 keys of any type or size. /cloudhsm/faqs/;Can I use CloudHSM to secure data in Amazon RDS?;Not directly. You should use AWS Key Management Service with Custom Key Store to secure Amazon RDS data using keys generated and stored in your AWS CloudHSM cluster. /cloudhsm/faqs/;Can I use CloudHSM as a root of trust for other software?;Several third-party vendors support AWS CloudHSM as a root of trust. This means that you can utilize a software solution of your choice while creating and storing the underlying keys in your CloudHSM cluster. /cloudhsm/faqs/;What is the CloudHSM Client?;The CloudHSM Client is a software package supplied by AWS that allows you and your applications to interact with CloudHSM Clusters. /cloudhsm/faqs/;Does the CloudHSM Client give AWS access to my CloudHSM Cluster?;No. All communication between the client and your HSM is encrypted end to end. AWS cannot see or intercept this communication, and has no visibility into your cluster access credentials. /cloudhsm/faqs/;What are the CloudHSM Command Line Interface (CLI) Tools?;The CloudHSM Client comes with a set of CLI tools that allow you to administer and use the HSM from the command line. Linux and Microsoft Windows are supported today. Support for Apple macOS is on our roadmap. These tools are available in the same package as the CloudHSM Client. /cloudhsm/faqs/;How can I download and get started with the CloudHSM Command Line Interface Tools?;You’ll find instructions in the CloudHSM User Guide. /cloudhsm/faqs/;Do the CloudHSM CLI Tools provide AWS with access to the contents of the HSM?;No. The CloudHSM Tools communicate directly with your CloudHSM Cluster via the CloudHSM Client over a secured, mutually authenticated channel. AWS cannot observe any communication between the client, tools, and HSM; it is encrypted end to end.
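The monitoring and auditing FAQs above mention CloudWatch metrics for clusters and CloudTrail records of CloudHSM API calls. The sketch below shows one way to read both from boto3; the metric name, dimension, and event-source string are assumptions rather than values quoted from the FAQ, so check the CloudHSM documentation for the metrics your clusters actually publish.

```python
# Hedged sketch: read an assumed CloudHSM CloudWatch metric and list recent
# CloudHSM management API calls from CloudTrail.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudtrail = boto3.client("cloudtrail")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/CloudHSM",
    MetricName="HsmUnhealthy",                       # assumed metric name
    Dimensions=[{"Name": "ClusterId",                # assumed dimension
                 "Value": "cluster-1234567890a"}],   # placeholder cluster ID
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Maximum"],
)

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource",
                       "AttributeValue": "cloudhsm.amazonaws.com"}],  # assumed source
    MaxResults=10,
)
print(stats["Datapoints"], [e["EventName"] for e in events["Events"]])
```

As the FAQ notes, CloudTrail covers the management APIs only; HSM device and access logs arrive separately through CloudWatch Logs.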
/cloudhsm/faqs/;On what operating systems can I use the CloudHSM Client and CLI Tools?;A complete list of supported operating systems is provided in our online documentation. /cloudhsm/faqs/;What are the network connectivity requirements for using the CloudHSM Command Line Interface Tools?;The host on which you are running the CloudHSM Client and/or using the CLI Tools must have network reachability to all of the HSMs in your CloudHSM Cluster. /cloudhsm/faqs/;What can I do with the CloudHSM API & SDK?;You can create, modify, delete, and obtain the status of CloudHSM Clusters and HSMs. What you can do with the AWS CloudHSM API is limited to operations that AWS can perform with its restricted access. The API cannot access the contents of the HSM or modify any users, policies, or other settings. To learn more, please see the CloudHSM Documentation for information about the API, or the Tools for Amazon Web Services page for more information about the SDK. /cloudhsm/faqs/;How should I plan my migration to AWS CloudHSM?;Start by ensuring that the algorithms and modes you require are supported by CloudHSM. Your account manager can submit feature requests to us if needed. Next, determine your key rotation strategy. Suggestions for common use cases are in the next Q/A. We have also published an in-depth migration guide for CloudHSM. You're now ready to get started with CloudHSM. /cloudhsm/faqs/;How can I rotate my keys?;Your rotation strategy will depend on the type of application. Common examples are below. /cloudhsm/faqs/;What if I can't rotate my keys?;Each application and use case is different. Solutions to common scenarios are discussed in the migration guide for CloudHSM. For additional questions, open a support case with details of your application, the type of HSM you are using today, the type of keys you are using, and whether these keys are exportable or not. We will help you determine an appropriate migration path. /cloudhsm/faqs/;Does AWS CloudHSM have scheduled maintenance windows?;No, but AWS may need to conduct maintenance in the event of necessary upgrades or faulty hardware. We will make every effort to notify you in advance via the Personal Health Dashboard if any impact is expected. /cloudhsm/faqs/;I am having a problem with CloudHSM. What do I do?;You can find solutions to common problems in our troubleshooting guide. If you are still experiencing issues, contact AWS Support. /secrets-manager/faqs/;What can I do with AWS Secrets Manager?;AWS Secrets Manager enables you to store, retrieve, control access to, rotate, audit, and monitor secrets centrally. /secrets-manager/faqs/;What secrets can I manage in AWS Secrets Manager?;You can manage secrets such as database credentials, on-premises resource credentials, SaaS application credentials, third-party API keys, and Secure Shell (SSH) keys. Secrets Manager enables you to store a JSON document which allows you to manage any text blurb that is 64 KB or smaller. /secrets-manager/faqs/;What secrets can I rotate with AWS Secrets Manager?;You can natively rotate credentials for Amazon Relational Database Service (RDS), Amazon DocumentDB, and Amazon Redshift. You can extend Secrets Manager to rotate other secrets, such as credentials for Oracle databases hosted on EC2 or OAuth refresh tokens, by modifying sample AWS Lambda functions available in the Secrets Manager documentation. 
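The Secrets Manager FAQs above describe storing credentials of 64 KB or less and rotating them, natively for supported databases or via a customized Lambda function. A hedged boto3 sketch of storing a secret and turning on scheduled rotation follows; the secret name, credential values, and Lambda ARN are placeholders, and the rotation function itself would be adapted from the sample functions the FAQ mentions.

```python
# Hedged sketch: store a database credential and configure scheduled rotation.
import json
import boto3

secrets = boto3.client("secretsmanager")

secrets.create_secret(
    Name="prod/app/db-credentials",                       # placeholder name
    SecretString=json.dumps({"username": "app_user",
                             "password": "initial-password"}),
)

secrets.rotate_secret(
    SecretId="prod/app/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-db",  # placeholder ARN
    RotationRules={"AutomaticallyAfterDays": 30},
)
```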
/secrets-manager/faqs/;How can my application use these secrets?;First, you must write an AWS Identity and Access Management (IAM) policy permitting your application to access specific secrets. Then, in the application source code, you can replace secrets in plain text with code to retrieve these secrets programmatically using the Secrets Manager APIs. For the complete details and examples, please see the AWS Secrets Manager User Guide. /secrets-manager/faqs/;How do I get started with AWS Secrets Manager?;To get started with AWS Secrets Manager: /secrets-manager/faqs/;In what regions is AWS Secrets Manager available?;Please visit the AWS Region Table to see the current region availability for AWS services. /secrets-manager/faqs/;How does AWS Secrets Manager implement database credential rotation without impacting applications?;AWS Secrets Manager enables you to configure database credential rotation on a schedule. This enables you to follow security best practices and rotate your database credentials safely. When Secrets Manager initiates a rotation, it uses the superuser database credentials provided by you to create a clone user with the same privileges, but with a different password. Secrets Manager then communicates the clone user information to databases and applications retrieving the database credentials. To learn more about rotation, refer to AWS Secrets Manager Rotation Guide. /secrets-manager/faqs/;Will rotating database credentials impact open connections?;No. Authentication happens when a connection is established. When AWS Secrets Manager rotates a database credential, the open database connection is not re-authenticated. /secrets-manager/faqs/;How do I know when AWS Secrets Manager rotates a database credential?;You can configure Amazon CloudWatch Events to receive a notification when AWS Secrets Manager rotates a secret. You can also see when Secrets Manager last rotated a secret using the Secrets Manager console or APIs. /secrets-manager/faqs/;How does AWS Secrets Manager keep my secrets secure?;AWS Secrets Manager encrypts your secrets at rest using encryption keys that you own and store in AWS Key Management Service (KMS). You can control access to the secret using AWS Identity and Access Management (IAM) policies. When you retrieve a secret, Secrets Manager decrypts the secret and transmits it securely over TLS to your local environment. By default, Secrets Manager does not write or cache the secret to persistent storage. /secrets-manager/faqs/;Who can use and manage secrets in AWS Secrets Manager?;You can use AWS Identity and Access Management (IAM) policies to control the access permissions of users and applications to retrieve or manage specific secrets. For example, you can create a policy that only enables developers to retrieve secrets used for the development environment. To learn more, visit Authentication and Access Control for AWS Secrets Manager. /secrets-manager/faqs/;How does AWS Secrets Manager encrypt my secrets?;AWS Secrets Manager uses envelope encryption (AES-256 encryption algorithm) to encrypt your secrets in AWS Key Management Service (KMS). /secrets-manager/faqs/;How will I be charged and billed for my use of AWS Secrets Manager?;With Secrets Manager, you pay only for what you use; there is no minimum fee. There are no set-up fees or commitments to begin using the service. At the end of the month, your credit card will automatically be charged for that month’s usage. You are charged for the number of secrets you store and for API requests made to the service each month.
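The "How can my application use these secrets?" FAQ above describes replacing plaintext credentials in source code with a programmatic lookup. A minimal boto3 sketch is below; the secret name is a placeholder, and the calling role is assumed to have an IAM policy allowing secretsmanager:GetSecretValue on that secret.

```python
# Hedged sketch: retrieve a secret at runtime instead of hard-coding it.
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id: str = "prod/app/db-credentials") -> dict:
    # Secrets Manager decrypts the secret and returns it over TLS; nothing is
    # cached to persistent storage by default.
    value = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(value["SecretString"])

creds = get_db_credentials()
print(creds["username"])
```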
/directoryservice/faqs/;What is AWS Directory Service?;AWS Directory Service is a managed service offering, providing directories that contain information about your organization, including users, groups, computers, and other resources. As a managed offering, AWS Directory Service is designed to reduce management tasks, thereby allowing you to focus more of your time and resources on your business. There is no need to build out your own complex, highly-available directory topology because each directory is deployed across multiple Availability Zones, and monitoring automatically detects and replaces domain controllers that fail. In addition, data replication and automated daily snapshots are configured for you. There is no software to install and AWS handles all of the patching and software updates. /directoryservice/faqs/;What can I do with AWS Directory Service?;AWS Directory Service makes it easy for you to set up and run directories in the AWS cloud, or connect your AWS resources with an existing on-premises Microsoft Active Directory. Once your directory is created, you can use it to manage users and groups, provide single sign-on to applications and services, create and apply group policy, join Amazon EC2 instances to a domain, as well as simplify the deployment and management of cloud-based Linux and Microsoft Windows workloads. AWS Directory Service enables your end users to use their existing corporate credentials when accessing AWS applications, such as Amazon WorkSpaces, Amazon WorkDocs and Amazon WorkMail, as well as directory-aware Microsoft workloads, including custom .NET and SQL Server-based applications. Finally, you can use your existing corporate credentials to administer AWS resources via AWS Identity and Access Management (IAM) role-based access to the AWS Management Console, so you do not need to build out more identity federation infrastructure. /directoryservice/faqs/;How do I create a directory?;You can use the AWS Management Console or the API to create a directory. All you need to provide is some basic information such as a fully qualified domain name (FQDN) for your directory, Administrator account name and password, and the VPC you want the directory to be attached to. /directoryservice/faqs/;Can I join an existing Amazon EC2 instance to an AWS Directory Service directory?;Yes, you can use the AWS Management Console or the API to add existing EC2 instances running Linux or Windows to an AWS Managed Microsoft AD directory. /directoryservice/faqs/;Are APIs supported for AWS Directory Service?;Public APIs are supported for creating and managing directories. You can now programmatically manage directories using public APIs. The APIs are available via the AWS CLI and SDK. Learn more about the APIs in the AWS Directory Service documentation. /directoryservice/faqs/;Does AWS Directory Service support CloudTrail logging?;Yes. Actions performed via the AWS Directory Service APIs or management console will be included in your CloudTrail audit logs. /directoryservice/faqs/;Can I receive notifications when the status of my directory changes?;Yes. You can configure Amazon Simple Notification Service (SNS) to receive email and text messages when the status of your AWS Directory Service changes. Amazon SNS uses topics to collect and distribute messages to subscribers. When AWS Directory Service detects a change in your directory’s status, it will publish a message to the associated topic, which is then sent to topic subscribers. Visit the documentation to learn more.
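The directory-creation and notification FAQs above note that a directory needs only an FQDN, an administrator password, and a VPC, and that status changes can be published to an SNS topic. A hedged boto3 sketch of both steps follows; the domain name, password, VPC, subnets, and topic name are placeholders.

```python
# Hedged sketch: create an AWS Managed Microsoft AD directory and register an
# existing SNS topic for directory status notifications.
import boto3

ds = boto3.client("ds")

directory_id = ds.create_microsoft_ad(
    Name="corp.example.com",                                  # placeholder FQDN
    Password="ChooseAStrongAdminPassword1!",                  # placeholder password
    Edition="Standard",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",                     # placeholder VPC
        "SubnetIds": ["subnet-0123456789abcdef0",
                      "subnet-0fedcba9876543210"],            # two AZs
    },
)["DirectoryId"]

ds.register_event_topic(DirectoryId=directory_id,
                        TopicName="directory-status")         # placeholder topic
print(directory_id)
```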
/directoryservice/faqs/;How much does AWS Directory Service cost?;See the pricing page for more information. /directoryservice/faqs/;Can I tag my directory?;Yes. AWS Directory Service supports cost allocation tagging. Tags make it easier for you to allocate costs and optimize spending by categorizing and grouping AWS resources. For example, you can use tags to group resources by administrator, application name, cost center, or a specific project. /directoryservice/faqs/;In which AWS regions is AWS Directory Service available?;Refer to Regional Products and Services for details of AWS Directory Service availability by region. /directoryservice/faqs/;What versions of Server Message Block (SMB) protocol does AWS Managed Microsoft AD support?;Effective 05/31/2020, client computers can use only SMB version 2.0 (SMBv2) or newer to access files stored on the SYSVOL and NETLOGON shares of the domain controllers for their AWS Managed Microsoft AD directories. However, AWS recommends customers use only SMBv2 or newer on all SMB-based file services. /directoryservice/faqs/;How do I create an AWS Managed Microsoft AD directory?;You can launch the AWS Directory Service console from the AWS Management Console to create an AWS Managed Microsoft AD directory. Alternatively, you can use the AWS SDK or AWS CLI. /directoryservice/faqs/;How are AWS Managed Microsoft AD directories deployed?;AWS Managed Microsoft AD directories are deployed across two Availability Zones in a region by default and connected to your Amazon Virtual Private Cloud (VPC). Backups are automatically taken once per day, and the Amazon Elastic Block Store (EBS) volumes are encrypted to ensure that data is secured at rest. Domain controllers that fail are automatically replaced in the same Availability Zone using the same IP address, and a full disaster recovery can be performed using the latest backup. /directoryservice/faqs/;Can I configure the storage, CPU, or memory parameters of my AWS Managed Microsoft AD directory?;No. This functionality is not supported at this time. /directoryservice/faqs/;How do I manage users and groups for AWS Managed Microsoft AD?;You can use your existing Active Directory tools—running on Windows computers that are joined to the AWS Managed Microsoft AD domain—to manage users and groups in AWS Managed Microsoft AD directories. No special tools, policies, or behavior changes are required. /directoryservice/faqs/;How are my administrative permissions different between AWS Managed Microsoft AD and running Active Directory in my own Amazon EC2 Windows instances?;In order to deliver a managed-service experience, AWS Managed Microsoft AD must disallow operations by customers that would interfere with managing the service. Therefore, AWS restricts access to directory objects, roles, and groups that require elevated privileges. AWS Managed Microsoft AD does not allow direct host access to domain controllers via Windows Remote Desktop Connection, PowerShell Remoting, Telnet, or Secure Shell (SSH). When you create an AWS Managed Microsoft AD directory, you are assigned an organizational unit (OU) and an administrative account with delegated administrative rights for the OU. You can create user accounts, groups, and policies within the OU by using standard Remote Server Administration Tools such as Active Directory Users and Groups or the PowerShell ActiveDirectory module. /directoryservice/faqs/;Can I use Microsoft Network Policy Server (NPS) with AWS Managed Microsoft AD?;Yes.
The administrative account created for you when AWS Managed Microsoft AD is set up has delegated management rights over the Remote Access Service (RAS) and Internet Authentication Service (IAS) security group. This enables you to register NPS with AWS Managed Microsoft AD and manage network access policies for accounts in your domain. /directoryservice/faqs/;Does AWS Managed Microsoft AD support schema extensions?;Yes. AWS Managed Microsoft AD supports schema extensions that you submit to the service in the form of a LDAP Data Interchange Format (LDIF) file. You may extend but not modify the core Active Directory schema. /directoryservice/faqs/;Which applications are compatible with AWS Managed Microsoft AD?;Amazon Chime Amazon Connect Amazon EC2 Instances Amazon FSx for Windows File Server Amazon QuickSight Amazon RDS for MySQL Amazon RDS for Oracle Amazon RDS for PostgreSQL Amazon RDS for SQL Server Amazon Single Sign On Amazon WorkDocs Amazon WorkMail Amazon WorkSpaces AWS Client VPN AWS Management Console /directoryservice/faqs/;Which third party software is compatible with AWS Managed Microsoft AD?;AWS Managed Microsoft AD is based on actual Active Directory and provides the broadest range of native AD tools and third party apps support such as: /directoryservice/faqs/;Which third party software is NOT compatible with AWS Managed Microsoft AD?;Active Directory Certificate Services (AD CS): Certificate Enrollment Web Service Active Directory Certificate Services (AD CS): Certificate Enrollment Policy Web Service Microsoft Exchange Server Microsoft Skype for Business Server /directoryservice/faqs/;Can I migrate my existing, on-premises Microsoft Active Directory to AWS Managed Microsoft AD?;AWS does not provide any migration tools to migrate a self-managed Active Directory to AWS Managed Microsoft AD. You must establish a strategy for performing migration including password resets, and implement the plans using Remote Server Administration Tools. /directoryservice/faqs/;Can I configure conditional forwarders and trusts in the Directory Service console?;Yes. You can configure conditional forwarders and trusts for AWS Managed Microsoft AD using the Directory Service console as well as the API. /directoryservice/faqs/;Can I add additional domain controllers manually to my AWS Managed Microsoft AD?;Yes. You can add additional domain controllers to your managed domain using the AWS Directory Service console or API. Note that promoting Amazon EC2 instances to domain controllers manually is not supported. /directoryservice/faqs/;Can I use Microsoft Office 365 with user accounts managed in AWS Managed Microsoft AD?;Yes. You can synchronize identities from AWS Managed Microsoft AD to Azure AD using Azure AD Connect and use Microsoft Active Directory Federation Services (AD FS) for Windows 2016 with AWS Managed Microsoft AD to authenticate Office 365 users. For step-by-step instructions, see How to Enable Your Users to Access Office 365 with AWS Microsoft Active Directory Credentials. /directoryservice/faqs/;Can I use Security Assertion Markup Language (SAML) 2.0–based authentication with cloud applications using AWS Managed Microsoft AD?;Yes. You can use Microsoft Active Directory Federation Services (AD FS) for Windows 2016 with your AWS Managed Microsoft AD managed domain to authenticate users to cloud applications that support SAML. /directoryservice/faqs/;Can I encrypt communication between my applications and AWS Managed Microsoft AD using LDAPS?;Yes. 
AWS Managed Microsoft AD supports Lightweight Directory Access Protocol (LDAP) over Secure Sockets Layer (SSL) / Transport Layer Security (TLS), also known as LDAPS, in both client and server roles. When acting as a server, AWS Managed Microsoft AD supports LDAPS over ports 636 (SSL) and 389 (TLS). You enable server-side LDAPS communication by installing a certificate on your AWS Managed Microsoft AD domain controllers from an AWS-based Active Directory Certificate Services certificate authority (CA). To learn more, see Enable Secure LDAP (LDAPS). /directoryservice/faqs/;Can I encrypt LDAP communications between AWS applications and my self-managed AD using AWS Managed Microsoft AD?;Yes. AWS Managed Microsoft AD supports Lightweight Directory Access Protocol (LDAP) over Secure Sockets Layer (SSL) / Transport Layer Security (TLS), also known as LDAPS, in both client and server roles. When acting as a client, AWS Managed Microsoft AD supports LDAPS over port 636 (SSL). You enable client-side LDAPS communication by registering certification authority (CA) certificates from your server certificate issuer into AWS. To learn more, see Enable Secure LDAP (LDAPS). /directoryservice/faqs/;How many users, groups, computers, and total objects does AWS Managed Microsoft AD support?;AWS Managed Microsoft AD (Standard Edition) includes 1 GB of directory object storage. This capacity can support up to 5,000 users or 30,000 directory objects, including users, groups, and computers. AWS Managed Microsoft AD (Enterprise Edition) includes 17 GB of directory object storage, which can support up to 100,000 users or 500,000 objects. /directoryservice/faqs/;Can I use AWS Managed Microsoft AD as a primary directory?;Yes. You can use it as a primary directory to manage users, groups, computers, and Group Policy objects (GPOs) in the cloud. You can manage access and provide single sign-on (SSO) to AWS applications and services, and to third-party directory-aware applications running on Amazon EC2 instances in the AWS Cloud. In addition, you can use Azure AD Connect and AD FS to support SSO to cloud applications, including Office 365. /directoryservice/faqs/;Can I use AWS Managed Microsoft AD as a resource forest?;Yes. You can use AWS Managed Microsoft AD as a resource forest that contains primarily computers and groups with trust relationships to your on-premises directory. This enables your users to access AWS applications and resources with their on-premises AD credentials. /directoryservice/faqs/;What is multi-region replication?;Multi-region replication is a feature that enables you to deploy and use a single AWS Managed Microsoft AD directory across multiple AWS Regions. This makes it easier and more cost-effective for you to deploy and manage your Microsoft Windows and Linux workloads globally. With the automated multi-region replication capability you get higher resiliency, while your applications use a local directory for optimal performance. This feature is available in AWS Managed Microsoft AD (Enterprise Edition) only. You can use the feature for new and existing directories. /directoryservice/faqs/;How do I add an AWS Region to my directory?;First, you open the AWS Directory Service console in the region where your directory is already up and running (primary region). Select the directory you want to expand and choose Add Region. Then, select the Region into which you want to expand, provide the Amazon Virtual Private Cloud (VPC), and the subnets into which you want to deploy your directory.
You can also use APIs to expand your directory; a brief example using the AWS SDK for Python appears at the end of this section. To learn more, see the documentation. /directoryservice/faqs/;How does multi-region replication work when I add a new AWS Region?;AWS Managed Microsoft AD automatically configures inter-region networking connectivity, deploys domain controllers, and replicates all your directory data, including users, groups, Group Policy Objects (GPOs), and schema, across your selected Regions. In addition, AWS Managed Microsoft AD configures a new AD site per Region, which improves user authentication and domain controller replication performance within the Region while lowering costs by minimizing data transfers between Regions. Your directory identifier (directory_id) remains the same in the new Region and is deployed in the same AWS account as your primary Region. /directoryservice/faqs/;Can I share my directory with other AWS accounts in the new AWS Region?;Yes, with multi-region replication you have the flexibility to share your directory with other AWS accounts per Region. Directory sharing configurations are not automatically replicated from the primary Region. To learn how to share your directory with other AWS accounts, see the documentation. /directoryservice/faqs/;Can I add more domain controllers to my directory in the new AWS Region?;Yes, with multi-region replication you have the flexibility to define the number of domain controllers per Region. To learn how to add a domain controller, see the documentation. /directoryservice/faqs/;How do I monitor the directory status across multiple AWS Regions?;With multi-region replication, you monitor your directory status per Region independently. You must enable Amazon Simple Notification Service (SNS) in each Region where you deployed your directory, using the AWS Directory Service console or API. To learn more, see the documentation. /directoryservice/faqs/;How do I monitor the directory security logs across multiple AWS Regions?;With multi-region replication, you monitor your directory security logs per Region independently. You must enable Amazon CloudWatch Logs forwarding in each Region where you deployed your directory, using the AWS Directory Service console or API. To learn more, see the documentation. /directoryservice/faqs/;Can I rename my directory’s AD site name?;Yes, you can rename your directory’s AD site name per Region using standard AD tools. To learn more, see the documentation. /directoryservice/faqs/;Can I remove an AWS Region from my directory?;Yes. If you do not have any AWS applications registered to your directory and you have not shared the directory with any AWS account in the Region, AWS Managed Microsoft AD allows you to remove an AWS Region from your directory. You cannot remove the primary Region, unless you delete the directory. /directoryservice/faqs/;What AWS applications and services are compatible with multi-region replication?;Multi-region replication is compatible with Amazon EC2, Amazon RDS (SQL Server, Oracle, MySQL, PostgreSQL, and MariaDB), Amazon Aurora (MySQL and PostgreSQL), and Amazon FSx for Windows File Server natively. You can also integrate other AWS applications such as Amazon WorkSpaces, AWS Single Sign-On, AWS Client VPN, Amazon QuickSight, Amazon Connect, Amazon WorkDocs, Amazon WorkMail, and Amazon Chime with your directory in new Regions by configuring AD Connector against your AWS Managed Microsoft AD directory per Region. 
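The multi-region answers above note that you can also expand a directory into another Region through the API. As a rough, hedged illustration (not an excerpt from the AWS documentation), the sketch below uses the AWS SDK for Python (boto3) and the AddRegion operation; the directory, VPC, and subnet identifiers are placeholders, and parameter names should be checked against the current SDK reference.

import boto3

# Hedged sketch: expand an existing AWS Managed Microsoft AD directory into another
# AWS Region. All identifiers are placeholders.
ds = boto3.client("ds", region_name="us-east-1")   # call the API in the primary Region

ds.add_region(
    DirectoryId="d-1234567890",                     # placeholder directory ID
    RegionName="us-west-2",                         # Region you are expanding into
    VPCSettings={
        "VpcId": "vpc-0abc1234def567890",           # placeholder VPC in the new Region
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
    },
)

As the FAQ entries above point out, directory sharing, SNS notifications, and CloudWatch Logs forwarding still have to be configured separately in each Region.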
/directoryservice/faqs/;What is seamless domain join?;Seamless domain join is a feature that allows you to join your Amazon EC2 for Windows Server and Amazon EC2 for Linux instances seamlessly to a domain, at the time of launch and from the AWS Management Console. You can join instances to AWS Managed Microsoft AD that you launch in the AWS Cloud. /directoryservice/faqs/;How do I join an instance seamlessly to a domain?;When you create and launch an EC2 for Windows or an EC2 for Linux instance from the AWS Management Console, you have the option to select which domain your instance will join. To learn more, see the documentation. /directoryservice/faqs/;Can I join existing EC2 for Windows Server instances seamlessly to a domain?;You cannot use the seamless domain join feature from the AWS Management Console for existing EC2 for Windows Server and EC2 for Linux instances, but you can join existing instances to a domain using the EC2 API or by using PowerShell on the instance. To learn more, see the documentation. /directoryservice/faqs/;How does AWS Directory Service enable single sign-on (SSO) to the AWS Management Console?;AWS Directory Service allows you to assign IAM roles to AWS Managed Microsoft AD or Simple AD users and groups in the AWS Cloud, as well as to existing on-premises Microsoft Active Directory users and groups using AD Connector. These roles will control users’ access to AWS services based on IAM policies assigned to the roles. AWS Directory Service will provide a customer-specific URL for the AWS Management Console, which users can use to sign in with their existing corporate credentials. See our documentation for more information on this feature. /directoryservice/faqs/;Can I use AWS Managed Microsoft AD for AWS Cloud workloads that are subject to compliance standards?;Yes. AWS Managed Microsoft AD has implemented the controls necessary to enable you to meet the U.S. Health Insurance Portability and Accountability Act (HIPAA) requirements and is included as an in-scope service in the Payment Card Industry Data Security Standard (PCI DSS) Attestation of Compliance and Responsibility Summary. /directoryservice/faqs/;How can I access compliance and security reports?;To access a comprehensive list of documents relevant to compliance and security in the AWS Cloud, see AWS Artifact. /directoryservice/faqs/;What is the AWS Shared Responsibility Model?;Security, including HIPAA and PCI DSS compliance, is a shared responsibility between AWS and you. For example, it is your responsibility to configure your AWS Managed Microsoft AD password policies to meet PCI DSS requirements when using AWS Managed Microsoft AD. To learn more about the actions you may need to take to meet HIPAA and PCI DSS compliance requirements, see the compliance documentation for AWS Managed Microsoft AD, read the Architecting for HIPAA Security and Compliance on Amazon Web Services whitepaper, and see the AWS Cloud Compliance, HIPAA Compliance, and PCI DSS Compliance pages. /kms/faqs/;What is AWS KMS?; If you are responsible for securing your data across AWS services, you should use it to centrally manage the encryption keys that control access to your data. If you are a developer who needs to encrypt data in your applications, you should use the AWS Encryption SDK with AWS KMS to more easily generate, use and protect symmetric encryption keys in your code. 
If you are a developer who needs to digitally sign or verify data using asymmetric keys, you should use the service to create and manage the private keys you’ll need. If you’re looking for a scalable key management infrastructure to support your developers and their growing number of applications, you should use it to reduce your licensing costs and operational burden. If you’re responsible for proving data security for regulatory or compliance purposes, you should use it because it facilitates proving your data is consistently protected. It’s also in scope for a broad set of industry and regional compliance regimes. /kms/faqs/;Why should I use AWS KMS?; The easiest way to get started with AWS KMS is to choose to encrypt your data with an AWS service that uses AWS owned root keys that are automatically created by each service. If you want full control over the management of your keys, including the ability to share access to keys across accounts or services, you can create your own AWS KMS customer managed keys in AWS KMS. You can also use the KMS keys that you create directly within your own applications. AWS KMS can be accessed from the KMS console, which is grouped under Security, Identity, and Compliance on the AWS Services home page of the AWS Management Console. AWS KMS APIs can also be accessed directly through the AWS Command Line Interface (CLI) or AWS SDKs for programmatic access. AWS KMS APIs can also be used indirectly to encrypt data within your own applications by using the AWS Encryption SDK. Visit the Getting Started page to learn more. /kms/faqs/;In what Regions is AWS KMS available?;Availability is listed on our global Products and Services by Region page. /kms/faqs/;What key management features are available in AWS KMS?;You can perform the following key management functions: /kms/faqs/;How does AWS KMS work?;You can start using the service by requesting the creation of an AWS KMS key. You control the lifecycle of any customer managed KMS key and who can use or manage it. Once you have created a KMS key, you can submit data directly to AWS KMS to be encrypted, decrypted, signed, verified, or to generate or verify an HMAC using this KMS key. You set usage policies on these keys that determine which users can perform which actions under which conditions. /kms/faqs/;Which AWS cloud services are integrated with AWS KMS?;AWS KMS is seamlessly integrated with most other AWS services to make encrypting data in those services easier. In some cases, data is encrypted by default using keys that are stored in AWS KMS but owned and managed by the AWS service in question. In many cases the AWS KMS keys are owned and managed by you within your account. Some services give you the choice of managing the keys yourself or allowing the service to manage the keys on your behalf. See the list of AWS services currently integrated with AWS KMS. See the AWS KMS Developer’s Guide for more information on how integrated services use AWS KMS. /kms/faqs/;Why use envelope encryption? Why not just send data to AWS KMS to encrypt directly?; You have the option of selecting a specific KMS key to use when you want an AWS service to encrypt data on your behalf. These are known as customer managed KMS keys and you have full control over them. You define the access control and usage policy for each key and you can grant permissions to other accounts and services to use them. 
In addition to customer managed keys, AWS KMS also provides two types of keys managed by AWS: (1) AWS managed KMS keys are keys created in your account but managed by AWS, and (2) AWS owned keys are keys fully owned and operated from AWS accounts. You can track AWS managed keys in your account and all usage is logged in CloudTrail, but you have no direct control over the keys themselves. AWS owned keys are the most automated and provide encryption of your data within AWS but do not provide policy controls or CloudTrail logs on their key activity. /kms/faqs/;What’s the difference between a KMS key I create and KMS keys created automatically for me by other AWS services?; Creating your own KMS key gives you more control than you have with AWS managed KMS keys. When you create a symmetric customer managed KMS key, you can choose to use key material generated by AWS KMS, generated within an AWS CloudHSM cluster or external key manager (through the custom key store), or import your own key material. You can define an alias and description for the key and opt in to have the key automatically rotated once per year if it was generated by AWS KMS. You also define all the permissions on the key to control who can use or manage the key. With asymmetric customer managed KMS keys, there are a couple of caveats to management: the key material can only be generated within AWS KMS HSMs and there is no option for automatic key rotation. /kms/faqs/;Why should I create my own AWS KMS keys?; Yes. You can import a copy of your key from your own key management infrastructure to AWS KMS and use it with any integrated AWS service or from within your own applications. You cannot import asymmetric KMS keys into AWS KMS. /kms/faqs/;Can I bring my own keys to AWS KMS?; You can use an imported key to get greater control over the creation, lifecycle management, and durability of your key in AWS KMS. Imported keys are designed to help you meet your compliance requirements, which may include the ability to generate or maintain a secure copy of the key in your infrastructure, and the ability to immediately delete the imported copy of the key from AWS infrastructure. /kms/faqs/;When would I use an imported key?; You can import 256-bit symmetric keys. /kms/faqs/;What type of keys can I import?; During the import process, your key must be wrapped by an AWS KMS-provided public key using one of two RSA PKCS#1 schemes. This ensures that your encrypted key can be decrypted only by AWS KMS. /kms/faqs/;How is the key that I import into AWS KMS protected in transit?; There are two main differences: /kms/faqs/;What’s the difference between a key I import and a key I generate in AWS KMS?;"You are responsible for maintaining a copy of your imported keys in your key management infrastructure so that you can re-import them at any time. AWS, however, is responsible for the availability, security, and durability of keys generated by AWS KMS on your behalf until you schedule the keys for deletion. You may set an expiration period for an imported key. AWS KMS will automatically delete the key material after the expiration period. You can also delete imported key material on demand. In both cases the key material itself is deleted but the KMS key reference in AWS KMS and associated metadata are retained so that the key material can be re-imported in the future. Keys generated by AWS KMS do not have an expiration time and cannot be deleted immediately; there is a mandatory 7 to 30 day wait period. 
All customer managed KMS keys, regardless of whether the key material was imported, can be manually disabled or scheduled for deletion. In this case the KMS key itself is deleted, not just the underlying key material." /kms/faqs/;Can I rotate my keys?; If you choose to have AWS KMS automatically rotate keys, you don’t have to re-encrypt your data. AWS KMS automatically keeps previous versions of keys to use for decryption of data encrypted under an old version of a key. All new encryption requests against a key in AWS KMS are encrypted under the newest version of the key. /kms/faqs/;Do I have to re-encrypt my data after keys in AWS KMS are rotated?;If you manually rotate your imported or custom key store keys, you may have to re-encrypt your data depending on whether you decide to keep old versions of keys available. /kms/faqs/;Can I delete a key from AWS KMS?;For customer managed AWS KMS keys with imported key material, you can delete the key material without deleting the AWS KMS key ID or metadata in two ways. First, you can delete your imported key material on demand without a waiting period. Second, at the time of importing the key material into the AWS KMS key, you can define an expiration time for how long AWS can use your imported key material before it is deleted. You can re-import your key material into the AWS KMS key if you need to use it again. /kms/faqs/;What should I do if my imported key material has expired or I accidentally deleted it?; Yes. Once you import your key to an AWS KMS key, you will receive a CloudWatch Metric every few minutes that counts down the time to expiration of the imported key. You will also receive a CloudWatch Event once the imported key under your AWS KMS key expires. You can build logic that acts on these metrics or events and automatically re-imports the key with a new expiration period to avoid an availability risk. /kms/faqs/;Can I be alerted that I need to re-import the key?; Yes. AWS KMS is supported in AWS SDKs, the AWS Encryption SDK, the Amazon DynamoDB Client-side Encryption, and the Amazon Simple Storage Service (S3) Encryption Client to facilitate encryption of data within your own applications wherever they run. Visit the AWS Crypto Tools and Developing on AWS websites for more information. /kms/faqs/;Can I use AWS KMS to help manage encryption of data outside of AWS cloud services?; You can create up to 100,000 KMS keys per account per Region. As both enabled and disabled KMS keys count towards the limit, we recommend deleting disabled keys that you no longer use. AWS managed KMS keys created on your behalf for use within supported AWS services do not count against this limit. There is no limit to the number of data keys that can be derived using a KMS key and used in your application or by AWS services to encrypt data on your behalf. You may request a limit increase for KMS keys by visiting the AWS Support Center. /kms/faqs/;Is there a limit to the number of keys I can create in AWS KMS?; AWS KMS supports 256-bit keys when creating a KMS key. Generated data keys returned to the caller can be 256-bit, 128-bit, or an arbitrary length up to 1024 bytes. When AWS KMS uses a 256-bit KMS key on your behalf, the AES algorithm in Galois/Counter Mode (AES-GCM) is used. /kms/faqs/;What types of symmetric key types and algorithms are supported?; AWS KMS supports the following asymmetric key types: RSA 2048, RSA 3072, RSA 4096, ECC NIST P-256, ECC NIST P-384, ECC NIST P-521, and ECC SECG P-256k1. 
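To make the bring-your-own-key answers above more concrete, here is a minimal, hedged sketch of importing 256-bit symmetric key material with boto3 and the cryptography package, using one of the RSAES_OAEP wrapping options. It is an illustration only: the key material shown is a placeholder and should come from your own key management infrastructure, and the KMS key is created with Origin set to EXTERNAL so that it accepts imported material.

import boto3
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

kms = boto3.client("kms")

# 1. Create a KMS key whose key material will be imported (it has no material yet).
key_id = kms.create_key(Origin="EXTERNAL", Description="Imported key material example")["KeyMetadata"]["KeyId"]

# 2. Ask AWS KMS for a wrapping public key and an import token.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Wrap your locally generated 256-bit key with the KMS-provided public key.
local_key_material = b"\x00" * 32                  # placeholder: generate this in your own HSM/KMI
wrapping_key = serialization.load_der_public_key(params["PublicKey"])
wrapped = wrapping_key.encrypt(
    local_key_material,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# 4. Import the wrapped key material; in this sketch it never expires.
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=wrapped,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)

If you instead set an expiration, the CloudWatch metric and event described above can drive automation that re-imports the material before it expires.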
/kms/faqs/;What kind of asymmetric key types are supported?; AWS KMS supports the RSAES_OAEP_SHA_1 and RSAES_OAEP_SHA_256 encryption algorithms with RSA 2048, RSA 3072, and RSA 4096 key types. Encryption algorithms cannot be used with the elliptic curve key types (ECC NIST P-256, ECC NIST P-384, ECC NIST P-521, and ECC SECG P-256k1). /kms/faqs/;What kinds of asymmetric encryption algorithms are supported?; When using RSA key types, AWS KMS supports the RSASSA_PSS_SHA_256, RSASSA_PSS_SHA_384, RSASSA_PSS_SHA_512, RSASSA_PKCS1_V1_5_SHA_256, RSASSA_PKCS1_V1_5_SHA_384, and RSASSA_PKCS1_V1_5_SHA_512 signing algorithms. When using elliptic curve key types, AWS KMS supports the ECDSA_SHA_256, ECDSA_SHA_384, and ECDSA_SHA_512 signing algorithms. /kms/faqs/;What kinds of asymmetric signing algorithms are supported?; No. Neither symmetric KMS keys nor the private portion of asymmetric KMS keys can be exported in plain text from the HSMs. Only the public portion of an asymmetric KMS key can be exported from the console or by calling the GetPublicKey API. /kms/faqs/;Can data keys and data key pairs be exported out of the HSMs in plain text?; The symmetric data key or the private portion of the asymmetric data key is encrypted under the symmetric KMS key you define when you request AWS KMS to generate the data key. /kms/faqs/;How are data keys and data key pairs protected for storage outside the service?; The public portion of the asymmetric key material is generated in AWS KMS and can be used for digital signature verification by calling the “Verify” API, or for public key encryption by calling the “Encrypt” API. The public key can also be used outside of AWS KMS for verification or encryption. You can call the GetPublicKey API to retrieve the public portion of the asymmetric KMS key. /kms/faqs/;How do I use the public portion of an asymmetric KMS key?; The size limit is 4 KB. If you want to digitally sign data larger than 4 KB, you have the option to create a message digest of the data and send it to AWS KMS. The digital signature is created over the digest of the data and returned. You specify whether you are sending the full message or a message digest as a parameter in the Sign API request. Any data submitted to the Encrypt, Decrypt, or Re-Encrypt APIs that require use of asymmetric operations must also be less than 4 KB. /kms/faqs/;How can I distinguish between asymmetric or symmetric KMS keys I have created?; No. Automatic key rotation is not supported for asymmetric or HMAC KMS keys. You can manually rotate them by creating a new KMS key and mapping an existing key alias from the old KMS key to the new KMS key. /kms/faqs/;Can I use asymmetric KMS keys for digital signing applications that require digital certificates?; The primary reason to use the AWS Private CA service is to provide a public key infrastructure (PKI) for the purpose of identifying entities and securing network connections. PKI provides processes and mechanisms, primarily using X.509 certificates, to put structure around public key cryptographic operations. Certificates provide an association between an identity and a public key. The certification process in which a certificate authority issues a certificate allows the trusted certificate authority to assert the identity of another entity by signing a certificate. PKI provides identity, distributed trust, key lifecycle management, and certificate status vended through revocation. 
These functions add important processes and infrastructure to the underlying asymmetric cryptographic keys and algorithms provided by AWS KMS. /kms/faqs/;Can I use my applications’ cryptographic API providers such as OpenSSL, JCE, Bouncy Castle, or CNG with AWS KMS?; Yes. The AWS KMS SLA provides for a service credit if your monthly uptime percentage is below our service commitment in any billing cycle. /kms/faqs/;Who can use and manage my keys in AWS KMS?; AWS KMS is designed so that no one, including AWS employees, can retrieve your plaintext KMS keys from the service. AWS KMS uses hardware security modules (HSMs) that have been validated under FIPS 140-2, or are in the process of being validated, to protect the confidentiality and integrity of your keys. Your plaintext KMS keys never leave the HSMs, are never written to disk, and are only ever used in the volatile memory of the HSMs for the time needed to perform your requested cryptographic operation. Updates to software on the service hosts and to the AWS KMS HSM firmware are controlled by multi-party access control that is audited and reviewed by an independent group within Amazon and a NIST-certified lab in compliance with FIPS 140-2. /kms/faqs/;How does AWS secure the KMS keys that I create?;More details about these security controls can be found in the AWS KMS cryptographic details tech paper. You can also review the FIPS 140-2 certificate for AWS KMS HSM along with the associated Security Policy to get more details about how AWS KMS HSM meets the security requirements of FIPS 140-2. Also, you can download a copy of the Service Organization Controls (SOC) report from AWS Artifact to learn more about security controls used by the service to protect your KMS keys. /kms/faqs/;How do I migrate my existing AWS KMS keys to use FIPS 140-2 validated HSMs?; FIPS 140-2 validated HSMs are available in all AWS Regions where AWS KMS is offered. /kms/faqs/;Which AWS Regions have FIPS 140-2 validated HSMs?; AWS KMS is a two-tier service. The API endpoints receive client requests over an HTTPS connection using only TLS ciphersuites that support perfect forward secrecy. These API endpoints authenticate and authorize the request before passing the request for a cryptographic operation to the AWS KMS HSMs or your AWS CloudHSM cluster if you’re using the KMS custom key store feature. /kms/faqs/;What is the difference between the FIPS 140-2 validated endpoints and the FIPS 140-2 validated HSMs in AWS KMS?; You configure your applications to connect to the unique regional FIPS 140-2 validated HTTPS endpoints. AWS KMS FIPS 140-2 validated HTTPS endpoints are powered by the OpenSSL FIPS Object Module. You can review the security policy of the OpenSSL module here. FIPS 140-2 validated API endpoints are available in all commercial Regions where AWS KMS is available. /kms/faqs/;How do I make API requests to AWS KMS using the FIPS 140-2 validated endpoints?; Yes. AWS KMS has been validated as having the functionality and security controls to help you meet the encryption and key management requirements (primarily referenced in sections 3.5 and 3.6) of the PCI DSS 3.2.1. /kms/faqs/;Can I use AWS KMS to help me comply with the encryption and key management requirements in the Payment Card Industry Data Security Standard (PCI DSS 3.2.1)?;For more details on PCI DSS compliant services in AWS, you can read the PCI DSS FAQs. /kms/faqs/;How does AWS KMS secure the data keys I export and use in my application?; No. 
AWS KMS keys are created and used only within the service to help ensure their security, to enforce your policies consistently, and to provide a centralized log of their use. /kms/faqs/;Can I export an AWS KMS key and use it in my own applications?; A single-Region KMS key generated by AWS KMS is stored and used only in the Region in which it was created. With AWS KMS multi-Region keys you can choose to replicate a multi-Region primary key into multiple Regions within the same AWS partition. /kms/faqs/;What geographic Region are my keys stored in?; Logs in AWS CloudTrail will show all AWS KMS API requests, including both management requests (such as create, rotate, disable, policy edits) and cryptographic requests (such as encrypt/decrypt). Turn on CloudTrail in your account to view these logs. /kms/faqs/;How can I tell who used or changed the configuration of my keys in AWS KMS?; CloudHSM provides you with a FIPS 140-2 Level 3 overall validated single-tenant HSM cluster in your Amazon Virtual Private Cloud (VPC) to store and use your keys. You have exclusive control over how your keys are used through an authentication mechanism independent from AWS. You interact with keys in your CloudHSM cluster similar to the way you interact with your applications running in Amazon EC2. You can use CloudHSM to support a variety of use cases, such as Digital Rights Management (DRM), Public Key Infrastructure (PKI), document signing, and cryptographic functions using PKCS#11, Java JCE, or Microsoft CNG interfaces. /kms/faqs/;How does AWS KMS compare to CloudHSM?;AWS KMS helps you to create and control the encryption keys used by your applications and supported AWS services in multiple Regions around the world from a single console. The service uses HSMs that have been validated under FIPS 140-2, or are in the process of being validated, to protect the security of your keys. Centralized management of all your keys in AWS KMS helps you enforce who can use your keys under which conditions, when they get rotated, and who can manage them. AWS KMS integration with CloudTrail gives you the ability to audit the use of your keys to support your regulatory and compliance activities. You interact with AWS KMS from your applications using the AWS SDK if you want to call the service APIs directly, through other AWS services that are integrated with AWS KMS, or by using the AWS Encryption SDK if you want to perform client-side encryption. /kms/faqs/;How will I be charged and billed for my use of AWS KMS?;You are charged for all KMS keys you create and for API requests made to the service each month above a free tier. /kms/faqs/;Is there a free tier?;*API requests involving asymmetric KMS keys and API requests to the GenerateDataKeyPair and GenerateDataKeyPairWithoutPlaintext APIs are excluded from the Free Tier. Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. You can learn more here. /kms/faqs/;How can I connect AWS KMS to CloudHSM?;Additional guidance for deciding if using a custom key store is right for you can be found in this blog. /kms/faqs/;Why would I need to use a CloudHSM?; There are two differences when managing keys in a custom key store backed by CloudHSM compared to the default AWS KMS key store. You cannot import key material into your custom key store and you cannot have AWS KMS automatically rotate keys. 
In all other respects, including the type of keys that can be generated, the way that keys use aliases, and how policies are defined, keys that are stored in a custom key store are managed in the same way as any other AWS KMS customer managed KMS key. /kms/faqs/;How does CloudHSM change the way KMS keys are managed?; No, only customer managed KMS keys can be stored and managed in an AWS KMS custom key store backed by CloudHSM. AWS managed KMS keys that are created on your behalf by other AWS services to encrypt your data are always generated and stored in the AWS KMS default key store. /kms/faqs/;Can I use a CloudHSM to store an AWS managed KMS key?; No, API requests to AWS KMS to use a KMS key to encrypt and decrypt data are handled in the same way. Authentication and authorization processes operate independently of where the key is stored. All activity using a key in a custom key store backed by CloudHSM is also logged to CloudTrail in the same way. However, the actual cryptographic operations happen exclusively in either the custom key store or the default AWS KMS key store. /kms/faqs/;Does integration with CloudHSM affect how encryption APIs function in KMS?; In addition to the activity that is logged to CloudTrail by AWS KMS, the use of a custom key store provides three further auditing mechanisms. First, CloudHSM also logs all API activity to CloudTrail, such as creating clusters and adding or removing HSMs. Second, each cluster also captures its own local logs to record user and key management activity. Third, each CloudHSM instance copies the local user and key management activity logs to AWS CloudWatch. /kms/faqs/;How can I audit the use of keys in a custom key store?; When you use an AWS KMS custom key store, you are responsible for ensuring that your keys are available for use by AWS KMS. Errors in configuration of CloudHSM and accidental deletion of key material within a CloudHSM cluster could impact availability. The number of HSMs you use and your choice of Availability Zones (AZs) can also affect the resilience of your cluster. As in any key management system, it is important to understand how the availability of keys can impact the recovery of your encrypted data. /kms/faqs/;What impact does using CloudHSM have on availability of keys?; The rate at which keys stored in an AWS KMS custom key store backed by CloudHSM can be used through AWS KMS API calls is lower than for keys stored in the default AWS KMS key store. See the AWS KMS Developer Guide for the current performance limits. /kms/faqs/;What are the performance limitations associated with CloudHSM?; AWS KMS prices are unaffected by the use of a custom key store. However, each custom key store does require that your AWS CloudHSM cluster contains at least two HSMs. These HSMs are charged at the standard AWS CloudHSM prices. There are no additional charges for using a custom key store. /kms/faqs/;What are the costs associated with using a custom key store backed by CloudHSM?; AWS KMS users that want to use a custom key store will need to set up an AWS CloudHSM cluster, add HSMs, manage HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks, so you should verify that you have the appropriate resources and organizational controls in place. /kms/faqs/;What additional skills and resources are required to configure CloudHSM?; No, the ability to import your own key material into an AWS KMS custom key store is not supported. 
Keys that are stored in a custom key store can only be generated in the HSMs that form your CloudHSM cluster. /kms/faqs/;Can I import keys into a custom key store?; No, the ability to migrate keys between the different types of AWS KMS key store is not currently supported. All keys must be created in the key store in which they will be used, except in situations where you import your own key material into the default AWS KMS key store. /kms/faqs/;Can I migrate keys between the default AWS KMS key store and a custom key store?; The ability to automatically rotate key material in an AWS KMS custom key store is not supported. Key rotation must be performed manually by creating new keys and re-mapping AWS KMS key aliases used by your application code to use the new keys for future encryption operations. /kms/faqs/;Can I rotate keys stored in a custom key store?; Yes, AWS KMS does not require exclusive access to your CloudHSM cluster. If you already have a cluster, you can use it as a custom key store and continue to use it for your other applications. However, if your cluster is supporting high non-AWS KMS workloads, you may experience reduced throughput for operations using KMS keys in your custom key store. Similarly, a high AWS KMS request rate to your custom key store could impact your other applications. /kms/faqs/;Can I use my CloudHSM cluster for other applications?; Visit the AWS CloudHSM website for an overview of the service and, for more details on configuring and using the service, read the AWS CloudHSM User Guide. /kms/faqs/;What is an external key store (XKS)?; XKS can help you comply with rules or regulations that require encryption keys to be stored and used outside of AWS under your control. /kms/faqs/;Why would I use an external key store?; Requests to AWS KMS from integrated AWS services on your behalf or from your own applications are forwarded to a component in your network called an XKS Proxy. The XKS Proxy implements an open source API specification that helps you and your key management vendor build a service that accepts these requests and forwards them to your key management infrastructure to use its keys for encryption and decryption. /kms/faqs/;How does AWS KMS connect to my external key manager?; Thales, Entrust, Salesforce, T-Systems, Atos, Fortanix, and HashiCorp have all begun to develop solutions that integrate with the XKS Proxy specification. For information about availability, pricing, and how to use solutions from these vendors, see their respective documentation. We encourage you and your key management infrastructure partner to leverage the open source XKS Proxy specification to build a solution that meets your needs. The API specification for XKS Proxy is published here. /kms/faqs/;Which external vendors support the XKS Proxy specification?; External keys support the following symmetric encryption operations: Encrypt, ReEncrypt, Decrypt, and GenerateDataKey. /kms/faqs/;Which AWS KMS features support external keys?; You can use XKS keys to encrypt data in any AWS service that integrates with AWS KMS using customer managed keys. See the list of supported services here. AWS services call the AWS KMS GenerateDataKey API to request a unique plaintext data key to encrypt your data. The plaintext data key is returned to the service along with an encrypted copy of the data key to be stored alongside your encrypted data. To produce the encrypted copy of the data key, the plaintext data key is first encrypted by a key stored in AWS KMS unique to your AWS account. 
This encrypted data key is then forwarded to your XKS Proxy implementation connected to your external key manager to be encrypted a second time under the key you define in your external key manager. The resulting double-encrypted data key is returned in the response to the GenerateDataKey API request. /kms/faqs/;How does XKS work with AWS services that integrate with AWS KMS for data encryption?; The network connection between AWS KMS, your XKS Proxy implementation, and your external key manager should be protected with a point-to-point encryption protocol like TLS. However, in order to protect your data leaving AWS KMS until it reaches your external key manager, AWS KMS first encrypts it with an internally managed KMS key in your account specific to each KMS key defined in your external key store. The resulting ciphertext is forwarded to your external key manager, which encrypts it using the key in your external key manager. Double encryption provides the security control that no ciphertext can ever be decrypted without using the key material in your external key manager. It also provides the security control that the ciphertext leaving the AWS network is encrypted using the FIPS 140 certified AWS KMS HSMs. Because your external key manager must be used to decrypt data, if you revoke access to AWS KMS, your underlying encrypted data becomes inaccessible. /kms/faqs/;What is double encryption and how does it work?; Yes. XKS keys can also be used from within your own applications when using a client-side symmetric encryption solution that uses AWS KMS as its key provider. AWS open source client-side encryption solutions like the AWS Encryption SDK, S3 Encryption Client, and DynamoDB Encryption Client support XKS keys. /kms/faqs/;I’ve already been using AWS KMS with standard KMS keys, imported KMS keys, or keys stored in my CloudHSM cluster. Can I migrate these KMS keys to XKS or re-encrypt existing data under XKS keys?;You can re-encrypt existing data under newly generated XKS keys, assuming the AWS service or your own application supports the action. Many AWS services will help you copy an encrypted resource and designate a new KMS key to use to encrypt the copy. You can configure the XKS key in the COPY command provided by the AWS service. You can re-encrypt client-side encrypted data in your own applications by calling the KMS ReEncrypt API and configuring the XKS key. /kms/faqs/;How is XKS priced in KMS?; No. Automatic key rotation provided by AWS KMS is not supported on XKS keys. This is because AWS KMS cannot generate new key material on your behalf, as you control the external key manager that performs all key generation and storage. In order to rotate an XKS key in AWS KMS, you need to create a brand new XKS key in your external key manager and associate that key with a new KMS key that you create using AWS KMS. You can then configure your AWS services or client-side encryption applications to use the new XKS key for future encryption operations. As long as previous XKS keys used to create earlier ciphertexts are still enabled in AWS KMS and available in your external key manager, you will be able to successfully make Decrypt API requests under those XKS keys. /kms/faqs/;If I disable, block, or delete keys in the external key store, where will my data still be accessible in the cloud?; To authenticate to your external key store proxy, AWS KMS signs all requests to the proxy using AWS SigV4 credentials that you configure on your proxy and provide to KMS. 
AWS KMS authenticates your external key store proxy using server-side TLS certificates. Optionally, your proxy can enable mutual TLS for additional assurance that it only accepts requests from AWS KMS. /kms/faqs/;How do I authenticate XKS proxy requests from AWS KMS to my external key manager?; All of the usual AWS KMS authorization mechanisms (IAM policies, AWS KMS key policies, and grants) that you use with other KMS keys work the same way for KMS keys in external key stores. /kms/faqs/;What types of authorization policies can I build for XKS keys?;In addition, you and/or your external key manager partners have the ability to implement a secondary layer of authorization controls based on request metadata included with each request sent from AWS KMS to the XKS Proxy. This metadata includes the calling AWS user/role, the KMS key ARN, and the specific KMS API that was requested. This allows you to apply fine-grained authorization policy on the use of a key in your external key manager beyond simply trusting any request from AWS KMS. The choice of policy enforcement using these request attributes is left to your individual XKS Proxy implementations. /kms/faqs/;How does logging and auditing work with XKS?; Availability risk: You are responsible for the availability of the XKS Proxy and external key material. This system must have high availability to ensure that whenever you need an XKS key to decrypt an encrypted resource or encrypt new data, AWS KMS can successfully connect to the XKS proxy, which itself can connect to your external key manager to complete the necessary cryptographic operation using the key. For example, suppose you encrypted an EBS volume using an XKS key and now you want to launch an EC2 instance and attach that encrypted volume. The EC2 service will pass the unique encrypted data key for that volume to AWS KMS to decrypt it so it can be provisioned in volatile memory of the Nitro card in order to decrypt and encrypt read/write operations to the volume. If your XKS Proxy or external key manager isn’t available to decrypt the volume key, your EC2 instance will fail to launch. In these types of failures, AWS KMS returns a KMSInvalidStateException stating that the XKS Proxy is not available. It is now up to you to determine why your XKS Proxy and key manager are unavailable based on the error messages provided by KMS. /kms/faqs/;What risks do I accept if I choose XKS instead of using standard KMS keys generated and stored in AWS KMS?;Durability risk: Because keys are under your control in systems outside of AWS, you are solely responsible for the durability of all external keys you create. If the external key for an XKS key is permanently lost or deleted, all ciphertext encrypted under the XKS key is unrecoverable. /organizations/faqs/;What is AWS Organizations?;AWS Organizations helps you centrally govern your environment as you scale your workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you to programmatically create new accounts and allocate resources, simplify billing by setting up a single payment method for all of your accounts, create groups of accounts to organize your workflows, and apply policies to these groups for governance. In addition, AWS Organizations is integrated with other AWS services so you can define central configurations, security mechanisms, and resource sharing across accounts in your organization. 
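Returning to the KMS entries above about re-encrypting existing ciphertext under a different customer managed key (for example, a key backed by an external key store), this is a minimal, hedged boto3 sketch; the file name and destination key ARN are placeholders, and KMS identifies the original key from the ciphertext metadata.

import boto3

kms = boto3.client("kms")

# Read a ciphertext that was previously produced by AWS KMS (placeholder file name).
with open("data.encrypted", "rb") as f:
    old_ciphertext = f.read()

# Re-encrypt it server-side under a different KMS key; the plaintext never leaves KMS.
response = kms.re_encrypt(
    CiphertextBlob=old_ciphertext,
    DestinationKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-NEW-KEY",  # placeholder
)

with open("data.encrypted", "wb") as f:
    f.write(response["CiphertextBlob"])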
/organizations/faqs/;Which central governance and management capabilities does AWS Organizations enable?;Automate AWS account creation and management, and provision resources with AWS CloudFormation StackSets Maintain a secure environment with policies and management of AWS security services Govern access to AWS services, resources, and regions Centrally manage policies across multiple AWS accounts Audit your environment for compliance View and manage costs with consolidated billing Configure AWS services across multiple accounts /organizations/faqs/;Which AWS Regions is AWS Organizations available in?;AWS Organizations is available in all AWS commercial Regions, AWS GovCloud (US) Regions, and China Regions. The service endpoints for AWS Organizations are located in US East (N. Virginia) for commercial organizations, AWS GovCloud (US-West) for AWS GovCloud (US) organizations, and the AWS China (Ningxia) Region, operated by NWCD. /organizations/faqs/;How do I get started?;To get started, you must first decide which of your AWS accounts will become the management account (formerly known as master account). You can either create a new AWS account or select an existing one. /organizations/faqs/;What is AWS Control Tower?;AWS Control Tower, built on AWS services such as AWS Organizations, offers the easiest way to set up and govern a new, secure, multi-account AWS environment. It establishes a landing zone, which is a well-architected, multi-account environment based on best-practice blueprints, and enables governance using guardrails you can choose. Guardrails are SCPs and AWS Config rules that implement governance for security, compliance, and operations. /organizations/faqs/;What is the difference between AWS Control Tower and AWS Organizations?;AWS Control Tower offers an abstracted, automated, and prescriptive experience on top of AWS Organizations. It automatically sets up AWS Organizations as the underlying AWS service to organize accounts and implements preventive guardrails using SCPs. Control Tower and Organizations work well together. You can use Control Tower to set up your environment and set guardrails, then using AWS Organizations, you can further create custom policies (such as tag, backup, or SCPs) that centrally control the use of AWS services and resources across multiple AWS accounts. /organizations/faqs/;AWS Control Tower uses guardrails. What is a guardrail?;Guardrails are pre-packaged SCP and AWS Config governance rules for security, operations, and compliance that customers can select and apply enterprise-wide or to specific groups of accounts. A guardrail is expressed in plain English, and enforces a specific governance policy for your AWS environment that can be enabled within an organizational unit (OU). /organizations/faqs/;When should I use AWS Control Tower?;AWS Control Tower is for customers who want to create or manage their multi-account AWS environment with built-in best practices. It offers prescriptive guidance to govern your AWS environment at scale and gives you control over your environment without sacrificing the speed and agility AWS provides for builders. You will benefit from AWS Control Tower if you are building a new AWS environment, starting out on your journey on AWS, starting a new cloud initiative, are completely new to AWS, or have an existing multi-account AWS environment. /organizations/faqs/;What is an organization?;An organization is a collection of AWS accounts that you can organize into a hierarchy and manage centrally. 
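As a complement to the getting-started answer above, the following is a small, hedged boto3 sketch of creating an organization from the account you intend to use as the management account and then creating a member account programmatically; the email address and account name are placeholders, and account creation is asynchronous.

import boto3

org = boto3.client("organizations")

# Turn the current account into the management account of a new organization.
org.create_organization(FeatureSet="ALL")

# Create a new member account; poll the returned request ID for completion.
resp = org.create_account(Email="dev-team@example.com", AccountName="dev-workloads")
status_id = resp["CreateAccountStatus"]["Id"]
print(org.describe_create_account_status(CreateAccountRequestId=status_id))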
/organizations/faqs/;What is an AWS account?;An AWS account is a container for your AWS resources. You create and manage your AWS resources in an AWS account, and the AWS account provides administrative capabilities for access and billing. /organizations/faqs/;What is a management account (formerly known as master account)?;A management account is the AWS account you use to create your organization. From the management account, you can create other accounts in your organization, invite and manage invitations for other accounts to join your organization, and remove accounts from your organization. You can also attach policies to entities such as administrative roots, organizational units (OUs), or accounts within your organization. The management account is the ultimate owner of the organization, having final control over security, infrastructure, and finance policies. This account has the role of a payer account and is responsible for paying all charges accrued by the accounts in its organization. You cannot change which account in your organization is the management account. /organizations/faqs/;What is a member account?;A member account is an AWS account, other than the management account, that is part of an organization. If you are an administrator of an organization, you can create member accounts in the organization and invite existing accounts to join the organization. You also can apply policies to member accounts. A member account can belong to only one organization at a time. /organizations/faqs/;What is an administrative root?;An administrative root is contained in the management account and is the starting point for organizing your AWS accounts. The administrative root is the top-most container in your organization’s hierarchy. Under this root, you can create OUs to logically group your accounts and organize these OUs into a hierarchy that best matches your business needs. /organizations/faqs/;What is an organizational unit (OU)?;An organizational unit (OU) is a group of AWS accounts within an organization. An OU can also contain other OUs enabling you to create a hierarchy. For example, you can group all accounts that belong to the same department into a departmental OU. Similarly, you can group all accounts running security services into a security OU. OUs are useful when you need to apply the same controls to a subset of accounts in your organization. Nesting OUs enables smaller units of management. For example, you can create OUs for each workload, then create two nested OUs in each workload OU to divide production workloads from pre-production. These OUs inherit the policies from the parent OU in addition to any controls assigned directly to the team-level OU. /organizations/faqs/;What is a policy?;A policy is a “document” with one or more statements that define the controls that you want to apply to a group of AWS accounts. AWS Organizations supports the following policies: /organizations/faqs/;Can I change which AWS account is the management account?;No. You cannot change which AWS account is the management account. Therefore, you should select your management account carefully. /organizations/faqs/;How do I add an AWS account to my organization?;Use one of the following two methods to add an AWS account to your organization: /organizations/faqs/;Can an AWS account be a member of more than one organization?;No. An AWS account can be a member of only one organization at a time. 
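The entries above note two ways to add an AWS account to an organization: creating a new account or inviting an existing one. A minimal, hedged boto3 sketch of the invitation path follows; the account ID is a placeholder, and the invited account still has to accept the resulting handshake before it becomes a member.

import boto3

org = boto3.client("organizations")

# Invite an existing standalone AWS account into the organization (placeholder ID).
handshake = org.invite_account_to_organization(
    Target={"Id": "123456789012", "Type": "ACCOUNT"},
    Notes="Please join our organization.",
)
print(handshake["Handshake"]["Id"])   # the invited account accepts this handshake to join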
/organizations/faqs/;How can I access an AWS account that was created in my organization?;As part of AWS account creation, AWS Organizations creates an IAM role with full administrative permissions in the new account. IAM users and IAM roles with appropriate permissions in the master account can assume this IAM role to gain access to the newly created account. /organizations/faqs/;Can I set up multi-factor authentication (MFA) on the AWS account that I create in my organization programmatically?;No. This currently is not supported. /organizations/faqs/;Can I move an AWS account that I have created using AWS Organizations to another organization?;Yes. However, you must first remove the account from your organization and make it a standalone account (see below). After making the account standalone, it can then be invited to join another organization. /organizations/faqs/;Can I remove an AWS account that I created using Organizations and make it a standalone account?;Yes. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, AWS does not collect all of the information required of standalone accounts. For each account that you want to make standalone, you need to update this information, which can include: providing contact information, agreeing to the AWS Customer Agreement, providing a valid payment method, and choosing a support plan option. AWS uses the payment method to charge for any billable (not AWS Free Tier) AWS activity that occurs while the account is not attached to an organization. For more information, see Removing a Member Account from Your Organization. /organizations/faqs/;How many AWS accounts can I manage in my organization?;This can vary. If you need additional accounts, go to the AWS Support Center and open a support case to request an increase. /organizations/faqs/;How can I remove an AWS member account from an organization?;You can remove a member account by using one of the following two methods. You might have to provide additional information to remove an account that you created using Organizations. If the attempt to remove an account fails, go to the AWS Support Center and ask for help with removing an account. /organizations/faqs/;How can I create an organizational unit (OU)?;To create an OU, follow these steps: /organizations/faqs/;How can I add a member AWS account to an OU?;Follow these steps to add member accounts to an OU: /organizations/faqs/;Can an OU be a member of multiple OUs?;No. An OU can be a member of only one OU at a time. /organizations/faqs/;How many levels can I have in my OU hierarchy?;You can nest your OUs five levels deep. Including root and AWS accounts created in the lowest OUs, your hierarchy can be five levels deep. /organizations/faqs/;At what levels of my organization can I apply a policy?;You can attach a policy to the root of your organization (applies to all accounts in your organization), to individual organizational units (OUs), which applies to all accounts in the OU including nested OUs, or to individual accounts. /organizations/faqs/;How can I attach a policy?;You can attach a policy in one of two ways: /organizations/faqs/;Are policies inherited through hierarchical connections in my organization?;Yes. For example, let’s assume that you have arranged your AWS accounts into OUs according to your application development stages: DEV, TEST, and PROD. 
Policy P1 is attached to the organization’s root, policy P2 is attached to the DEV OU, and policy P3 is attached to AWS account A1 in the DEV OU. With this setup, P1+P2+P3 all apply to account A1. For more information, see About Service Control Policies. /organizations/faqs/;What types of policies does AWS Organizations support?;Currently, AWS Organizations supports the following policies: /organizations/faqs/;What is a Service Control Policy (SCP)?;Service Control Policies (SCPs) allow you to control which AWS service actions are accessible to principals (account root, IAM users, and IAM roles) in the accounts of your organization. An SCP is necessary but not sufficient on its own to grant principals in an account access to resources. The effective permission on a principal in an account that has an SCP attached is the intersection of what is allowed explicitly in the SCP and what is allowed explicitly in the permissions attached to the principal. For example, if an SCP applied to an account states that the only actions allowed are Amazon EC2 actions, and the permissions on a principal in the same AWS account allow both EC2 actions and Amazon S3 actions, the principal is able to access only the EC2 actions. Principals in a member account (including the root user for the member account) cannot remove or change SCPs that are applied to that account. /organizations/faqs/;What does an SCP look like?;SCPs follow the same rules and grammar as IAM policies. For information about SCP syntax, see SCP Syntax. For example SCPs, see Example Service Control Policies. /organizations/faqs/;If I attach an empty SCP to an AWS account, does that mean that I allow all AWS service actions in that AWS account?;No. SCPs behave the same way as IAM policies: an empty IAM policy is equivalent to a default DENY. Attaching an empty SCP to an account is equivalent to attaching a policy that explicitly denies all actions. /organizations/faqs/;What are the effective permissions if I apply an SCP to my organization and my principals also have IAM policies?;"The effective permissions granted to a principal (account root, IAM user, and IAM role) in an AWS account with an SCP applied are the intersection between those allowed by the SCP and the permissions granted to the principal by IAM permission policies. For example, if an IAM user has ""Allow"": ""ec2:* "" and ""Allow"": ""sqs:* "", and the SCP attached to the account has ""Allow"": ""ec2:* "" and ""Allow"": ""s3:* "", the resultant permission for the IAM user is ""Allow"": ""ec2:* "". The principal cannot perform any Amazon SQS (not allowed by the SCP) or S3 actions (not granted by the IAM policy)." /organizations/faqs/;Can I simulate the effect of an SCP on an AWS account?;Yes, the IAM policy simulator can include the effects of SCPs. You can use the policy simulator in a member account in your organization to understand the effect on individual principals in that account. An administrator in a member account with the appropriate AWS Organizations permissions can see if an SCP is affecting the access for the principals (account root, IAM user, and IAM role) in your member account. For more information, see Service Control Policies. /organizations/faqs/;Can I create and manage an organization without enforcing an SCP?;Yes. You decide which policies you want to enforce. For example, you could create an organization that takes advantage only of the consolidated billing functionality. 
This allows you to have a single-payer account for all accounts in your organization and automatically receive default tiered-pricing benefits. /organizations/faqs/;What does AWS Organizations cost?;AWS Organizations is offered at no additional charge. /organizations/faqs/;Who pays for usage incurred by users under an AWS member account in my organization?;The owner of the management account is responsible for paying for all usage, data, and resources used by the accounts in the organization. /organizations/faqs/;Will my bill reflect the organizational unit structure that I created in my organization?;No. For now, your bill will not reflect the structure that you have defined in your organization. You can use cost allocation tags in individual AWS accounts to categorize and track your AWS costs, and this allocation will be visible in the consolidated bill for your organization. /organizations/faqs/;Why should I enable an AWS service integrated with AWS Organizations?;AWS services have integrated with AWS Organizations to provide customers with centralized management and configuration across accounts in their organization. This enables you to manage services across your accounts from a single place, simplifying deployment and configurations. /organizations/faqs/;Which AWS services are currently integrated with AWS Organizations?;For a list of AWS services integrated with AWS Organizations, see AWS Services That You Can Use with AWS Organizations. /organizations/faqs/;How do I enable an AWS service integration?;To get started using an AWS service integrated with AWS Organizations, navigate in the AWS Management Console to that service and enable the integration. /shield/faqs/;What is AWS Shield Standard?;AWS Shield Standard provides protection for all AWS customers against common and most frequently occurring infrastructure (layer 3 and 4) attacks like SYN/UDP floods, reflection attacks, and others to support high availability of your applications on AWS. /shield/faqs/;What is AWS Shield Advanced?;AWS Shield Advanced provides enhanced protections for your applications running on protected Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53 resources against more sophisticated and larger attacks. AWS Shield Advanced protection provides always-on, flow-based monitoring of network traffic and active application monitoring to provide near real-time notifications of suspected DDoS incidents. AWS Shield Advanced also employs advanced attack mitigation and routing techniques for automatically mitigating attacks. Customers with Business or Enterprise support can also engage the Shield Response Team (SRT) 24x7 to manage and mitigate their application layer DDoS attacks. The DDoS cost protection for scaling protects your AWS bill against higher fees due to usage spikes from protected Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 during a DDoS attack. /shield/faqs/;What is DDoS cost protection for scaling?;AWS Shield Advanced includes DDoS cost protection, a safeguard from scaling charges as a result of a DDoS attack that causes usage spikes on protected Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, or Amazon Route 53. If any of the AWS Shield Advanced protected resources scale up in response to a DDoS attack, you can request credits via the regular AWS Support channel. 
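Returning to the Service Control Policy entries above, the sketch below is a hedged boto3 example that creates a simple SCP allowing only Amazon EC2 and Amazon S3 actions and attaches it to an organizational unit; the OU ID is a placeholder, and the effective permissions in member accounts remain the intersection of this SCP and each principal's IAM policies, as described earlier.

import boto3, json

org = boto3.client("organizations")

# An example SCP document: member accounts may only use EC2 and S3 actions.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ec2:*", "s3:*"], "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="AllowEC2AndS3Only",
    Description="Example SCP limiting accounts to EC2 and S3 actions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an organizational unit (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",
)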
/shield/faqs/;Can I use AWS Shield to protect web sites not hosted in AWS?;Yes, AWS Shield is integrated with Amazon CloudFront, which supports custom origins outside of AWS. /shield/faqs/;Can I use IPv6 with all AWS Shield features?;Yes. All of AWS Shield’s detection and mitigations work with IPv6 and IPv4 without any discernible changes to performance, scalability, or availability of the service. /shield/faqs/;How can I test AWS Shield?;AWS Acceptable Use Policy describes permitted and prohibited behavior on AWS, and it includes descriptions of prohibited security violations and network abuse. However, because DDoS simulation testing, penetration testing, and other simulated events are frequently indistinguishable from these activities, we have established policies for customers to request permission to conduct DDoS tests, penetration tests, and vulnerability scans. Visit our Penetration testing page and DDoS Simulation Testing policy for more details. /shield/faqs/;In which AWS regions is AWS Shield Standard available?;AWS Shield Standard is available on all AWS services in every AWS Region and AWS edge location worldwide. /shield/faqs/;In which AWS regions is AWS Shield Advanced available?;AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations worldwide. You can protect your web applications hosted anywhere in the world by deploying Amazon CloudFront in front of your application. Your origin servers can be Amazon Simple Storage Service (S3), Amazon EC2, Elastic Load Balancing, or a custom server outside of AWS. You can also enable AWS Shield Advanced directly on Elastic Load Balancing or Amazon EC2 in the following AWS Regions - Northern Virginia, Ohio, Oregon, Northern California, Montreal, São Paulo, Ireland, Frankfurt, London, Paris, Stockholm, Singapore, Tokyo, Sydney, Seoul, Mumbai, Milan, and Cape Town. /shield/faqs/;Is AWS Shield HIPAA eligible?;Yes, AWS has expanded its HIPAA compliance program to include AWS Shield as a HIPAA eligible service. If you have an executed Business Associate Agreement (BAA) with AWS, you can use AWS Shield to safeguard your web applications running on AWS from Distributed Denial of Service (DDoS) attacks. For more information, see HIPAA Compliance. /shield/faqs/;What types of attacks can AWS Shield Standard help protect me from?;AWS Shield Standard automatically provides protection for web applications running on AWS against the most common, frequently occurring infrastructure layer attacks like UDP floods and state exhaustion attacks like TCP SYN floods. Customers can also use AWS WAF to protect against application layer attacks like HTTP POST or GET floods. Find more details on how to deploy application layer protections in the AWS WAF and AWS Shield Advanced Developer Guide. /shield/faqs/;How many resources can I enable for AWS Shield Standard protection?;There is no limit on the number of resources subject to AWS Shield Standard protection. You can get the full benefits of AWS Shield Standard protections by following the best practices of DDoS resiliency on AWS. /shield/faqs/;How many resources can I enable for AWS Shield Advanced protection?;You can enable up to 1000 AWS resources of each supported resource type (Classic / Application Load Balancers, Amazon CloudFront distributions, Amazon Route 53 hosted zones, Elastic IPs, AWS Global Accelerator accelerators) for AWS Shield Advanced protection.
If you want to enable more than 1000, you can request a limit increase by creating an AWS Support case. /shield/faqs/;Can I activate AWS Shield Advanced protection via API?;Yes. AWS Shield Advanced can be activated via APIs. You can also add or remove AWS resources from AWS Shield Advanced protection via APIs. /shield/faqs/;How quickly are attacks mitigated?;Typically, 99% of infrastructure layer attacks detected by AWS Shield are mitigated in less than 1 second for attacks on Amazon CloudFront and Amazon Route 53, and less than 5 minutes for attacks on Elastic Load Balancing. The remaining 1% of infrastructure attacks are typically mitigated in under 20 minutes. Application layer attacks are mitigated by writing rules on AWS WAF, which are inspected and mitigated inline with incoming traffic. /shield/faqs/;Can I protect resources outside of AWS?;Yes, a number of our customers choose to use AWS endpoints in front of their backend instances. Most commonly, these endpoints are our globally distributed services of CloudFront and Route 53. These services are also our best practice suggestions for DDoS resiliency. Customers can then protect these CloudFront distributions and Route 53 hosted zones with Shield Advanced. Please note that you need to lock down your backend resources to only accept traffic from these AWS endpoints. /shield/faqs/;What tools does AWS Shield Advanced provide me to mitigate DDoS attacks?;AWS Shield Advanced manages mitigation of layer 3 and layer 4 DDoS attacks. This means that your designated applications are protected from attacks like UDP floods or TCP SYN floods. In addition, for application layer (layer 7) attacks, AWS Shield Advanced can detect attacks like HTTP floods and DNS floods. You can use AWS WAF to apply your own mitigations, or, if you have Business or Enterprise support, you can engage the 24x7 AWS Shield Response Team (SRT), who can write rules on your behalf to mitigate Layer 7 DDoS attacks. /shield/faqs/;Do I need a special support plan to contact the AWS Shield Response Team?;Yes, you need a Business or Enterprise support plan in order to escalate to or engage the AWS Shield Response Team (SRT). See the AWS Support website for more details about AWS Support plans. /shield/faqs/;How can I contact the AWS Shield Response Team?;You can engage the AWS Shield Response Team (SRT) via regular AWS support, or contact AWS Support. /shield/faqs/;How quickly can I engage the AWS Shield Response Team (SRT)?;Response times for the SRT depend on the AWS Support plan you are subscribed to. We will make every reasonable effort to respond to your initial request within the corresponding timeframes. See the AWS Support website for more details about AWS Support plans. /shield/faqs/;How quickly will I get an attack notification?;Typically, AWS Shield Advanced provides notification of an attack within a few minutes of attack detection. /shield/faqs/;Can I get a history of all DDoS attacks on my AWS resources?;Yes. With AWS Shield Advanced you will be able to see the history of all incidents in the trailing 13 months. /shield/faqs/;Can I see attacks across AWS?;Yes, AWS Shield Advanced customers get access to the Global threat environment dashboard, which gives an anonymized and sampled view of all DDoS attacks seen on AWS within the last 2 weeks.
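Since the entries above note that Shield Advanced can be activated and resources added to protection via APIs, here is a minimal boto3 sketch; the protection name and resource ARN are hypothetical placeholders:

```python
import boto3

shield = boto3.client("shield")

# One-time activation of AWS Shield Advanced for the account
# (fails with ResourceAlreadyExistsException if already subscribed).
shield.create_subscription()

# Add a resource to Shield Advanced protection; the ARN is a placeholder
# for a CloudFront distribution, ELB, Elastic IP, or similar resource.
shield.create_protection(
    Name="cloudfront-protection-example",
    ResourceArn="arn:aws:cloudfront::123456789012:distribution/EXAMPLE123",
)

# Confirm which resources are currently protected.
for protection in shield.list_protections()["Protections"]:
    print(protection["Name"], protection["ResourceArn"])
```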
/shield/faqs/;How can I see if my AWS WAF rules are working?;AWS WAF includes two different ways to see how your website is being protected: one-minute metrics are available in CloudWatch and Sampled Web Requests are available in the AWS WAF API or AWS Management Console. Additionally, you can enable comprehensive logs that are delivered through Amazon Kinesis Firehose to a destination of your choice. These allow you to see which requests were blocked, allowed, or counted and what rule was matched on a given request (i.e., this web request was blocked due to an IP address condition, etc.). For more information, see the AWS WAF and AWS Shield Advanced Developer Guide. /shield/faqs/;I need to do a pen-test to evaluate the service and my application. What is the approved procedure?;Please refer to Penetration testing on AWS. However, this does not include a “DDoS load test”, which is not authorized on AWS. If you'd like to do a live DDoS test, you can request approval by raising a ticket through AWS Support. Approval involves agreement on the conditions of the test between AWS, the customer, and the DDoS test vendor. Please note that we only work with approved DDoS test vendors, and the whole process takes 3-4 weeks. /shield/faqs/;How am I charged for AWS Shield Advanced?;With AWS Shield Advanced, you pay a fee of $3,000 per month per organization. In addition, you also pay AWS Shield Advanced Data Transfer usage fees for AWS resources enabled for advanced protection. AWS Shield Advanced charges are in addition to standard fees on Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. Please see the AWS Shield Pricing page for more details. /shield/faqs/;Can I choose to only protect some of my resources with AWS Shield Advanced?;Yes, AWS Shield Advanced allows you the flexibility to choose the resources that you'd like to protect. You will only be charged for AWS Shield Advanced Data Transfer on these protected resources. /shield/faqs/;How can I enable AWS Shield Advanced across multiple AWS Accounts?;If your organization has multiple AWS accounts, you can subscribe multiple AWS accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts. /waf/faqs/;What is AWS WAF?;AWS WAF is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define. These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection and cross-site scripting. /waf/faqs/;How does AWS WAF block or allow traffic?;As the underlying service receives requests for your web sites, it forwards those requests to AWS WAF for inspection against your rules. Once a request meets a condition defined in your rules, AWS WAF instructs the underlying service to either block or allow the request based on the action you define. /waf/faqs/;How does AWS WAF protect my web site or application?;AWS WAF is tightly integrated with Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync – services that AWS customers commonly use to deliver content for their websites and applications.
When you use AWS WAF on Amazon CloudFront, your rules run in all AWS Edge Locations, located around the world close to your end users. This means security doesn’t come at the expense of performance. Blocked requests are stopped before they reach your web servers. When you use AWS WAF on regional services, such as Application Load Balancer, Amazon API Gateway, and AWS AppSync, your rules run in the region and can be used to protect internet-facing resources as well as internal resources. /waf/faqs/;Can I use AWS WAF to protect web sites not hosted in AWS?;Yes, AWS WAF is integrated with Amazon CloudFront, which supports custom origins outside of AWS. /waf/faqs/;Which types of attacks can AWS WAF help me to stop?;AWS WAF helps protect your website from common attack techniques like SQL injection and Cross-Site Scripting (XSS). In addition, you can create rules that can block or rate-limit traffic from specific user-agents, from specific IP addresses, or that contain particular request headers. See the AWS WAF Developer Guide for examples. /waf/faqs/;Which bot mitigation capabilities are available with AWS WAF?;AWS WAF Bot Control gives you visibility and control over common and pervasive bot traffic to your applications. With Bot Control, you can easily monitor, block, or rate-limit pervasive bots, such as scrapers, scanners, and crawlers, and you can allow common bots, such as status monitors and search engines. You can use the Bot Control managed rule group alongside other Managed Rules for WAF or with your own custom WAF rules to protect your applications. See the AWS WAF Bot Control section in the developer guide. /waf/faqs/;Can I get a history of all AWS WAF API calls made on my account for security, operational or compliance auditing?;Yes. To receive a history of all AWS WAF API calls made on your account, you simply turn on AWS CloudTrail in the AWS Management Console for CloudTrail. For more information, visit the AWS CloudTrail home page or the AWS WAF Developer Guide. /waf/faqs/;Does AWS WAF support IPv6?;Yes, support for IPv6 allows AWS WAF to inspect HTTP/S requests coming from both IPv6 and IPv4 addresses. /waf/faqs/;Does the IPSet match condition for an AWS WAF Rule support IPv6?;Yes, you can set up new IPv6 match condition(s) for new and existing WebACLs, as per the documentation. /waf/faqs/;Can I expect to see IPv6 addresses appear in the AWS WAF sampled requests where applicable?;Yes. The sampled requests will show the IPv6 address where applicable. /waf/faqs/;Can I use IPv6 with all AWS WAF features?;Yes. You will be able to use all the existing features for traffic both over IPv6 and IPv4 without any discernible changes to performance, scalability or availability of the service. /waf/faqs/;What services does AWS WAF support?;AWS WAF can be deployed on Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync. As part of Amazon CloudFront, it can be part of your Content Distribution Network (CDN), protecting your resources and content at the Edge locations. As part of the Application Load Balancer, it can protect your origin web servers running behind the ALBs. As part of Amazon API Gateway, it can help secure and protect your REST APIs. As part of AWS AppSync, it can help secure and protect your GraphQL APIs. /waf/faqs/;In what AWS Regions is AWS WAF available?;Please refer to the AWS Region Services table. /waf/faqs/;Is AWS WAF HIPAA eligible?;Yes, AWS has expanded its HIPAA compliance program to include AWS WAF as a HIPAA eligible service.
If you have an executed Business Associate Agreement (BAA) with AWS, you can use AWS WAF to protect your web applications from common web exploits. For more information, see HIPAA Compliance. /waf/faqs/;How does AWS WAF pricing work? Are there any upfront costs?;AWS WAF charges based on the number of web access control lists (web ACLs) that you create, the number of rules that you add per web ACL, and the number of web requests that you receive. There are no upfront commitments. AWS WAF charges are in addition to Amazon CloudFront pricing, the Application Load Balancer (ALB) pricing, Amazon API Gateway pricing, and/or AWS AppSync pricing. /waf/faqs/;What is a Rate-based Rule in AWS WAF?;Rate-based Rules are a type of Rule that can be configured in AWS WAF, allowing you to specify the number of web requests that are allowed from a client IP in a trailing, continuously updated, 5 minute period. If an IP address breaches the configured limit, new requests will be blocked until the request rate falls below the configured threshold. /waf/faqs/;How does a Rate-based Rule compare to a regular AWS WAF Rule?;Rate-based Rules are similar to regular Rules, with one addition: the ability to configure a rate-based threshold. If, for example, the threshold for the Rate-based Rule is set to 2,000, the rule will block all IPs that have more than 2,000 requests in the last 5 minute interval. A Rate-based Rule can also contain any other AWS WAF Condition that is available for a regular rule. /waf/faqs/;What does the Rate-based Rule cost?;A Rate-based Rule costs the same as a regular AWS WAF Rule, which is $1 per rule per WebACL per month. /waf/faqs/;What are the use cases for the Rate-based Rule?;Here are some popular use cases customers can address with Rate-based Rules: /waf/faqs/;Are the existing matching conditions compatible with the Rate-based Rule?;Yes. Rate-based Rules are compatible with existing AWS WAF match conditions. This allows you to further refine your match criteria and limit rate-based mitigations to specific URLs of your website or traffic coming from specific referrers (or user agents) or add other custom match criteria. /waf/faqs/;Can I use a Rate-based Rule to mitigate web-layer DDoS attacks?;Yes. This new rule type is designed to protect you from use cases such as web-layer DDoS attacks, brute-force login attempts, and bad bots. /waf/faqs/;What visibility features do Rate-based Rules offer?;Rate-based Rules support all the visibility features currently available on the regular AWS WAF Rules. Additionally, you will get visibility into the IP addresses blocked as a result of the Rate-based Rule. /waf/faqs/;Can I use a Rate-based Rule to limit access to certain parts of my webpage?;Yes. Here is an example. Suppose that you want to limit requests to the login page on your website. To do this, you could add the following string match condition to a rate-based rule: /waf/faqs/;Can I exempt certain high-traffic source IP ranges from being blocked by my Rate-based Rule(s)?;Yes. You can do this by having a separate IP match condition that allows the request within the Rate-based Rule. /waf/faqs/;How accurate is your GeoIP database?;The accuracy of the IP Address to country lookup database varies by region. Based on recent tests, our overall accuracy for the IP address to country mapping is 99.8%.
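The Rate-based Rule behavior described above (blocking client IPs that exceed a request threshold over a trailing 5-minute window) can be sketched with boto3. Note that the entries above describe the classic AWS WAF console and pricing; this sketch uses the current wafv2 API to express the same idea, and the web ACL name and 2,000-request limit are illustrative assumptions:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

visibility = {
    "SampledRequestsEnabled": True,
    "CloudWatchMetricsEnabled": True,
    "MetricName": "rate-limit-demo",
}

wafv2.create_web_acl(
    Name="rate-limit-demo",            # hypothetical web ACL name
    Scope="REGIONAL",                  # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    VisibilityConfig=visibility,
    Rules=[
        {
            "Name": "limit-per-ip",
            "Priority": 0,
            # Block any client IP that exceeds 2,000 requests in the
            # trailing 5-minute window, mirroring the example above.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {**visibility, "MetricName": "limit-per-ip"},
        }
    ],
)
```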
/waf/faqs/;What are Managed Rules for AWS WAF?;Managed Rules are an easy way to deploy pre-configured rules to protect your applications against common threats such as application vulnerabilities (for example, OWASP), bots, or Common Vulnerabilities and Exposures (CVE). AWS Managed Rules for AWS WAF are managed by AWS, whereas Managed Rules from AWS Marketplace are managed by third-party security sellers. /waf/faqs/;How can I subscribe to Managed Rules through AWS Marketplace?;You can subscribe to a Managed Rule provided by a Marketplace security Seller from the AWS WAF console or from the AWS Marketplace. All subscribed Managed Rules will be available for you to add to an AWS WAF web ACL. /waf/faqs/;Can I use Managed Rules along with my existing AWS WAF rules?;Yes, you can use Managed Rules along with your custom AWS WAF rules. You can add Managed Rules to your existing AWS WAF web ACL to which you might have already added your own rules. /waf/faqs/;Will Managed Rules add to my existing AWS WAF limit on number of rules?;The number of rules inside a Managed Rule does not count towards your limit. However, each Managed Rule added to your web ACL will count as 1 rule. /waf/faqs/;How can I disable a Managed Rule?;You can add a Managed Rule to a web ACL or remove it from the web ACL anytime. The Managed Rules are disabled once you disassociate a Managed Rule from any web ACLs. /waf/faqs/;How can I test a Managed Rule?;AWS WAF allows you to configure a “count” action for a Managed Rule, which counts the number of web requests that are matched by the rules inside the Managed Rule. You can look at the number of counted web requests to estimate how many of your web requests would be blocked if you enable the Managed Rule. /waf/faqs/;Can I configure custom error pages?;Yes, you can configure CloudFront to present a custom error page when requests are blocked. Please see the CloudFront Developer Guide for more information. /waf/faqs/;How long does it take AWS WAF to propagate my rules?;After an initial setup, adding or changing rules typically takes around a minute to propagate worldwide. /waf/faqs/;How can I see if my rules are working?;AWS WAF includes two different ways to see how your website is being protected: one-minute metrics are available in CloudWatch and Sampled Web Requests are available in the AWS WAF API or management console. These allow you to see which requests were blocked, allowed, or counted and what rule was matched on a given request (i.e., this web request was blocked due to an IP address condition, etc.). For more information, see the AWS WAF Developer Guide. /waf/faqs/;How can I test my rules?;AWS WAF allows you to configure a “count” action for rules, which counts the number of web requests that meet your rule conditions. You can look at the number of counted web requests to estimate how many of your web requests would be blocked or allowed if you enable the rule. /waf/faqs/;Can AWS WAF inspect HTTPS traffic?;Yes. AWS WAF helps protect applications and can inspect web requests transmitted over HTTP or HTTPS. /waf/faqs/;How does Account Takeover Prevention safeguard the credential under inspection?;Traffic between user devices and your application is secured by the SSL/TLS protocol that you configure for the AWS service you use to front your application, such as Amazon CloudFront, Application Load Balancer, Amazon API Gateway, or AWS AppSync.
Once a user credential reaches AWS WAF, AWS WAF inspects the credential and then immediately hashes and discards it, and the credential never leaves the AWS network. Any communication between AWS services you use in your application and AWS WAF is encrypted in transit and at rest. /waf/faqs/;How does Account Takeover Prevention compare to Bot Control?;Bot Control gives you visibility and control over common and pervasive bot traffic that can consume resources, skew metrics, cause downtime, and perform other undesired activities. Bot Control checks various header fields and request properties against known bot signatures to detect and categorize automated bots, such as scrapers, scanners, and crawlers. /waf/faqs/;How do I get started with Account Takeover Prevention and AWS WAF?;On the AWS WAF console, create a new web ACL, or modify an existing web ACL if you are using AWS WAF already. You can use the wizard to help you configure basic settings, such as which resource you want to protect and which rules to add. When prompted to add rules, select Add Managed Rules and then select Account Takeover Prevention from the list of managed rules. To configure ATP, enter the URL of your application’s login page and indicate where the user name and password form fields are located within the request’s body. /waf/faqs/;What benefit does the JavaScript SDK or Mobile SDK provide?;The JavaScript and Mobile SDKs provide additional telemetry on user devices that attempt to log in to your application to better protect your application against automated login attempts by bots. You do not need to use one of the SDKs, but we recommend that you do so for additional protection. /waf/faqs/;How do I customize the default behavior of Account Takeover Prevention?;When ATP determines that a user’s credential has been compromised, it generates a label to indicate a match. By default, AWS WAF automatically blocks login attempts that are determined to be malicious or anomalous (for example, abnormal levels of failed login attempts, repeat offenders, and login attempts from bots). You can change how AWS WAF responds to matches by writing AWS WAF rules that act on the label. /amplify/faqs/;What is AWS Amplify?;AWS Amplify consists of a set of tools (open source framework, visual development environment, console) and services (web app and static website hosting) to accelerate the development of mobile and web applications on AWS. Amplify's open source framework includes an opinionated set of libraries, UI components, and a command line interface (CLI) to build an app backend and integrate it with your iOS, Android, Web, and React Native apps. The framework leverages a core set of AWS Cloud Services to offer capabilities including offline data, authentication, analytics, push notifications, and bots at high scale. Amplify Studio further simplifies the configuration of backends and frontend UIs with a visual point-and-click experience that works seamlessly with the Amplify CLI. Amplify Studio also includes functionality for managing app content and users. AWS Amplify also offers a fully managed web app and static website hosting service to host your front-end web app, create or delete backend environments, and set up CI/CD on the front end and backend. Finally, as part of the broader set of front-end web and mobile development tools and services, you can use AWS Device Farm for testing apps on real iOS devices, Android devices, and web browsers.
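The “count”-based testing of a Managed Rule described in the AWS WAF entries above can be sketched with the wafv2 API as follows; the web ACL name, ID, and lock token are hypothetical placeholders that a real call would first retrieve with get_web_acl, and update_web_acl replaces the full rule list, so any existing rules would need to be included as well:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

managed_rule = {
    "Name": "common-rule-set",
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    # Count matching requests instead of blocking them while evaluating
    # the managed rule group, as described in the testing FAQ above.
    "OverrideAction": {"Count": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "common-rule-set",
    },
}

wafv2.update_web_acl(
    Name="example-web-acl",            # hypothetical existing web ACL
    Scope="REGIONAL",
    Id="example-web-acl-id",           # placeholder; from get_web_acl
    LockToken="example-lock-token",    # placeholder; from get_web_acl
    DefaultAction={"Allow": {}},
    Rules=[managed_rule],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "example-web-acl",
    },
)
```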
/amplify/faqs/;What does it cost to use AWS Amplify?;When you use Amplify's open source framework (libraries, UI components, CLI) or Amplify Studio, you pay only for the underlying AWS services you use. There are no additional charges for using these tools. To learn about pricing for AWS Amplify Hosting, Amplify’s fully managed web app and static website hosting service, visit the AWS Amplify pricing page. To learn about pricing for AWS Device Farm, visit the AWS Device Farm pricing page. /amplify/faqs/;How does hosting with AWS Amplify relate to Amplify's open source framework?;AWS Amplify consists of tools (open source framework and visual development environment) and a fully managed web hosting service. The tools in the framework (libraries, UI components, CLI), Amplify Studio, the console, and the static web hosting service can be used together or individually. For example, you can go to AWS Amplify from the AWS console to deploy and host Single Page App (SPA) frontends and static websites, whether or not they use Amplify libraries. If you are using the Amplify CLI to configure backend resources for your app, AWS Amplify's static web hosting service offers additional functionality. On each check-in, AWS Amplify provisions or updates these backend resources prior to deploying your front end. There is support for a variety of configurations, such as isolated backend deployments per branch or shared backend deployments across branches when you use AWS Amplify's web hosting service. /amplify/faqs/;Where can I find the latest news on AWS Amplify?;Visit our blog and What’s New page. /amplify/faqs/;What can I do with the Amplify libraries, CLI, and Amplify Studio?;With the Amplify libraries, you can quickly add features such as offline data, multifactor authentication, analytics, and others to your application with a few lines of code. You can configure the underlying cloud services like AWS AppSync, Amazon Cognito, Amazon Pinpoint, AWS Lambda, Amazon S3, or Amazon Lex directly from the Amplify CLI or Amplify Studio with intuitive guided workflows, minimizing the time required to set up and manage your backend services. /amplify/faqs/;What languages and platforms do Amplify libraries support?;Amplify libraries support iOS, Android, Web, Flutter, and React Native apps. For Web apps, there is deep integration with React, Ionic, Angular, and Vue.js. /amplify/faqs/;Can I use the Amplify libraries even if I do not use the CLI?;Yes. The libraries can be used to access backend resources that were created without the Amplify CLI. /amplify/faqs/;How do Amplify features work with AWS cloud services?;Amplify features are organized based on the use cases you need to integrate with your app, such as offline data, multifactor authentication, analytics, and others. When you configure these features using the Amplify CLI or the Amplify Studio, the necessary AWS cloud services are provisioned for you. The configuration is persisted in CloudFormation templates that can be checked into source control and shared with other developers. When you add these features to your app via the Amplify libraries, the library makes the necessary calls to AWS services. For example, 'amplify add analytics' will configure Amazon Pinpoint. Then, when you use the Analytics APIs from the Amplify library in your app, the necessary calls will be made to Pinpoint.
/amplify/faqs/;How is AWS Amplify related to the AWS Mobile SDKs for iOS and Android?;Amplify iOS and Amplify Android are the recommended ways to build iOS and Android apps that leverage AWS services, whether or not you have configured them using the Amplify CLI. Get started here. If your app is already built using the previous AWS Mobile SDKs for iOS and Android, documentation is available here. /amplify/faqs/;What is Amplify Studio?;Amplify Studio is a visual interface for configuring and maintaining app backends and creating frontend UIs outside the AWS console. Once you've launched your app, Amplify Studio also enables developers and non-developers to manage app content and users. /amplify/faqs/;Why is Amplify Studio outside the AWS console?;Amplify Studio is accessible outside the AWS console to provide front-end developers new to AWS the opportunity to engage with AWS tools more quickly and efficiently. Amplify Studio provides a simplified view of the features needed to build a cloud-connected web or mobile app, both the backend and frontend UI. Amplify Studio also provides easy access for non-developers (QA testers, PMs) to manage the app content and users without requiring developers to figure out the right IAM roles and policies. /amplify/faqs/;What is the Amplify console and how is it different from the Amplify Studio?;The Amplify console is the control center for your app inside the AWS management console. The AWS Amplify console shows you all the front-end environments and backend environments for your apps, whereas Amplify Studio has a unique instance tied to each individual backend environment. /amplify/faqs/;What is AWS Amplify's web hosting service?;In addition to AWS Amplify's development tools and features, AWS Amplify offers a fully managed hosting service for web apps and static websites that can be accessed directly from the AWS console. AWS Amplify's static web hosting service provides a complete workflow for building, deploying, and hosting single page web apps or static sites with serverless backends. Continuous deployment allows developers to deploy updates to their web app on every code commit to their Git repository. When the build succeeds, the app is deployed and hosted on an amplifyapp.com subdomain. Developers can connect their custom domain to start receiving production traffic. /amplify/faqs/;What type of web apps can I build and deploy?;AWS Amplify's static web hosting service is designed for single page web apps (SPAs) and static sites with serverless backends. Continuous deployment deploys updates on every code commit to your Git repository, and the app is hosted on an amplifyapp.com subdomain or on a custom domain that you connect. /amplify/faqs/;How do I get started with AWS Amplify web hosting?;To get started, go to AWS Amplify in the AWS console and connect your source repository. AWS Amplify automatically determines the front-end framework used, and then builds and deploys the app to a globally available content delivery network (CDN).
Amplify detects backend functionality added using the Amplify CLI or Amplify Studio, and can deploy the necessary AWS resources in the same deployment as the front end. AWS Amplify will build and deploy your web app quickly, and host your web app on a globally available content delivery network (CDN) with a friendly URL (example: https://master.appname.amplifyapp.com). To get started, go to AWS Amplify on the AWS console. /amplify/faqs/;What is an AWS Amplify 'app'?;An AWS Amplify 'app' is your project container. Each app project contains a list of branches you have connected from your source repository. You can connect additional feature branches, a custom domain, or access your build logs from your app project. /amplify/faqs/;What is continuous deployment?;Continuous deployment is a DevOps strategy for software releases where every code commit to a repository is automatically released to a production or staging environment. This practice reduces time to market by ensuring that your hosted web app is always a reflection of the latest code in your repository. /amplify/faqs/;What Git source code providers does AWS Amplify static web hosting support?;You can connect private and public repositories from GitHub, BitBucket, GitLab, and AWS CodeCommit. /amplify/faqs/;Does AWS Amplify web hosting store my Git access tokens?;AWS Amplify never stores access tokens from repositories. Once you authorize AWS Amplify, we fetch an access token from your source provider. We simply pass the token to our console, and from then on, all communication with the GitHub API happens straight from the browser. After configuring continuous deployment, the token is permanently discarded. /amplify/faqs/;Does AWS Amplify web hosting support private Git servers?;We currently do not support private Git servers. /amplify/faqs/;What are environment variables? How do I use them?;Environment variables are configurations required by apps at runtime. These configurations could include database connection details, third-party API keys, different customization parameters and secrets. The best way to expose these configurations is to do so with environment variables. You can add environment variables when creating an app or by going to the app settings. All environment variables are encrypted to prevent rogue access. Add all your app environment variables in the key and value textboxes. By default, AWS Amplify applies the environment variables across all branches, so you don't have to re-enter variables when you connect a new branch. Once you enter all the variables, hit Save. /amplify/faqs/;What happens when a build is run?;AWS Amplify will create a temporary compute container (4 vCPU, 7GB RAM), download the source code, run the commands configured in the project, deploy the generated artifact to a web hosting environment, and then destroy the compute container. During the build, AWS Amplify will stream the build output to the service console. /amplify/faqs/;How can I leverage AWS Amplify web hosting to work with multiple environments?;AWS Amplify leverages Git’s branching model to create new environments every time a developer pushes code to a new branch. In typical development teams, developers deploy their ‘master’ branch to production, keep the ‘dev’ branch as staging, and create feature branches when working on new functionality. AWS Amplify Console can create frontend and backend environments linked to each connected branch.
This allows developers to work in sandbox environments, and use ‘Git’ as a mechanism to merge code and resolve conflicts. Changes are automatically pushed to production once they are merged into the master (or production) branch. /amplify/faqs/;What are atomic deployments?;Every deployment is atomic, which means the site is ready to view after the deployment is complete. Atomic deployments eliminate maintenance windows by ensuring the web app is only updated once the entire deploy has finished. The new version of the web app is then made available instantly to end-users, without the developer having to invalidate CDN caches. /amplify/faqs/;How is hosting a modern web app different from a traditional web app?;Hosting a modern web app does not require web servers and can use content delivery networks to store static content (HTML, CSS and JavaScript files). AWS Amplify leverages the Amazon CloudFront Global Edge Network to distribute your web app globally. /amplify/faqs/;How do I connect my custom domain?;Connecting your custom domain is easy – if your domain is registered on Route53, simply pick it from a dropdown and AWS Amplify will automatically configure the DNS records to point the apex and ‘www’ subdomain to your website. Additionally, we automatically create subdomains for all branches that are connected. For example, connecting a ‘dev’ branch creates a deployment at https://dev.appname.amplifyapp.com. As part of the custom domain setup, we generate a free HTTPS certificate to ensure traffic to your site is secure. /amplify/faqs/;What domain registrars does AWS Amplify web hosting support?;Domains purchased through all domain registrars can be connected to an app by defining a custom domain. For developers using Amazon Route53 as their registrar, AWS Amplify automatically updates the DNS records to point to their deployed app. For 3rd party registrars, AWS Amplify provides instructions on how to update their DNS records. /amplify/faqs/;Is all web traffic served over HTTPS?;AWS Amplify web hosting generates a free HTTPS certificate on all sites and will enable it automatically on all Route53-managed domains. The SSL certificate is generated by AWS Certificate Manager (ACM) and has wildcard domain support. ACM handles the complexity of creating and managing public SSL/TLS certificates for your AWS based websites and applications. With the wildcard option, the main domain and all subdomains can be covered by a single certificate. /amplify/faqs/;Can I password protect my web deployments?;All web deployments can be password protected with basic access authentication. When working on new features, developers can share updates with internal stakeholders by setting up a username and password for a branch deployment. /amplify/faqs/;What are redirects and rewrites? How do I use them?;A redirect is a client-side request to have the web browser go to another URL. This means that the URL that you see in the browser will update to the new URL. A rewrite is a server-side rewrite of the URL. This will not change what you see in the browser because the changes are hidden from the user. Reverse proxies are cross-origin rewrites. From the AWS Amplify console settings, developers can specify redirects, HTTP response code, custom 404s, and proxies to external services. /amplify/faqs/;How will I be charged for my use of AWS Amplify Hosting?;AWS Amplify web hosting is priced for two features – build & deploy, and web hosting. For the build & deploy feature the price per build minute is $0.01.
For the hosting feature, the price per GB served is $0.15 and the price per GB stored is $0.023. With the AWS Free Usage Tier, you can get started for free. Upon sign up, new AWS customers receive 1,000 build minutes per month for the build and deploy feature, and 15 GB served per month and 5 GB data storage per month for the hosting feature. /amplify/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /amplify/faqs/;Are prices different per region?;Prices are the same across all regions. /appsync/faqs/;What application developer languages are supported in AWS AppSync?;AWS AppSync SDKs support iOS, Android, and JavaScript. The JavaScript support spans web frameworks such as React and Angular as well as technologies such as React Native and Ionic. You can also use open source clients to connect to the AppSync GraphQL endpoint from other platforms, such as generic HTTP libraries or even a simple cURL command. /appsync/faqs/;What is GraphQL?;GraphQL is a data language to enable client apps to fetch, change and subscribe to data from servers. In a GraphQL query, the client specifies how the data is to be structured when it is returned by the server. This makes it possible for the client to query only for the data it needs, in the format that it needs it in. /appsync/faqs/;What is a GraphQL Schema?;A GraphQL schema is a definition of what data capabilities are available for the client application to operate on. For example, a schema might say what queries are available or how an app can subscribe to data without needing to know about the underlying data source. Schemas are defined by a type system, which an application's data model can leverage. /appsync/faqs/;Do I need to know GraphQL to get started?;No, AWS AppSync can automatically set up your entire API and schema, and connect data sources with a simple UI builder that allows you to type in your data model in seconds. You can then immediately begin using the endpoint in a client application. The console also provides many sample schemas and data sources for fully functioning applications. /appsync/faqs/;Can I use AWS AppSync with my existing AWS resources?;Yes. With AWS AppSync you can use existing tables, functions, and domains from Amazon DynamoDB, AWS Lambda and Amazon OpenSearch Service with a GraphQL schema. AWS AppSync allows you to create data sources using existing AWS resources and configure the interactions using Mapping Templates. /appsync/faqs/;What is a Mapping Template?;"GraphQL requests execute as ""resolvers"" and need to be converted into the appropriate message format for the different AWS Services that AWS AppSync integrates with. For example, a GraphQL query on a field will need to be converted into a unique format for Amazon DynamoDB, AWS Lambda, and Amazon OpenSearch Service respectively. AWS AppSync provides Mapping Templates for this, which are written in Apache Velocity Template Language (VTL), allowing you to provide custom logic to meet your needs. AWS AppSync also provides built-in templates for the different services and utility functions for enhanced usability." /appsync/faqs/;I don't want to use VTL. Is it required?;No. You can configure your resolvers as Direct Lambda resolvers. This will enable you to bypass the VTL mapping templates and use a Lambda function in your account to drive your business logic.
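As a rough illustration of the AppSync concepts above (a GraphQL API, a schema, and an API key authorization mode), here is a minimal boto3 sketch; the API name and schema are hypothetical examples rather than anything defined in this FAQ:

```python
import boto3

appsync = boto3.client("appsync")

# Create a GraphQL API secured with an API key (IAM and Amazon Cognito
# User Pools are other supported authorization modes).
api = appsync.create_graphql_api(
    name="notes-api",                  # hypothetical API name
    authenticationType="API_KEY",
)
api_id = api["graphqlApi"]["apiId"]

# Upload a minimal schema; AppSync validates it asynchronously, and
# get_schema_creation_status can be polled for the result.
schema = b"""
type Note {
  id: ID!
  content: String
}
type Query {
  getNote(id: ID!): Note
}
schema {
  query: Query
}
"""
appsync.start_schema_creation(apiId=api_id, definition=schema)
```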
/appsync/faqs/;How is data secured with AWS AppSync?;Application data is stored at rest in your AWS account and not in the AWS AppSync service. You can protect access to this data from applications by using security controls with AWS AppSync, including AWS Identity and Access Management (IAM), as well as Amazon Cognito User Pools. Additionally, user context can be passed through for authenticated requests so that you can perform fine-grained access control logic against your resources with Mapping Templates in AWS AppSync. /appsync/faqs/;Can I make my data real-time with AWS AppSync?;Yes. Subscriptions are supported with AWS AppSync against any of the data sources, so that when a mutation occurs, the results can be passed down to clients subscribing to the event stream immediately over WebSockets. /appsync/faqs/;What AWS Regions are available for AWS AppSync?;AWS AppSync is available in different regions around the globe; please refer to the AWS Regions table for more details. /appsync/faqs/;Can I import existing Amazon DynamoDB tables?;AWS AppSync can automatically generate a GraphQL schema from an existing DynamoDB table, including the inference of your table’s key schema and indexes. Once the import is complete, GraphQL queries, mutations, and subscriptions can be used with zero coding. AppSync will also “auto-map” non-key attributes from your GraphQL types to DynamoDB attributes. /appsync/faqs/;Can AWS AppSync create a database for me?;"Customers can create a GraphQL schema, either by hand or using the console, and AWS AppSync can automatically provision Amazon DynamoDB tables and appropriate indexes for you. Additionally, it will connect the data sources to ""GraphQL resolvers"" allowing you to just focus on your application code and data structures." /appsync/faqs/;What clients can I use to connect my application to my AppSync API?;You can use any HTTP or GraphQL client to connect to a GraphQL API on AppSync. We do recommend using the Amplify clients, which are optimized to connect to the AppSync backend. There are some options depending on your application's use case: /appsync/faqs/;Can I use my own domain name to access my AppSync GraphQL endpoint?;AWS AppSync enables customers to use custom domain names with their AWS AppSync API to access their GraphQL endpoint and real-time endpoint. To create a custom domain name in AppSync, you simply provide a domain name you own and indicate a valid AWS Certificate Manager (ACM) certificate that covers your domain. Once the custom domain name is created, you can associate the domain name with any available AppSync API in your account. After you have updated your DNS record to map to the AppSync-provided domain name, you can configure your applications to use the new GraphQL and real-time endpoints. You can change the API association on your custom domain at any time without having to update your applications. When AppSync receives a request on the custom domain endpoint, it routes it to the associated API for handling. /device-farm/faqs/;What is AWS Device Farm?;AWS Device Farm allows developers to improve application quality, reduce time to market, and increase customer satisfaction by testing and interacting with real Android and iOS devices in the AWS Cloud. Developers can upload their app and test scripts and run automated tests in parallel across 100s of real devices, getting results, screenshots, video, and performance data in minutes.
They can also debug and reproduce customer issues by swiping, gesturing, and interacting with a device through their web browser. /device-farm/faqs/;Who should use AWS Device Farm and why?;AWS Device Farm is designed for developers, QA teams, and customer support representatives who are building, testing, and supporting mobile apps to increase the quality of their apps. Application quality is increasingly important, and also increasingly complex to achieve, due to the number of device models, variations in firmware and OS versions, carrier and manufacturer customizations, and dependencies on remote services and other apps. AWS Device Farm accelerates the development process by executing tests on multiple devices, giving developers, QA, and support professionals the ability to perform automated tests and manual tasks like reproducing customer issues, exploratory testing of new functionality, and executing manual test plans. AWS Device Farm also offers significant savings by eliminating the need for internal device labs, lab managers, and automation infrastructure development. /device-farm/faqs/;What types of apps does AWS Device Farm support?;AWS Device Farm supports native and hybrid Android, iOS, and web apps, and cross-platform apps including those created with PhoneGap, Titanium, Xamarin, Unity, and other frameworks. /device-farm/faqs/;Does AWS Device Farm use simulators or emulators?;AWS Device Farm tests are run on real, non-rooted devices. The devices are a mixture of OEM and carrier-branded devices. /device-farm/faqs/;How do I get started with AWS Device Farm?;Please see our getting started guide. /device-farm/faqs/;Which browsers does the AWS Device Farm console support?;AWS Device Farm works on Internet Explorer 9 or later and the latest versions of Chrome, Firefox, and Safari. /device-farm/faqs/;Which browsers are supported for testing web applications?;Your web applications will be tested in Chrome on Android and Safari on iOS. /device-farm/faqs/;What is the maximum file size for apps and tests?;AWS Device Farm supports files up to 4 GB. /device-farm/faqs/;Do I need to instrument my app or supply source code?;No instrumentation or source code is required to use the built-in tests. Android apps can be submitted as is. iOS apps should be built with “iOS Device” as the target instead of a simulator. /device-farm/faqs/;Do you store my app, tests, and other files on your servers? For how long?;Apps and test packages are automatically removed after 30 days. Logs, video recordings, and other artifacts are stored for 400 days. You can also choose to delete files and results at any time through the AWS Device Farm console or API. /device-farm/faqs/;How do you clean up devices after my testing is completed?;After test execution completes, we perform a series of cleanup tasks on each device, including uninstallation of your app. If we cannot verify uninstallation of your app or any of the other cleanup steps, the device will be removed and will no longer be available. /device-farm/faqs/;Do you modify my app?;On iOS, we replace the embedded provisioning profile with a wildcard profile and resign the app. If you provide auxiliary data, we will add it to the application package before installation so the data will be present in your app’s sandbox. Resigning the iOS app results in the removal of certain entitlements.
This includes App Group, Associated Domains, Game Center, HealthKit, HomeKit, Wireless Accessory Configuration, In-App Purchase, Inter-App Audio, Apple Pay, Push Notifications, and VPN Configuration & Control. /device-farm/faqs/;Which devices are available in AWS Device Farm? How do you select the devices in your fleet?;AWS Device Farm has a large (and growing) selection of Android, iOS, and Fire OS devices. We add popular new devices as they are released by manufacturers. We also add new devices as new OS versions are released. See the list of available devices. /device-farm/faqs/;Does AWS Device Farm have international devices from markets like Europe, China, and India?;We currently have international devices from India. We use market data and customer feedback to continuously update the fleet. If you would like to see a device that isn’t in our fleet, please let us know. /device-farm/faqs/;How do I select devices? Can I retest on the same device?;For Automated Testing, devices are selected through a collection called a device pool. Some curated device pools are provided automatically, but you can create your own pools, too. During execution, tests will be run against all devices in the specified pool that are compatible with your application and tests. For Remote Access, you select the desired device based on make, model, carrier variant, and operating system version. You can then optionally upload apps and other data as well as configure other device settings. Device Farm then locates an available device matching your request and displays the device’s display in your browser. You can then interact with the device and capture screenshots and video. /device-farm/faqs/;Are any apps pre-installed on AWS Device Farm test devices?;Yes, test devices will have a number of apps pre-installed by the device manufacturer or carrier. /device-farm/faqs/;Are devices able to communicate with other services or systems that are available on the Internet?;Yes. All devices have a WiFi connection with Internet access. If your systems are internal (that is, behind a corporate firewall), you can whitelist the IP range 54.244.50.32-54.244.50.63. All device traffic will come from those IPs. /device-farm/faqs/;Can I test different carrier connections and conditions?;"While you can't test actual carrier connections, you can simulate connection types and conditions using the network shaping functionality. When scheduling a run, you can select a curated network profile like ""3G"" or ""Lossy LTE,"" or you can create your own, controlling parameters like throughput, jitter, and loss. All WiFi traffic from the device will be shaped and manipulated for the duration of your tests according to the profile you choose. You can also simulate dynamic environments by changing network parameters from your test scripts." /device-farm/faqs/;Can I make phone calls or send SMS from the devices?;No, devices do not have carrier connections and cannot make phone calls or send SMS messages. /device-farm/faqs/;Can I use the device camera?;Yes, you can use the device cameras, both front- and rear-facing. Due to the way the devices are mounted, images and videos may look dark and blurry. /device-farm/faqs/;I don’t have any automated test scripts yet. What do the built-in tests do?;The built-in compatibility test suite allows you to install, uninstall, launch, and run Fuzz on the app. /device-farm/faqs/;What does Fuzz do?;Fuzz will perform fuzz testing on your UI immediately after launch. 
It streams random user input (touches, swipes, keyboard input) in a rapid fashion to your app. You can configure the number of events, the delay between events, and the seed used to randomize events. Using the same seed across test runs will result in the same sequence of events. /device-farm/faqs/;I test using an automation framework. Which frameworks do you support?;For testing iOS, Android, and FireOS apps, we currently support Appium Java JUnit, Appium Java TestNG, Appium Python, Calabash, Instrumentation (Including JUnit, Espresso, Robotium, and any instrumentation-based tests), UI Automation, UI Automator, and XCTest (Including XCUI and KIF). For more information and updated list, visit our documentation. /device-farm/faqs/;Which test frameworks do you support for web applications?;You can run tests written in Appium Java JUnit, Appium Java TestNG, or Appium Python. /device-farm/faqs/;Can you add support for a modified framework or one I designed myself?;We’re always evaluating frameworks to support. Please contact us. /device-farm/faqs/;How does AWS Device Farm decide when to take a screenshot during a test?;If you use one of the supported automation frameworks, you are in full control and can decide when to take screenshots. Those screenshots are included in your reports automatically. /device-farm/faqs/;Android: Is Google Play Services available on your devices? Which version is installed?;Yes, Google Play Services is installed on devices that support it. The services are updated as new versions become available. /device-farm/faqs/;Android: Is there a default Google account on the devices?;No, devices do not have an active Google account. /device-farm/faqs/;Does AWS Device Farm support record and playback automation or do I have to write my scripts?;AWS Device Farm supports frameworks like Espresso and Robotium that have record and playback scripting tools. /device-farm/faqs/;iOS: Do I need to add your UDIDs to my provisioning profile?;No, AWS Device Farm will automatically replace a provisioning profile and resign your app so it can be deployed on our devices. /device-farm/faqs/;iOS: My app does not contain debug symbols. Can I supply a dSYM file to AWS Device Farm?;No, but you can download the logs and symbolicate the stack traces locally. /device-farm/faqs/;Android: My app is obfuscated. Can I still test my app on AWS Device Farm?;Yes, if you use ProGuard. If you use DexGuard with anti-piracy measures, we are unable to re-sign the app and run tests against it. /device-farm/faqs/;My app serves ads. Will they be displayed on your devices? Will my ad provider flag this as abuse and ban my account?;Although devices have access to the Internet, we make no guarantee that ads will be displayed. We recommend that you remove ads from the builds tested on AWS Device Farm. /device-farm/faqs/;Can I access the machine hosting the device or access its shell as part of my tests? Can I reach the Internet from it?;Yes. If you’re using a client-server framework like Calabash, Appium, or UI Automation, you can access the Internet and execute limited shell commands from the host. /device-farm/faqs/;I’d like to supply media or other data for my app to consume. How do I do that?;"You can provide a .zip archive up to 4 GB in size. On Android, it will be extracted to the root of external memory; on iOS, to your app’s sandbox. For Android expansion files (OBB), we will automatically place the file into the location appropriate to the OS version. For more information, see the Developer Guide." 
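The built-in Fuzz test and its configurable event count, event delay, and seed described above can also be scheduled programmatically. A minimal boto3 sketch, with every ARN a placeholder for an existing Device Farm project, uploaded app, and device pool:

```python
import boto3

# Device Farm's API is served from the us-west-2 region.
df = boto3.client("devicefarm", region_name="us-west-2")

df.schedule_run(
    projectArn="arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE",
    appArn="arn:aws:devicefarm:us-west-2:123456789012:upload:EXAMPLE",
    devicePoolArn="arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE",
    name="fuzz-smoke-test",
    test={
        "type": "BUILTIN_FUZZ",
        # Fuzz parameters: number of events, delay between events (ms),
        # and the seed used to randomize the event sequence.
        "parameters": {"event_count": "2000", "throttle": "50", "seed": "1234"},
    },
)
```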
/device-farm/faqs/;My app requires dependencies to test all functionality. Can I install other apps?;Yes, you can select multiple apps and the order in which to install them. These dependent apps will be installed before your tests begin. /device-farm/faqs/;Can I test upgrade flows for my app? How do I install an old version of my app?;Yes, in order to test your upgrade flow, you can upload and install an old version of your app before the new version is installed and tested. /device-farm/faqs/;My app makes use of location services. Can I specify the physical location of the device?;Yes, you can supply latitude and longitude coordinates that will be used to override a device’s GPS. /device-farm/faqs/;Can I run localization tests? How do I change the language of the device?;Yes, you can provide a locale (for example, “en_US”) to override the default locale setting on a device. /device-farm/faqs/;How long does it take before my test starts?;Tests are immediately queued for execution and usually start within minutes. If one or more devices are not available, test execution for those devices will remain queued until they become available. Testing on the other devices in your test run will continue. /device-farm/faqs/;What is the maximum test time allowed?;The maximum time allowed is 150 minutes. /device-farm/faqs/;Does AWS Device Farm provide a way to run tests and get results through an API?;Yes. We have a plug-in for the Jenkins continuous integration environment and a Gradle plugin compatible with Android Studio. AWS Device Farm also provides programmatic support for all console features, including setting up a test and downloading test results through an API. For more information, see the AWS Device Farm API Reference. In addition to the API, you can access AWS Device Farm from the AWS SDKs. /device-farm/faqs/;What’s in an AWS Device Farm test report?;AWS Device Farm test reports contain pass/fail information, crash reports, test logs, device logs, screenshots, videos, and performance data. Reports include both detailed per-device data and high-level results like the number of occurrences of a given error. Remote Access results contain logs and a video of the session. /device-farm/faqs/;Which device logs are included in an AWS Device Farm report?;AWS Device Farm reports include complete logcat (Android) and device logs (iOS), as well as logs from the device host and specified test framework. /device-farm/faqs/;My tests generate and save additional log files. Will I see them in my AWS Device Farm reports?;If you write data to logcat (Android) or the device log (iOS), those log entries will be included in the report. AWS Device Farm does not collect any non-standard logs or other artifacts, although you may transfer files via your test script using the device's or device host's Internet connection. /device-farm/faqs/;How much does AWS Device Farm cost?;Pricing is based on device minutes, which are determined by the duration of tests on each selected device. AWS Device Farm comes with a free trial of 1000 device minutes.* After that, customers are charged $0.17 per device minute. As your testing needs grow, you can opt for an unmetered testing plan, which allows unlimited testing for a flat monthly fee of $250 per device. /device-farm/faqs/;How does the free trial work?;Your first 1000 device minutes are provided free of charge. This is a one-time trial and does not renew. Once your trial allocation is depleted, you will be billed at the standard rate of $0.17 per device minute. 
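Building on the API support mentioned above, a hedged sketch of polling a run until it finishes and then listing its downloadable artifacts with boto3; the run ARN is a placeholder:

```python
import time
import boto3

df = boto3.client("devicefarm", region_name="us-west-2")
run_arn = "arn:aws:devicefarm:us-west-2:123456789012:run:EXAMPLE"  # placeholder

# Wait for the run to complete, then report the overall result.
while True:
    run = df.get_run(arn=run_arn)["run"]
    if run["status"] == "COMPLETED":
        print("Result:", run["result"])
        break
    time.sleep(30)

# List downloadable files (logs, screenshots, video) produced by the run.
for artifact in df.list_artifacts(arn=run_arn, type="FILE")["artifacts"]:
    print(artifact["name"], artifact["url"])
```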
/device-farm/faqs/;What is a device minute?;A device minute is the billing unit. Device minutes are a measurement of the time it takes (in minutes) to install, execute, and uninstall your app and tests on every device you have selected for your test run. The unit price is constant regardless of the device, test, or application type. Device minutes are only billed for tests that complete without any device or system errors. Similarly, for Remote Access sessions, device minutes are measured from the time it takes to prepare a device to your specification to completely removing any apps and data you placed on the device. /device-farm/faqs/;What is the unmetered plan and how do device slots work?;Unmetered plans allow unlimited testing and remote access starting at $250 per month. Unmetered pricing is based on the number of device slots you purchase for each usage type (i.e. automated test or remote access) and device family (i.e. Android or iOS) and is priced at $250 per slot per month. Device slots correspond to concurrency. /device-farm/faqs/;What if my testing needs change and I need to add or remove device slots?;You can add device slots at any time and they will be available to you immediately. You can also cancel your subscription for one or more device slots at any time and the cancellation will take effect at your next renewal date (the day of the month that you purchased your first active device slot). /device-farm/faqs/;If I'm on an unmetered plan, can I still make use of metered billing?;Yes. When creating a run, you can choose to make use of your unmetered device slots or use metered device minutes instead. Because concurrency is not limited on metered billing, this gives you the flexibility of running tests faster than would otherwise happen using your device slots. /device-farm/faqs/;What is a private device?;A private device is a physical instance of a phone or tablet that is exclusive to your account. Private devices can have custom, static configurations and run custom OS images. Each device is deployed on your behalf and removed at the end of your subscription. /device-farm/faqs/;How do private device subscriptions work and how are they priced?;Each private device under your account is considered a private device subscription. The monthly subscription price is tiered based on the cost of the device and starts at $200/month. After the minimum subscription period, you can choose to cancel your subscription at any time. Please contact us for more information. /device-farm/faqs/;Can I use both private devices and public devices?;Yes. When selecting devices for a test run or remote access session you will see your private devices as well as public devices. You can also create device pools comprised of both private and public devices. For more information about private devices, please contact us. /device-farm/faqs/;What is Selenium?;Selenium is an open-source framework that automates web browser interaction. You can learn more about Selenium here. /device-farm/faqs/;What is Desktop Browser Testing on AWS Device Farm?;Device Farm enables you to execute your Selenium tests on different desktop browsers and browser versions that are hosted in the AWS Cloud. 
Device Farm follows a client-side execution model for Selenium testing; that is, your tests execute on your own local machine but interact with browsers hosted on AWS Device Farm through the Selenium API. /device-farm/faqs/;How do I get started with Desktop Browser Testing on AWS Device Farm?;To get started, please see our getting started guide here. /device-farm/faqs/;What operating system are the browsers hosted on?;All browsers are hosted on EC2 Windows instances that run Microsoft Windows Server. /device-farm/faqs/;What desktop browsers does AWS Device Farm support?;You can see the list of desktop browsers and browser versions supported here. /device-farm/faqs/;What desired capabilities does AWS Device Farm support?;You can see the list of Selenium desired capabilities that Device Farm supports here. /device-farm/faqs/;What artifacts are available for troubleshooting test failures?;Device Farm generates console logs, web driver logs, action logs, and video recordings of the entire test to help you troubleshoot test failures. /device-farm/faqs/;Can I use AWS Device Farm to test my web app on real mobile devices?;Yes. Device Farm supports testing web apps on real mobile devices using Appium. Visit our developer guide for Appium Web Testing to learn more. Please note that for testing on real mobile devices, Device Farm follows a server-side execution model and hence you will need to upload your tests to the service. /device-farm/faqs/;What are the limits of Desktop Browser Testing on AWS Device Farm?;You can find all limits for this feature here. /device-farm/faqs/;How much does Desktop Browser Testing on AWS Device Farm cost?;Pricing is based on instance minutes, which are determined by the duration of tests on each selected browser instance. You will be charged $0.005 per browser instance minute. /device-farm/faqs/;What is an instance minute?;An instance minute is the billing unit for Desktop Browser Testing on Device Farm. Instance minutes are a measurement of the time it takes (in minutes) to execute your tests on every browser instance you have selected for your test run. The unit price of $0.005 is constant regardless of the browser or browser version you have selected. We do not charge you for the time it takes to launch, initialize, or tear down the EC2 instances that host the browsers. /sumerian/faqs/;What is Amazon Sumerian transitioning to?;You can use this template as a starting point: https://github.com/aws-samples/aws-tools-for-babylonjs-editor /sumerian/faqs/;I can’t access the Amazon Sumerian Dashboard. What do I do?;If you are an existing customer of Amazon Sumerian and are unable to access the dashboard, please verify your region. If you are still unable to access your existing scenes, please contact customer support. /sumerian/faqs/;I have existing scenes in Amazon Sumerian. How do I transition them?;To learn more, visit Amazon Sumerian documentation. /amazon-mq/faqs/;What is Amazon MQ?;Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers in the cloud. You get direct access to the ActiveMQ and RabbitMQ consoles and industry standard APIs and protocols for messaging, including JMS, NMS, AMQP 1.0 and 0.9.1, STOMP, MQTT, and WebSocket. You can easily move from any message broker that uses these standards to Amazon MQ because you don’t have to rewrite any messaging code in your applications. 
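As an illustrative sketch of the client-side execution model described in the desktop browser testing entries above, the following runs a local Selenium test against a browser hosted by Device Farm. The TestGrid project ARN is a placeholder, and the sketch assumes boto3 and Selenium 4 are installed locally.

    import boto3
    from selenium import webdriver

    df = boto3.client("devicefarm", region_name="us-west-2")

    # Ask Device Farm for a short-lived remote WebDriver endpoint (placeholder project ARN).
    grid = df.create_test_grid_url(
        projectArn="arn:aws:devicefarm:us-west-2:123456789012:testgrid-project:EXAMPLE",
        expiresInSeconds=300,
    )

    # The test logic runs locally; the browser session runs on a Device Farm-hosted instance.
    driver = webdriver.Remote(command_executor=grid["url"], options=webdriver.ChromeOptions())
    try:
        driver.get("https://aws.amazon.com/device-farm/")
        print(driver.title)
    finally:
        driver.quit()  # ends the remote browser session (and the metered instance minutes)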
/amazon-mq/faqs/;Who should use Amazon MQ?;Amazon MQ is suitable for enterprise IT pros, developers, and architects who are managing a message broker themselves–whether on-premises or in the cloud–and want to move to a fully managed cloud service without rewriting the messaging code in their applications. /amazon-mq/faqs/;What does Amazon MQ manage on my behalf?;Amazon MQ manages the work involved in setting up a message broker, from provisioning the infrastructure capacity you request–including broker instances and storage–to installing the broker software. Once your broker is up and running, Amazon manages ongoing software upgrades, security updates, and fault detection and recovery. Amazon MQ stores messages redundantly across multiple Availability Zones (AZs) for message durability. With active/standby brokers, Amazon MQ automatically fails over to a standby instance in the event of a failure so you can continue sending and receiving messages. /amazon-mq/faqs/;When would I use Amazon MQ vs. managing ActiveMQ or RabbitMQ on Amazon EC2 myself?;The choice depends on how closely you want to manage your message broker and underlying infrastructure. Amazon MQ provides a managed message broker service that takes care of operating your message broker, including setup, monitoring, maintenance, and provisioning the underlying infrastructure for high availability and durability. You may want to consider Amazon MQ when you want to offload operational overhead and associated costs. If you want greater control in order to customize features and configurations or to use custom plugins, you may want to consider installing and running your message broker on Amazon EC2 directly. /amazon-mq/faqs/;How do I migrate if I'm using a message broker other than ActiveMQ or RabbitMQ?;Amazon MQ provides compatibility with the most common messaging APIs, such as Java Message Service (JMS) and .NET Message Service (NMS), and protocols, including AMQP, STOMP, MQTT, and WebSocket. This makes it easy to switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. In most cases, you can simply update the endpoints of your Amazon MQ broker to connect to your existing applications, and start sending messages. /amazon-mq/faqs/;How does Amazon MQ work with other AWS services?;Any application that runs on an AWS compute service, such as Amazon EC2, Amazon ECS, or AWS Lambda, can use Amazon MQ. Amazon MQ is also integrated with the following AWS services: /amazon-mq/faqs/;How can I get started with Amazon MQ?;Amazon MQ makes it easy to set up and operate message brokers in the cloud. With Amazon MQ, you can use the AWS Management Console, CLI, or API calls to launch a production-ready message broker in minutes. In most cases, you can simply update the endpoints of your Amazon MQ broker to connect to your existing applications and start sending messages. /amazon-mq/faqs/;How am I charged for Amazon MQ?;With Amazon MQ, you pay only for what you use. You are charged for broker instance and storage usage, and standard data transfer fees. It’s easy to get started with Amazon MQ with our free tier for one year. See Amazon MQ pricing for details. /amazon-mq/faqs/;Does Amazon MQ meet compliance standards?;Yes. Amazon MQ is HIPAA eligible, and meets standards for PCI, SOC, and ISO compliance. /amazon-mq/faqs/;When should I use Amazon MQ vs. 
Amazon SQS and SNS?;Amazon MQ, Amazon SQS, and Amazon SNS are messaging services that are suitable for anyone from startups to enterprises. If you're using messaging with existing applications, and want to move your messaging to the cloud quickly and easily, we recommend you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. If you are building brand new applications in the cloud, we recommend you consider Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. You can use Amazon SQS and SNS to decouple and scale microservices, distributed systems, and serverless applications, and improve reliability. /amazon-mq/faqs/;When should I use Amazon MQ vs. AWS IoT Message Broker?;You can use Amazon MQ when you want to offload operational overhead and associated costs with an open source messaging application such as ActiveMQ or any commercial message brokers. You can use Amazon MQ when you are migrating from commercial brokers or open source brokers such as ActiveMQ to reduce broker maintenance, licensing costs and improve broker stability. Amazon MQ is also suitable for Application Integration use cases where you are developing new cloud based applications using micro-services that communicate with complex messaging patterns and require low-latency, high availability and message durability. Amazon MQ supports industry standard APIs such as JMS and NMS, and protocols for messaging, including AMQP, STOMP, MQTT, and WebSocket. /amazon-mq/faqs/;How do I use my own custom keys to encrypt the data in Amazon MQ?;Amazon MQ supports the AWS Key Management Service (AWS KMS) to create and manage keys for at-rest encryption of your data in Amazon MQ. When you create a broker, you can select the KMS key used to encrypt your data for Amazon MQ for ActiveMQ from the following three options: a KMS key in the Amazon MQ service account, a KMS key in your account that Amazon MQ creates and manages, or a KMS key in your account that you create and manage. For Amazon MQ for RabbitMQ, a KMS key in the Amazon MQ service account is used. In addition to encryption at rest, all data transferred between Amazon MQ and client applications is securely transmitted using TLS/SSL. /amazon-mq/faqs/;How can I monitor my broker instances, queues, and topics?;Amazon MQ and Amazon CloudWatch are integrated so you can view and analyze metrics for your broker instances, as well as your queues and topics. You can view and analyze metrics from the Amazon MQ console, the CloudWatch console, the command line, or programmatically. Metrics are automatically collected and pushed to CloudWatch every minute. /amazon-mq/faqs/;Does Amazon MQ have a Service Level Agreement?;"Yes. AWS will use commercially reasonable efforts to make Active/Standby ActiveMQ Brokers, and RabbitMQ Clusters, available with a Monthly Uptime Percentage of at least 99.9% during any monthly billing cycle (the ""Service Commitment""). In the event Amazon MQ does not meet the Monthly Uptime Percentage commitment, you will be eligible to receive a Service Credit. For details, please review the full Amazon MQ Service Level Agreement." 
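As an illustrative sketch of the getting-started and encryption entries above (not an official example), the following launches an active/standby ActiveMQ broker with boto3 and selects a customer-managed KMS key for at-rest encryption. The engine version, instance type, KMS key ARN, and credentials are placeholders to adjust for your account.

    import boto3

    mq = boto3.client("mq")

    response = mq.create_broker(
        BrokerName="example-broker",
        EngineType="ACTIVEMQ",
        EngineVersion="5.17.6",                    # placeholder: pick a currently supported version
        HostInstanceType="mq.m5.large",
        DeploymentMode="ACTIVE_STANDBY_MULTI_AZ",  # standby instance enables automatic failover
        PubliclyAccessible=False,
        AutoMinorVersionUpgrade=True,
        EncryptionOptions={
            "UseAwsOwnedKey": False,
            "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",  # your customer-managed key
        },
        Users=[{"Username": "admin", "Password": "example-password-1234"}],
    )
    print(response["BrokerArn"], response["BrokerId"])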
/amazon-mq/faqs/;What type of storage is available with Amazon MQ for ActiveMQ?;Amazon MQ for ActiveMQ supports two types of broker storage – durability optimized using Amazon Elastic File System (Amazon EFS) and throughput optimized using Amazon Elastic Block Store (EBS). To take advantage of high durability and replication across multiple Availability Zones, use durability optimized brokers backed by Amazon EFS. To take advantage of high throughput for your high-volume applications, use throughput optimized brokers backed by EBS. Throughput optimized message brokers reduce the number of brokers required, and the cost of operating, high-volume applications using Amazon MQ. /amazon-mq/faqs/;What plugins are available for Amazon MQ for RabbitMQ?;Amazon MQ for RabbitMQ includes the management, shovel, federation, and consistent hash exchange plugins on all brokers. /amazon-mq/faqs/;What is an Amazon MQ network of brokers?;Amazon MQ for ActiveMQ uses the “network of brokers” feature that is part of Apache ActiveMQ. A network of brokers consists of multiple brokers connected together. Brokers in the network share information about the clients and destinations each broker hosts. The brokers use this information to route messages through the network. With Amazon MQ, the brokers in the network can either be active-standby brokers (each active broker in the network has a standby node, with shared storage, that will take over if the active node fails), or single-instance brokers (if the node fails, it will be unavailable until it is restarted). Each broker in the network maintains its own unique message store which is replicated across multiple AZs within a region. The nodes in the network forward messages to each other, so messages are stored by a single broker at any given time. /sqs/faqs/;What are the benefits of Amazon SQS over homegrown or packaged message queuing systems?;Amazon SQS provides several advantages over building your own software for managing message queues or using commercial or open-source message queuing systems that require significant upfront time for development and configuration. /sqs/faqs/;How is Amazon SQS different from Amazon Simple Notification Service (SNS)?;Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates. Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components. /sqs/faqs/;How is Amazon SQS different from Amazon MQ?;If you're using messaging with existing applications, and want to move your messaging to the cloud quickly and easily, we recommend you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. If you are building brand new applications in the cloud, we recommend you consider Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. /sqs/faqs/;Does Amazon SQS provide message ordering?;Yes. FIFO (first-in-first-out) queues preserve the exact order in which messages are sent and received. If you use a FIFO queue, you don't have to place sequencing information in your messages. For more information, see FIFO Queue Logic in the Amazon SQS Developer Guide. 
/sqs/faqs/;Does Amazon SQS guarantee delivery of messages?;Standard queues provide at-least-once delivery, which means that each message is delivered at least once. /sqs/faqs/;How is Amazon SQS different from Amazon Kinesis Streams?;Amazon SQS offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components. Amazon SQS provides common middleware constructs such as dead-letter queues and poison-pill management. It also provides a generic web services API and can be accessed by any programming language that the AWS SDK supports. Amazon SQS supports both standard and FIFO queues. /sqs/faqs/;Does Amazon use Amazon SQS for its own applications?;Yes. Developers at Amazon use Amazon SQS for a variety of applications that process large numbers of messages every day. Key business processes in both Amazon.com and AWS use Amazon SQS. /sqs/faqs/;How much does Amazon SQS cost?;You pay only for what you use, and there is no minimum fee. /sqs/faqs/;What can I do with the Amazon SQS Free Tier?;The Amazon SQS Free Tier provides you with 1 million requests per month at no charge. /sqs/faqs/;Will I be charged for all Amazon SQS requests?;Yes, for any requests beyond the free tier. All Amazon SQS requests are chargeable, and they are billed at the same rate. /sqs/faqs/;Do Amazon SQS batch operations cost more than other requests?;No. Batch operations (SendMessageBatch, DeleteMessageBatch, and ChangeMessageVisibilityBatch) all cost the same as other Amazon SQS requests. By grouping messages into batches, you can reduce your Amazon SQS costs. /sqs/faqs/;How will I be charged and billed for my use of Amazon SQS?;There are no initial fees to begin using Amazon SQS. At the end of the month, your credit card will be automatically charged for the month’s usage. /sqs/faqs/;How can I track and manage the costs associated with my Amazon SQS queues?;You can tag and track your queues for resource and cost management using cost allocation tags. A tag is a metadata label comprised of a key-value pair. For example, you can tag your queues by cost center and then categorize and track your costs based on these cost centers. /sqs/faqs/;Do your prices include taxes?;Except as noted otherwise, our prices don't include any applicable taxes and duties such as VAT or applicable sales tax. /sqs/faqs/;Can I use Amazon SQS with other AWS services?;Yes. You can make your applications more flexible and scalable by using Amazon SQS with compute services such as Amazon EC2, Amazon Elastic Container Service (ECS), and AWS Lambda, as well as with storage and database services such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. /sqs/faqs/;How can I interact with Amazon SQS?;You can access Amazon SQS using the AWS Management Console, which helps you create Amazon SQS queues and send messages easily. /sqs/faqs/;What API actions are available for Amazon SQS?;For information on message queue operations, see the Amazon SQS API Reference. /sqs/faqs/;Who can perform operations on a message queue?;Only an AWS account owner (or an AWS account that the account owner has delegated rights to) can perform operations on an Amazon SQS message queue. /sqs/faqs/;How does Amazon SQS identify messages?;All messages have a global unique ID that Amazon SQS returns when the message is delivered to the message queue. 
The ID isn’t required to perform any further actions on the message, but it is useful for tracking the receipt of a particular message in the message queue. /sqs/faqs/;How does Amazon SQS handle messages that can't be processed?;In Amazon SQS, you can use the API or the console to configure dead letter queues, which receive messages from other source queues. When configuring a dead letter queue, you are required to set appropriate permissions for the dead letter queue redrive using RedriveAllowPolicy. /sqs/faqs/;What is a visibility timeout?;The visibility timeout is a period of time during which Amazon SQS prevents other consuming components from receiving and processing a message. For more information, see Visibility Timeout in the Amazon SQS Developer Guide. /sqs/faqs/;Does Amazon SQS support message metadata?;Yes. An Amazon SQS message can contain up to 10 metadata attributes. You can use message attributes to separate the body of a message from the metadata that describes it. This helps process and store information with greater speed and efficiency because your applications don't have to inspect an entire message before understanding how to process it. /sqs/faqs/;How can I determine the time-in-queue value?;To determine the time-in-queue value, you can request the SentTimestamp attribute when receiving a message. Subtracting that value from the current time results in the time-in-queue value. /sqs/faqs/;What is the typical latency for Amazon SQS?;Typical latencies for SendMessage, ReceiveMessage, and DeleteMessage API requests are in the tens or low hundreds of milliseconds. /sqs/faqs/;For anonymous access, what is the value of the SenderId attribute for a message?;When the AWS account ID is not available (for example, when an anonymous user sends a message), Amazon SQS provides the IP address. /sqs/faqs/;What is Amazon SQS long polling?;Amazon SQS long polling is a way to retrieve messages from your Amazon SQS queues. While the regular short polling returns immediately, even if the message queue being polled is empty, long polling doesn’t return a response until a message arrives in the message queue, or the long poll times out. /sqs/faqs/;Is there an additional charge for using Amazon SQS long polling?;No. Long-polling ReceiveMessage calls are billed exactly the same as short-polling ReceiveMessage calls. /sqs/faqs/;When should I use Amazon SQS long polling, and when should I use Amazon SQS short polling?;In almost all cases, Amazon SQS long polling is preferable to short polling. Long-polling requests let your queue consumers receive messages as soon as they arrive in your queue while reducing the number of empty ReceiveMessageResponse instances returned. /sqs/faqs/;What value should I use for my long-poll timeout?;In general, you should use a maximum of 20 seconds for a long-poll timeout. Because higher long-poll timeout values reduce the number of empty ReceiveMessageResponse instances returned, try to set your long-poll timeout as high as possible. /sqs/faqs/;What is the AmazonSQSBufferedAsyncClient for Java?;The AmazonSQSBufferedAsyncClient for Java provides an implementation of the AmazonSQSAsyncClient interface and adds several important features: /sqs/faqs/;Where can I download the AmazonSQSBufferedAsyncClient for Java?;You can download the AmazonSQSBufferedAsyncClient as part of the AWS SDK for Java. /sqs/faqs/;Do I have to rewrite my application to use the AmazonSQSBufferedAsyncClient for Java?;No. 
The AmazonSQSBufferedAsyncClient for Java is implemented as a drop-in replacement for the existing AmazonSQSAsyncClient. /sqs/faqs/;How can I subscribe Amazon SQS message queues to receive notifications from Amazon SNS topics?;In the Amazon SQS console, select an Amazon SQS standard queue. Under Queue Actions, select Subscribe Queue to SNS Topic from the drop-down list. In the dialog box, select the topic from the Choose a Topic drop-down list, and click Subscribe. /sqs/faqs/;Can I delete all messages in a message queue without deleting the message queue itself?;Yes. You can delete all messages in an Amazon SQS message queue using the PurgeQueue action. /sqs/faqs/;What regions are FIFO queues available in?;FIFO queues are available in all AWS regions where Amazon SQS is available. See here for details on Amazon SQS region availability. /sqs/faqs/;How many copies of a message will I receive?;FIFO queues are designed to never introduce duplicate messages. However, your message producer might introduce duplicates in certain scenarios: for example, if the producer sends a message, does not receive a response, and then resends the same message. Amazon SQS APIs provide deduplication functionality that prevents your message producer from sending duplicates. Any duplicates introduced by the message producer are removed within a 5-minute deduplication interval. /sqs/faqs/;Are the Amazon SQS queues I used previously changing to FIFO queues?;"No. Amazon SQS standard queues (the new name for existing queues) remain unchanged, and you can still create standard queues. These queues continue to provide the highest scalability and throughput; however, you will not get ordering guarantees and duplicates might occur." /sqs/faqs/;Can I convert my existing standard queue to a FIFO queue?;No. You must choose the queue type when you create it. However, it is possible to move to a FIFO queue. For more information, see Moving From a Standard Queue to a FIFO Queue in the Amazon SQS Developer Guide. /sqs/faqs/;Are Amazon SQS FIFO queues backwards-compatible?;To take advantage of FIFO queue functionality, you must use the latest AWS SDK. /sqs/faqs/;With which AWS or external services are Amazon SQS FIFO queues compatible?;Some AWS or external services that send notifications to Amazon SQS might not be compatible with FIFO queues, despite allowing you to set a FIFO queue as a target. /sqs/faqs/;Are Amazon SQS FIFO queues compatible with the Amazon SQS Buffered Asynchronous Client, the Amazon SQS Extended Client Library for Java, or the Amazon SQS Java Message Service (JMS) Client?;FIFO queues aren't currently compatible with the Amazon SQS Buffered Asynchronous Client. /sqs/faqs/;Which AWS CloudWatch metrics do Amazon SQS FIFO queues support?;FIFO queues support all metrics that standard queues support. For FIFO queues, all approximate metrics return accurate counts. For example, the following AWS CloudWatch metrics are supported: /sqs/faqs/;What are message groups?;"Messages are grouped into distinct, ordered ""bundles"" within a FIFO queue. For each message group ID, all messages are sent and received in strict order. However, messages with different message group ID values might be sent and received out of order. You must associate a message group ID with a message. If you don't provide a message group ID, the action fails." /sqs/faqs/;Do Amazon SQS FIFO queues support multiple producers?;Yes. One or more producers can send messages to a FIFO queue. 
Messages are stored in the order that they were successfully received by Amazon SQS. /sqs/faqs/;Do Amazon SQS FIFO queues support multiple consumers?;By design, Amazon SQS FIFO queues don't serve messages from the same message group to more than one consumer at a time. However, if your FIFO queue has multiple message groups, you can take advantage of parallel consumers, allowing Amazon SQS to serve messages from different message groups to different consumers. /sqs/faqs/;What is the throughput quota for an Amazon SQS FIFO queue?;By default, FIFO queues support up to 3,000 messages per second with batching or up to 300 messages per second (300 send, receive, or delete operations per second) without batching. If you require higher throughput, you can enable high throughput mode for FIFO on the Amazon SQS console, which will support up to 30,000 messages per second with batching, or up to 3,000 messages per second without batching. /sqs/faqs/;Are there any limits specific to FIFO queue attributes?;The name of a FIFO queue must end with the .fifo suffix. The suffix counts towards the 80-character queue name limit. To determine whether a queue is FIFO, you can check whether the queue name ends with the suffix. /sqs/faqs/;How reliable is the storage of my data in Amazon SQS?;Amazon SQS stores all message queues and messages within a single, highly-available AWS region with multiple redundant Availability Zones (AZs), so that no single computer, network, or AZ failure can make messages inaccessible. For more information, see Regions and Availability Zones in the Amazon Relational Database Service User Guide. /sqs/faqs/;How can I secure the messages in my message queues?;Authentication mechanisms ensure that messages stored in Amazon SQS message queues are secured against unauthorized access. You can control who can send messages to a message queue and who can receive messages from a message queue. For additional security, you can build your application to encrypt messages before they are placed in a message queue. /sqs/faqs/;Why are there separate ReceiveMessage and DeleteMessage operations?;When Amazon SQS returns a message to you, the message stays in the message queue whether or not you actually receive the message. You're responsible for deleting the message and the deletion request acknowledges that you’re done processing the message. /sqs/faqs/;Can a deleted message be received again?;No. FIFO queues never introduce duplicate messages. /sqs/faqs/;What happens if I issue a DeleteMessage request on a previously-deleted message?;When you issue a DeleteMessage request on a previously-deleted message, Amazon SQS returns a success response. /sqs/faqs/;What are the benefits of SSE for Amazon SQS?;SSE lets you transmit sensitive data in encrypted queues. SSE protects the contents of messages in Amazon SQS queues using keys managed in the AWS Key Management Service (AWS KMS). SSE encrypts messages as soon as Amazon SQS receives them. The messages are stored in encrypted form and Amazon SQS decrypts messages only when they are sent to an authorized consumer. /sqs/faqs/;Can I use SNS, CloudWatch Events and S3 Events with encrypted queues?;Yes. To do this you need to enable compatibility between AWS services (e.g., Amazon CloudWatch Events, Amazon S3, and Amazon SNS) and queues with SSE. For detailed instructions see the Compatibility section of the SQS Developer Guide. 
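As an illustrative sketch tying together the FIFO queue and long-polling entries above (queue name and message contents are placeholders), the following sends an ordered message with a message group ID and a deduplication ID, then long-polls for up to 20 seconds and uses the SentTimestamp attribute to compute time in queue.

    import time
    import boto3

    sqs = boto3.client("sqs")

    # FIFO queue names must end in .fifo.
    queue_url = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={"FifoQueue": "true"},
    )["QueueUrl"]

    # Producer: messages with the same group ID are delivered in strict order;
    # the deduplication ID suppresses duplicates within the 5-minute interval.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody='{"orderId": "1234", "state": "PLACED"}',
        MessageGroupId="customer-42",
        MessageDeduplicationId="order-1234-placed",
    )

    # Consumer: a 20-second long poll (the recommended maximum) avoids empty responses.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        WaitTimeSeconds=20,
        AttributeNames=["SentTimestamp"],
    )
    for msg in resp.get("Messages", []):
        sent_ms = int(msg["Attributes"]["SentTimestamp"])
        print("time in queue (s):", time.time() - sent_ms / 1000.0)
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])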
/sqs/faqs/;What regions are queues with SSE available in?;Server-side encryption (SSE) for Amazon SQS is available in all AWS regions where Amazon SQS is available. See here for details on Amazon SQS region availability. /sqs/faqs/;How do I enable SSE for a new or existing Amazon SQS queue?;To enable SSE for a new or existing queue using the Amazon SQS API, specify the customer master key (CMK) ID: the alias, alias ARN, key ID, or key ARN of an AWS-managed CMK or a custom CMK by setting the KmsMasterKeyId attribute of the CreateQueue or SetQueueAttributes action. /sqs/faqs/;What Amazon SQS queue types can use SSE?;Both standard and FIFO queues support SSE. /sqs/faqs/;What permissions do I need to use SSE with Amazon SQS?;Before you can use SSE, you must configure AWS KMS key policies to allow encryption of queues and encryption and decryption of messages. /sqs/faqs/;Are there any charges for using SSE with Amazon SQS?;There are no additional Amazon SQS charges. However, there are charges for calls from Amazon SQS to AWS KMS. For more information, see AWS Key Management Service Pricing. /sqs/faqs/;What does SSE for Amazon SQS encrypt and how is it encrypted?;SSE encrypts the body of a message in an Amazon SQS queue. /sqs/faqs/;What algorithm does SSE for Amazon SQS use to encrypt messages?;SSE uses the AES-GCM 256 algorithm. /sqs/faqs/;How can I estimate my AWS KMS usage costs?;To predict costs and better understand your AWS bill, you might want to know how often Amazon SQS uses your CMK. /sqs/faqs/;Is Amazon SQS PCI DSS certified?;Yes. Amazon SQS is PCI DSS Level 1 certified. For more information, see PCI Compliance. /sqs/faqs/;Is Amazon SQS HIPAA-eligible?;Yes, AWS has expanded its HIPAA compliance program to include Amazon SQS as a HIPAA Eligible Service. If you have an executed Business Associate Agreement (BAA) with AWS, you can use Amazon SQS to build your HIPAA-compliant applications, store messages in transit, and transmit messages—including messages containing protected health information (PHI). /sqs/faqs/;How long can I keep my messages in Amazon SQS message queues?;Longer message retention provides greater flexibility to allow for longer intervals between message production and consumption. /sqs/faqs/;How do I configure Amazon SQS to support longer message retention?;To configure the message retention period, set the MessageRetentionPeriod attribute using the console or using the SetQueueAttributes method. Use this attribute to specify the number of seconds a message will be retained in Amazon SQS. /sqs/faqs/;How do I configure the maximum message size for Amazon SQS?;To configure the maximum message size, use the console or the SetQueueAttributes method to set the MaximumMessageSize attribute. This attribute specifies the number of bytes that an Amazon SQS message can contain. Set this attribute to a value between 1,024 bytes (1 KB) and 262,144 bytes (256 KB). For more information, see Using Amazon SQS Message Attributes in the Amazon SQS Developer Guide. /sqs/faqs/;What kind of data can I include in a message?;Amazon SQS messages can contain up to 256 KB of text data, including XML, JSON and unformatted text. The following Unicode characters are accepted: /sqs/faqs/;How large can Amazon SQS message queues be?;A single Amazon SQS message queue can contain an unlimited number of messages. However, there is a quota of 120,000 for the number of inflight messages for a standard queue and 20,000 for a FIFO queue. 
Messages are inflight after they have been received from the queue by a consuming component, but have not yet been deleted from the queue. /sqs/faqs/;How many message queues can I create?;You can create any number of message queues. /sqs/faqs/;Is there a size limit on the name of Amazon SQS message queues?;Queue names are limited to 80 characters. /sqs/faqs/;Are there restrictions on the names of Amazon SQS message queues?;You can use alphanumeric characters, hyphens (-), and underscores (_). /sqs/faqs/;Can I reuse a message queue name?;A message queue's name must be unique within an AWS account and region. You can reuse a message queue's name after you delete the message queue. /sqs/faqs/;How do I share a message queue?;You can associate an access policy statement (and specify the permissions granted) with the message queue to be shared. Amazon SQS provides APIs for creating and managing access policy statements: /sqs/faqs/;Who pays for shared queue access?;The message queue owner pays for shared message queue access. /sqs/faqs/;How do I identify another AWS user I want to share a message queue with?;The Amazon SQS API uses the AWS account number to identify AWS users. /sqs/faqs/;What do I need to provide to an AWS user I want to share a message queue with?;To share a message queue with an AWS user, provide the full URL from the message queue you want to share. The CreateQueue and ListQueues operations return this URL in their responses. /sqs/faqs/;Does Amazon SQS support anonymous access?;Yes. You can configure an access policy that allows anonymous users to access a message queue. /sqs/faqs/;When should I use the permissions API?;The permissions API provides an interface for sharing access to a message queue to developers. However, this API cannot allow conditional access or more advanced use cases. /sqs/faqs/;When should I use the SetQueueAttributes operation with JSON objects?;The SetQueueAttributes operation supports the full access policy language. For example, you can use the policy language to restrict access to a message queue by IP address and time of day. For more information, see Amazon SQS Policy Examples in the Amazon SQS Developer Guide. /sqs/faqs/;What regions is Amazon SQS available in?;For service region availability, see the AWS Global Infrastructure Region Table. /sqs/faqs/;Can I share messages between queues in different regions?;No. Each Amazon SQS message queue is independent within each region. /sqs/faqs/;Is there a pricing difference between regions?;Amazon SQS pricing is the same for all regions, except China (Beijing). For more information, see Amazon SQS Pricing. /sqs/faqs/;What is the pricing structure between various regions?;You can transfer data between Amazon SQS and Amazon EC2 or AWS Lambda free of charge within a single region. /sqs/faqs/;What are dead-letter queues?;A dead-letter queue is an Amazon SQS queue to which a source queue can send messages if the source queue’s consumer application is unable to consume the messages successfully. Dead-letter queues make it easier for you to handle message consumption failures and manage the life cycle of unconsumed messages. You can configure an alarm for any messages delivered to a dead-letter queue, examine logs for exceptions that might have caused them to be delivered to the queue, and analyze message contents to diagnose consumer application issues. Once you recover your consumer application, you can redrive the messages from your dead-letter queue to the source queue. 
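As an illustrative sketch of the dead-letter queues described above (the maxReceiveCount redrive condition is detailed in the next entry), the following attaches a redrive policy to an existing source queue with boto3. The queue URL and dead-letter queue ARN are placeholders; the dead-letter queue must already exist.

    import json
    import boto3

    sqs = boto3.client("sqs")

    # After a message is received 5 times without being deleted, SQS moves it to the DLQ.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
                "maxReceiveCount": "5",
            })
        },
    )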
/sqs/faqs/;How do dead-letter queues work?;When you create your source queue, Amazon SQS allows you to specify a dead-letter queue (DLQ) and the condition under which SQS should move messages to the DLQ. The condition is the number of times a consumer can receive a message from the queue, defined as maxReceiveCount. This configuration of a dead-letter queue with a source queue and the maxReceiveCount is known as the redrive policy. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS is designed to move the message to a dead-letter queue (with its original message ID). For example, if the source queue has a redrive policy with maxReceiveCount set to five, and the consumer of the source queue receives a message six times without successfully consuming it, SQS moves the message to the dead-letter queue. /sqs/faqs/;How does the dead-letter queue redrive to source queue work?;First, it allows you to investigate a sample of messages available in the dead-letter queue by showing message attributes and related metadata. Then, once you have investigated the messages, you can move them back to their source queue(s). You can also select the redrive velocity to configure the rate at which Amazon SQS will move the messages from the dead-letter queue to the source queue. /sqs/faqs/;Can I use a dead letter queue with FIFO queues?;Yes. However, you must use a FIFO dead letter queue with a FIFO queue. (Similarly, you can use only a standard dead letter queue with a standard queue.) /sns/faqs/;What is Amazon Simple Notification Service (Amazon SNS)?;Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. It is designed to make web-scale computing easier for developers. Amazon SNS follows the “publish-subscribe” (pub-sub) messaging paradigm, with notifications being delivered to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates. With simple APIs requiring minimal up-front development effort, no maintenance or management overhead and pay-as-you-go pricing, Amazon SNS gives developers an easy mechanism to incorporate a powerful notification system with their applications. /sns/faqs/;How can I get started using Amazon SNS?;You can create an Amazon SNS topic and publish messages in a few steps by completing our 10-minute tutorial, Send Fanout Notifications. /sns/faqs/;What are the benefits of using Amazon SNS?;Amazon SNS offers several benefits making it a versatile option for building and integrating loosely-coupled, distributed applications: /sns/faqs/;What are some example uses for Amazon SNS notifications?;The Amazon SNS service can support a wide variety of needs including event notification, monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and any other application that generates or consumes notifications. For example, Amazon SNS can be used in workflow systems to relay events among distributed computer applications, move data between data stores or update records in business systems. Event updates and notifications concerning validation, approval, inventory changes and shipment status are immediately delivered to relevant system components as well as end-users. 
A common pattern is to use SNS to publish messages to Amazon SQS message queues to reliably send messages to one or many system components asynchronously. Another example use for Amazon SNS is to relay time-critical events to mobile applications and devices. Since Amazon SNS is both highly reliable and scalable, it provides significant advantages to developers who build applications that rely on real-time events. /sns/faqs/;How does Amazon SNS work?;"It is very easy to get started with Amazon SNS. Developers must first create a “topic” which is an “access point” – identifying a specific subject or event type – for publishing messages and allowing clients to subscribe for notifications. Once a topic is created, the topic owner can set policies for it such as limiting who can publish messages or subscribe to notifications, or specifying which notification protocols will be supported (i.e. HTTP/HTTPS, email, SMS). Subscribers are clients interested in receiving notifications from topics of interest; they can subscribe to a topic or be subscribed by the topic owner. Subscribers specify the protocol and end-point (URL, email address, etc.) for notifications to be delivered. When publishers have information or updates to notify their subscribers about, they can publish a message to the topic – which immediately triggers Amazon SNS to deliver the message to all applicable subscribers." /sns/faqs/;How is Amazon SNS different from Amazon SQS?;Amazon Simple Queue Service (SQS) and Amazon SNS are both messaging services within AWS, which provide different benefits for developers. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates. Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components. Amazon SQS provides flexibility for distributed components of applications to send and receive messages without requiring each component to be concurrently available. /sns/faqs/;How is Amazon SNS different from Amazon MQ?;Amazon MQ, Amazon SQS, and Amazon SNS are messaging services that are suitable for anyone from startups to enterprises. If you're using messaging with existing applications, and want to move your messaging to the cloud quickly and easily, we recommend you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. If you are building brand new applications in the cloud, we recommend you consider Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. You can use Amazon SQS and SNS to decouple and scale microservices, distributed systems, and serverless applications, and improve reliability. /sns/faqs/;How can I get started using Amazon SNS?;"To sign up for Amazon SNS, click the “Sign up for Amazon SNS” button on the Amazon SNS detail page. You must have an Amazon Web Services account to access this service; if you do not already have one, you will be prompted to create one when you begin the Amazon SNS sign-up process. After signing up, please refer to the Amazon SNS documentation and Getting Started Guide to begin using Amazon SNS. 
Using the AWS Management Console, you can easily create topics, add subscribers, send notifications, and edit topic policies – all from your browser." /sns/faqs/;Is Amazon SNS supported in the AWS Management Console?;Amazon SNS is supported in the AWS Management Console which provides a point-and-click, web-based interface to access and manage Amazon SNS. Using the AWS Management Console, you can create topics, add subscribers, and send notifications – all from your browser. In addition, the AWS Management Console makes it easy to publish messages to your endpoint of choice (HTTP, SQS, Lambda, mobile push, email, or SMS) and edit topic policies to control publisher and subscriber access. /sns/faqs/;What are the Amazon SNS service access points in each region?;Please refer to the AWS Regions and Endpoints section of the AWS documentation for the latest list of all Amazon SNS service access points. /sns/faqs/;Can I get a history of SNS API calls made on my account for security analysis and operational troubleshooting purposes?;Yes. SNS supports AWS CloudTrail, a web service that records AWS API calls for your account and delivers log files to you. With CloudTrail, you can obtain a history of such information as the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by SNS. /sns/faqs/;How much does Amazon SNS cost?;With Amazon SNS, there is no minimum fee and you pay only for what you use. Users pay $0.50 per 1 million Amazon SNS Requests, $0.06 per 100,000 notification deliveries over HTTP, and $2.00 per 100,000 notification deliveries over email. For SMS messaging, charges vary by destination country. /sns/faqs/;How will I be charged and billed for my use of Amazon SNS?;There are no set-up fees to begin using the service. At the end of the month, your credit card will automatically be charged for that month’s usage. You can view your charges for the current billing period at any time on the Amazon Web Services web site by logging into your Amazon Web Services account and clicking “Account Activity” under “Your Web Services Account”. /sns/faqs/;When does billing of my Amazon SNS use begin and end?;Your Amazon SNS billing cycle begins on the first day of each month and ends on the last day of each month. Your monthly charges will be totaled at the end of each month. /sns/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /sns/faqs/;What is the format of an Amazon SNS topic?;Topic names are limited to 256 characters. Alphanumeric characters plus hyphens (-) and underscores (_) are allowed. Topic names must be unique within an AWS account. After you delete a topic, you can reuse the topic name. When a topic is created, Amazon SNS will assign a unique ARN (Amazon Resource Name) to the topic, which will include the service name (SNS), region, AWS ID of the user and the topic name. The ARN will be returned as part of the API call to create the topic. Whenever a publisher or subscriber needs to perform any action on the topic, they should reference the unique topic ARN. /sns/faqs/;What are the available operations for Amazon SNS and who can perform these operations?;Amazon SNS provides a set of simple APIs to enable event notifications for topic owners, subscribers, and publishers. 
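As an illustrative sketch of the create/subscribe/publish flow described in the entries above (the topic name and email address are placeholders), the following uses boto3; the email subscriber must confirm the subscription before deliveries begin.

    import boto3

    sns = boto3.client("sns")

    # Create (or look up) the topic; the returned ARN is used for all later calls.
    topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

    # The endpoint owner receives a confirmation message and must explicitly opt in.
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

    # Publish a message; the Subject field applies to email deliveries.
    sns.publish(
        TopicArn=topic_arn,
        Subject="Order shipped",
        Message="Order 1234 has shipped.",
    )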
/sns/faqs/;Why are there two different APIs to list subscriptions?;The two APIs to list subscriptions perform different functions and return different results: /sns/faqs/;What are the different delivery formats/transports for receiving notifications?;"“SQS” – Users can specify an SQS standard or FIFO queue as the endpoint; Amazon SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.)." /sns/faqs/;Can topic owners control the transports that are allowed on topics they create/own?;Topic owners can configure specific transports on their topics by setting the appropriate permissions through access control policies. /sns/faqs/;How does an owner set Access Control policies?;Please refer to the Amazon SNS Getting Started Guide for an overview of setting access control policies. /sns/faqs/;Can a single topic support subscriptions over multiple protocols/transports?;Subscribers to an Amazon SNS topic can receive notifications on any transport supported by the topic. A topic can support subscriptions and notification deliveries over multiple transports. /sns/faqs/;Can subscribers selectively receive only a subset of messages published to a topic?;Yes, you can use message filtering on Amazon Simple Notification Service (SNS) to build simpler and more streamlined pub/sub architectures. Message filtering enables Amazon SNS topic subscribers to selectively receive only a subset of the messages they are interested in, as opposed to receiving all messages published to a topic. To monitor the usage of SNS subscription filter policies, use Amazon CloudWatch metrics, which are automatically collected for you. You can also use the AWS::SNS::Subscription resource type in AWS CloudFormation templates to quickly deploy solutions that use SNS message filtering. For more details, try our 10-minute tutorial, Filter Messages Published to Topics, or see the Filter Messages with Amazon SNS section in our documentation. /sns/faqs/;Can Amazon SNS be used with other AWS services?;Amazon SNS can be used with other AWS services such as Amazon SQS, Amazon EC2 and Amazon S3. Here is an example of how an order processing workflow system uses Amazon SNS with Amazon EC2, SQS, and SimpleDB. In this workflow system, messages are sent between application components whenever a transaction occurs or an order advances through the order processing pipeline. When a customer initially places an order, the transaction is first recorded in Amazon SimpleDB and an application running on Amazon EC2 forwards the order request to a payment processor which debits the customer’s credit card or bank account. Once approved, an order confirmation message is published to an Amazon SNS topic. In this case, the topic has various subscribers over Email/HTTP – merchant, customer and supply chain partners – and notifications sent by Amazon SNS for that topic can instantly update all of them that payment processing was successful. Notifications can also be used to orchestrate an order processing system running on EC2, where notifications sent over HTTP can trigger real-time processing in related components such as an inventory system or a shipping service. By integrating Amazon SNS with Amazon SQS, all notifications delivered are also persisted in an Amazon SQS queue where they are processed by an auditing application at a future time. 
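As an illustrative sketch of the subscription filter policies described above (topic and queue ARNs are placeholders, and the queue's access policy must already allow SNS to send to it, as covered later in this section), the following subscribes an SQS queue that only receives messages whose event_type attribute is "order_placed".

    import json
    import boto3

    sns = boto3.client("sns")

    # Subscriber side: attach a filter policy so only matching messages are delivered.
    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-east-1:123456789012:billing-queue",
        Attributes={"FilterPolicy": json.dumps({"event_type": ["order_placed"]})},
    )

    # Publisher side: set the message attribute the filter policy matches against.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
        Message='{"orderId": "1234"}',
        MessageAttributes={"event_type": {"DataType": "String", "StringValue": "order_placed"}},
    )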
/sns/faqs/;Is Amazon SNS available in all regions where AWS services are available?;Please refer to the AWS Regions and Endpoints section of the AWS documentation for the most up-to-date information on Amazon SNS availability. /sns/faqs/;How soon can customers recreate topics with previously used topic names?;Topic names should typically be available for reuse approximately 30-60 seconds after the previous topic with the same name has been deleted. The exact time will depend on the number of subscriptions which were active on the topic – topics with a few subscribers will be available instantly for reuse, topics with larger subscriber lists may take longer. /sns/faqs/;What are SNS FIFO topics?;Similar to standard SNS topics, SNS FIFO topics allow users to publish a message to a topic, so it can be delivered to a series of subscribing endpoints. When the delivery of those messages to subscribers must be in order (first-in-first-out), and once only, and you want SNS to take care of it, SNS FIFO topics are the way to go. Amazon SNS FIFO topics deliver ordered messages to Amazon Simple Queue Service (Amazon SQS) FIFO queues to provide consistent end-to-end message ordering for distributed applications. You can now reduce the effort required to process your high throughput, consistently ordered transactions and simplify your messaging architecture. Example use cases include bank transaction logs, stock tickers, flight trackers, price updates, news broadcasting, and inventory management. /sns/faqs/;When should I use SNS FIFO topics and when should I use Kinesis Data Streams?;Both SNS FIFO topics and Kinesis Streams enable you to build applications that require strictly ordered, many-to-many messaging. SNS FIFO topics can further unlock application integration use cases that require large ordered fan-out, up to 100 subscribers. Kinesis Streams, on the other hand, supports ordered fan-out up to 5 subscribers and is often used for analytics and anomaly detection use cases. /sns/faqs/;How would a user subscribe for notifications to be delivered over email?;To receive email notifications for a particular topic, a subscriber should specify “Email” or “Email-JSON” as the protocol and provide a valid email address as the end-point. This can be done using the AWS Management Console or by calling the Amazon SNS API directly. Amazon SNS will then send an email with a confirmation link to the specified email address, and require the user monitoring the email address to explicitly opt-in for receiving email notifications from that particular topic. Once the user confirms the subscription by clicking the provided link, all messages published to the topic will be delivered to that email address. /sns/faqs/;Why does Amazon SNS provide two different transports to receive notifications over email?;The two email transports are provided for two distinct types of customers/end-users. “Email-JSON” sends notifications as a JSON object, and is meant for applications to programmatically process emails. The “Email” transport is meant for end-users/consumers and notifications are regular, text-based messages which are easily readable. /sns/faqs/;Can a user change the Subject and Display name for notifications sent over Email/Email-JSON?;Amazon SNS allows users to specify the Subject field for emails as a parameter passed in to the Publish API call and can be different for every message published. The Display name for topics can be set using the SetTopicAttributes API – this name applies to all emails sent from this topic. 
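As an illustrative sketch of the SNS FIFO topic entries above (ARNs are placeholders, and the subscribed SQS FIFO queue must also grant SNS permission to send to it), the following creates a FIFO topic, subscribes a FIFO queue, and publishes an ordered message.

    import boto3

    sns = boto3.client("sns")

    # FIFO topic names must end in .fifo.
    topic_arn = sns.create_topic(
        Name="stock-ticker.fifo",
        Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "false"},
    )["TopicArn"]

    # FIFO topics deliver to SQS FIFO queues (placeholder queue ARN).
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-east-1:123456789012:stock-ticker-consumer.fifo",
    )

    # Messages with the same group ID are delivered to subscribers in publish order.
    sns.publish(
        TopicArn=topic_arn,
        Message='{"symbol": "AMZN", "price": 101.5}',
        MessageGroupId="AMZN",
        MessageDeduplicationId="AMZN-2024-01-01T00:00:00Z",
    )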
/sns/faqs/;Do subscribers need to specifically configure their email settings to receive notifications from Amazon SNS?;In most cases, users should be able to receive subscription confirmations and notifications from Amazon SNS without doing anything specific. However, there could be cases where the email provider’s default settings or other user-specific configurations mistakenly redirect the emails to the junk/spam folder. To ensure that users see confirmation messages and notifications sent from Amazon SNS, users can add “no-reply@sns.amazonaws.com” to their contact lists and check their junk/spam folders for messages from Amazon SNS. /sns/faqs/;In the case of passing in an SQS queue as an endpoint, will users need to create the queue prior to subscribing? What permissions will the queue require?;Using the SQS console, users should create the SQS queue prior to subscribing it to a Topic. Select this queue on the console, and from the ‘Queue Actions’ in the menu bar, select ‘Subscribe Queue to SNS Topic’ from the drop-down list. In the subscribe dialog box, select the topic from the ‘Choose a Topic’ drop-down list, and click the ‘Subscribe’ button. For complete step-by-step instructions, please refer to the Amazon SNS documentation. /sns/faqs/;How would a developer set up an Amazon SQS queue to receive Amazon SNS notifications?;To have Amazon SNS deliver notifications to an SQS queue, a developer should subscribe to a topic specifying “SQS” as the transport and a valid SQS standard queue as the end-point. In order to allow the SQS queue to receive notifications from Amazon SNS, the SQS queue owner must subscribe the SQS queue to the Topic for Amazon SNS to successfully deliver messages to the queue. /sns/faqs/;How can I fanout identical messages to multiple SQS queues?;First, create an SNS topic. Then create and subscribe multiple SQS standard queues to the SNS topic. Now whenever a message is sent to the SNS topic, the message will be fanned out to the SQS queues, i.e. SNS will deliver the message to all the SQS queues that are subscribed to the topic. /sns/faqs/;What is the format of structured notification messages sent by Amazon SNS?;The notification message sent by Amazon SNS for deliveries over HTTP, HTTPS, Email-JSON and SQS transport protocols will consist of a simple JSON object, which will include the following information: /sns/faqs/;How would a user subscribe for notifications to be delivered over SMS?;Please refer to the 'SMS Related Question' section below. /sns/faqs/;How can users secure the messages sent to my topics?;All API calls made to Amazon SNS are validated for the user’s AWS ID and the signature. In addition, we recommend that users secure their data over the wire by connecting to our secure SSL end-points. /sns/faqs/;Who can create a topic?;Topics can only be created by users with valid AWS IDs who have signed up for Amazon SNS. The easiest way to create a topic is to use the AWS Management Console. It can also be created through the CreateTopic API. The AWS Management Console is available at: http://aws.amazon.com/console /sns/faqs/;Can multiple users publish to a single topic?;A topic owner can set explicit permissions to allow more than one user (with a valid AWS ID) to publish to a topic. By default, only topic owners have permissions to publish to a topic. /sns/faqs/;How can the owner grant/revoke publish or subscribe permissions on a topic?;The AddPermission and RemovePermission APIs provide a simple interface for developers to add and remove permissions for a topic. 
However, for conditional access and more advanced use cases, users should use access control policies to manage permissions. The easiest way to manage permissions is to use the AWS Management Console. The AWS Management Console is available at: http://aws.amazon.com/console /sns/faqs/;How does a topic owner give access to subscribers? Do subscribers have to have valid AWS IDs?;Amazon SNS makes it easy for users with and without AWS IDs to receive notifications. The owner of the topic can grant/restrict access to subscribers by setting appropriate permissions for the topic using Access Control policies. Users can receive notifications from Amazon SNS in two ways: /sns/faqs/;How will Amazon SNS authenticate API calls?;All API calls made to Amazon SNS will validate authenticity by requiring that requests be signed with the secret key of the AWS ID account and verifying the signature included in the requests. /sns/faqs/;How does Amazon SNS validate a subscription request to ensure that notifications will not be sent to users as spam?;As part of the subscription registration, Amazon SNS will ensure that notifications are only sent to valid, registered subscribers/end-points. To prevent spam and ensure that a subscriber end-point is really interested in receiving notifications from a particular topic, Amazon SNS requires an explicit opt-in from subscribers using a 2-part handshake: /sns/faqs/;How long will subscription requests remain pending, while waiting to be confirmed?;Tokens included in the confirmation messages sent to end-points on a subscription request are valid for 3 days. /sns/faqs/;Who can change permissions on a topic?;Only the owner of the topic can change permissions for that topic. /sns/faqs/;How can users verify that notification messages are sent from Amazon SNS?;To ensure the authenticity of the notifications, Amazon SNS will sign all notification deliveries using a cryptographically secure, asymmetric mechanism (private-public key pair based on certificates). Amazon SNS will publish its certificate to a well-known location (e.g. http://sns.us-east-1.amazonaws.com/SimpleNotificationService.pem for the US East region) and sign messages with the private key of that certificate. Developers/applications can obtain the certificate and validate the signature in the notifications with the certificate’s public key, to ensure that the notification was indeed sent out by Amazon SNS. For further details on certificate locations, please refer to the Amazon SNS details page. /sns/faqs/;Do publishers have to sign messages as well?;"Amazon SNS requires publishers with AWS IDs to validate their messages by signing messages with their secret AWS key; the signature is then validated by Amazon SNS." /sns/faqs/;Can a publisher/subscriber use SSL to secure messages?;Yes, both publishers and subscribers can use SSL to help secure the channel to send and receive messages. Publishers can connect to Amazon SNS over HTTPS and publish messages over the SSL channel. Subscribers should register an SSL-enabled end-point as part of the subscription registration, and notifications will be delivered over an SSL channel to that end-point. /sns/faqs/;What permissions does a subscriber need to allow Amazon SNS to send notifications to a registered endpoint?;The owner of the end-point receiving the notifications has to grant permissions for Amazon SNS to send messages to that end-point.
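Tying together the fan-out answer above and the endpoint-permission answer just before this point, here is a hedged boto3 sketch that creates a topic, creates two standard queues, grants the topic permission to send to each queue via the queue access policy, and subscribes them. All names are placeholders.

import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="image-uploads")["TopicArn"]

for name in ["thumbnail-jobs", "metadata-jobs"]:
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # The queue owner must allow this topic to send messages to the queue.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    }
    sqs.set_queue_attributes(
        QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)}
    )

    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# A single publish is now fanned out to both queues.
sns.publish(TopicArn=topic_arn, Message="s3://bucket/uploads/cat.jpg")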
/sns/faqs/;How can subscriptions be unsubscribed?;Subscribers can be unsubscribed either by the topic owner, the subscription owner or others – depending on the mechanism used for confirming the subscription request. /sns/faqs/;Is Amazon SNS HIPAA eligible?;Yes, the AWS HIPAA compliance program includes Amazon SNS as a HIPAA-eligible service. If you have an executed Business Associate Agreement (BAA) with AWS, you can now use Amazon SNS to build HIPAA-compliant applications. If you don't have a BAA or have other questions about using AWS for your HIPAA-compliant applications, contact us for more information. Please note that Amazon SNS mobile push notification and SMS functionalities are outside the scope of the Service’s HIPAA eligibility and thus not suitable for transmitting Protected Health Information (PHI). /sns/faqs/;What else is Amazon SNS compliant with?;Please see AWS Services in Scope by Compliance Program for the latest information about SNS and other AWS services. /sns/faqs/;How durable is my data once published to Amazon SNS?;SNS provides durable storage of all messages that it receives. Upon receiving a publish request, SNS stores multiple copies (to disk) of the message across multiple Availability Zones before acknowledging receipt of the request to the sender. Each AWS Region has multiple, isolated locations known as Availability Zones. Although rare, should a failure occur in one zone, the operation of SNS and the durability of your messages continue without disruption. /sns/faqs/;Will a notification contain more than one message?;No, all notification messages will contain a single published message. /sns/faqs/;How many times will a subscriber receive each message?;Although most of the time each message will be delivered to your application exactly once, the distributed nature of Amazon SNS and transient network conditions could result in occasional, duplicate messages at the subscriber end. Developers should design their applications such that processing a message more than once does not create any errors or inconsistencies. /sns/faqs/;Will messages be delivered to me in the exact order they were published?;The Amazon SNS service will attempt to deliver messages from the publisher in the order they were published into the topic. However, network issues could potentially result in out-of-order messages at the subscriber end. /sns/faqs/;Can a message be deleted after being published?;No, once a message has been successfully published to a topic, it cannot be recalled. /sns/faqs/;Does Amazon SNS guarantee that messages are delivered to the subscribed endpoint?;Yes, as long as the subscribed endpoint is accessible. A message delivery fails when Amazon SNS can't access a subscribed endpoint, due to either a client-side or a server-side error. A client-side error happens when the subscribed endpoint has been deleted by the endpoint owner, or when its access permissions have changed in a way that prevents Amazon SNS from delivering messages to this endpoint. A server-side error happens when the service that powers the subscribed endpoint is unavailable, such as Amazon SQS or AWS Lambda. When Amazon SNS receives a client-side error, or continues to receive a server-side error for a message beyond the number of retries specified by the corresponding retry policy, Amazon SNS discards the message — unless a dead-letter queue is attached to the subscription. For more information, see Message Delivery Retries and Amazon SNS Dead-Letter Queues.
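The dead-letter-queue behavior described above is configured per subscription. The sketch below (boto3, with placeholder ARNs) attaches an existing SQS queue as a DLQ via the subscription's RedrivePolicy attribute; the DLQ's own access policy must also allow the topic to send messages to it, which is omitted here for brevity.

import json
import boto3

sns = boto3.client("sns")

subscription_arn = "arn:aws:sns:us-east-1:111122223333:orders:1a2b3c4d"  # placeholder
dlq_arn = "arn:aws:sqs:us-east-1:111122223333:orders-dlq"                # placeholder

# Messages that exhaust the delivery retry policy are moved to this queue
# instead of being discarded.
sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="RedrivePolicy",
    AttributeValue=json.dumps({"deadLetterTargetArn": dlq_arn}),
)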
/sns/faqs/;What happens to Amazon SNS messages if the subscribing endpoint is not available?;If a message cannot be successfully delivered on the first attempt, Amazon SNS executes a 4-phase retry policy: 1) retries with no delay in between attempts, 2) retries with minimum delay between attempts, 3) retries according to a back-off model, and 4) retries with maximum delay between attempts. When the message delivery retry policy is exhausted, Amazon SNS can move the message to a dead-letter queue (DLQ). For more information, see Message Delivery Retries and Amazon SNS Dead-Letter Queues. /sns/faqs/;When should I mark an SMS message as Transactional?;SMS messages that are of high priority to your business should be marked as Transactional. This ensures that messages such as those that contain one-time passwords (OTPs) or PINs get delivered over routes with the highest delivery reliability. These routes tend to be more expensive than Promotional messaging routes in countries other than the US. You should never mark marketing messages as Transactional, because this violates the local regulatory policies in certain countries, and your account may be marked for abuse and suspended. /sns/faqs/;When should I mark an SMS message as Promotional?;SMS messages that carry marketing messaging should be marked Promotional. Amazon SNS ensures that such messages are sent over routes that have a reasonable delivery reliability but are substantially cheaper than the most reliable routes. This also allows Amazon SNS to handle and deliver your messages in compliance with local laws and regulations. /sns/faqs/;What are account-level and message-level spend quotas and how do they work?;Spend quotas can be specified for an AWS account and for individual messages, and the quotas apply only to the cost of sending SMS messages. /sns/faqs/;Is two-way SMS supported?;Amazon SNS does not currently support two-way SMS capabilities, except for opt out where required by local regulations. /sns/faqs/;Do I need to subscribe phone numbers to an SNS Topic before sending an SMS message to it?;You no longer need to subscribe a phone number to an Amazon SNS topic before you publish messages to it. Now, you can directly publish messages to a phone number using the Amazon SNS console or the Publish request in the Amazon SNS API. /sns/faqs/;Does AWS offer short codes for purchase?;Yes. You can reserve a dedicated short code that is assigned to your account and available exclusively to you. /sns/faqs/;Does AWS offer long codes for purchase?;Yes. You may purchase long codes for use with Amazon SNS as described here. /sns/faqs/;Will SMS notifications come from a specific origination number?;"Amazon SNS will use numbers as configured for your account. It will prioritize using a dedicated short code, followed by one of the dedicated long codes. In case you do not have dedicated numbers, Amazon SNS will fall back to using a shared set of numbers to send SMS notifications. When using the shared set, Amazon SNS attempts to use the same number when sending messages to a specific destination phone number. This is called ""Sticky Sender ID"". However, depending on various factors like network conditions and throughput available, a different number may be used." /sns/faqs/;Which countries does Amazon SNS support for Worldwide SMS?;Amazon SNS supports more than 200 countries, and we keep growing our reach. Please refer to the SMS Supported Country List for a comprehensive list of supported calling countries. For SMS message sending to China, please Contact Us.
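A hedged boto3 sketch of the SMS answers in this area: check that the destination has not opted out, then publish directly to the phone number, marking the message Transactional via the SMS type message attribute. The phone number is a placeholder in E.164 format.

import boto3

sns = boto3.client("sns")
number = "+14255550123"  # placeholder destination, E.164 format

# Skip recipients who have opted out of receiving SMS from this account.
if not sns.check_if_phone_number_is_opted_out(phoneNumber=number)["isOptedOut"]:
    sns.publish(
        PhoneNumber=number,
        Message="Your one-time passcode is 123456",
        MessageAttributes={
            "AWS.SNS.SMS.SMSType": {
                "DataType": "String",
                "StringValue": "Transactional",  # use "Promotional" for marketing messages
            },
        },
    )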
/sns/faqs/;Which AWS regions support Worldwide SMS?;Please refer to the SNS Supported Regions and Countries page of the Amazon SNS documentation for the latest list of regions in which applications using Amazon SNS to send SMS can be hosted. /sns/faqs/;Do the AWS phone numbers change?;Yes. Amazon SNS will preferentially use the configured dedicated numbers of an account in priority order of short codes before long codes. If no dedicated numbers are configured, one of the numbers from a shared set will be used. /sns/faqs/;Why do some devices on the same carrier receive messages from different phone numbers?;Amazon SNS will preferentially use the configured dedicated numbers of an account in priority order of short codes before long codes. If no dedicated numbers are configured, one of the numbers from a shared set will be used. /sns/faqs/;What is the phone number format for sending messages to other countries?;AWS strongly encourages E.164 number formatting for all phone numbers both in the ‘to’ and ‘from’ (when applicable) fields. Please refer to the SMS Supported Country List for a comprehensive list of supported countries. /sns/faqs/;Does Amazon SNS determine if a phone number is a mobile, landline, or VoIP number?;No. Currently, Amazon SNS does not detect whether a phone number is mobile, landline, or VoIP. /sns/faqs/;Is time-based or scheduled delivery supported for SMS messages?;No. Amazon SNS does not currently support time-based or scheduled delivery. /sns/faqs/;How do I track the delivery status of my SMS messages?;By enabling the Delivery Status feature in Amazon SNS, you can get information on the following for each message: MessageID, Time Sent, Destination Phone Number, Disposition, Disposition Reason (if applicable), Price, and Dwell Time. /sns/faqs/;Do you support MMS?;No. Currently, Amazon SNS does not support MMS messages. /sns/faqs/;What is the cost of receiving SMS messages from Amazon SNS?;Costs for receiving SMS messages depend on the data and messaging rates of the recipient's wireless/mobile carrier plan. /sns/faqs/;How do recipients opt out from receiving SMS messages from AWS?;Recipients can use their devices to opt out by replying to the message with any of the following: /sns/faqs/;How do I know if a recipient device has ‘opted out’ of Global SMS?;The SNS console displays the list of opted out numbers for your account. Additionally, the Amazon SNS API provides the ListPhoneNumbersOptedOut request for listing opted out phone numbers. /sns/faqs/;If a user opts out, will that number be unsubscribed automatically from the SNS Topic?;No. Opt-outs do not unsubscribe a number from an Amazon SNS topic, but rather disable the subscription. This means that if a phone number later opts back in, you do not need to re-subscribe it to the topic. /sns/faqs/;How do I confirm the end user received the SMS message?;You can use our Delivery Status feature to get information on the final disposition of your SMS message. For more information on the feature and how to use it, please refer to our documentation. /sns/faqs/;Does Amazon SNS provide delivery receipts for SMS messages?;Our Delivery Status feature provides information based on delivery receipts received from the destination carrier. For more information on the Delivery Status feature and how to use it, please refer to our documentation. /sns/faqs/;Does SMS support delivery to VoIP services like Google Voice or Hangouts?;Yes. Amazon SNS does support delivery to VoIP services that can receive SMS messages. /sns/faqs/;What is 10DLC?;
10DLC is a 10-digit long code that you can use as an origination identity when sending text messages (SMS) to consumers in the US. It supports a maximum throughput of 100 text messages per second (TPS). AWS doesn't determine the throughput allocated to you. Instead, US carriers allocate throughput to you when you register for 10DLC. To use 10DLC numbers, carriers require that you provide information about your company and your use cases (also called 10DLC campaigns). /sns/faqs/;How long does it take to register a 10DLC campaign?;In some cases, registration can occur immediately. For example, if you've previously registered with The Campaign Registry (TCR), they might already have your information. However, it can take a week or longer to receive approval for some campaigns. After your company and 10DLC campaigns are approved by TCR, you can purchase a 10DLC number and associate it with your campaigns. After you purchase a 10DLC number, it may take up to a week for activation. For more information, see 10DLC in the Amazon SNS Developer Guide. /sns/faqs/;Can I procure an unregistered P2P long code to send A2P SMS to US phone numbers?;No. As of February 16, 2021, you cannot purchase SMS-enabled unregistered person-to-person (P2P) long codes from AWS. Starting June 1, 2021, Amazon SNS no longer supports sending application-to-person (A2P) SMS messages over unregistered US long codes to US destinations. Instead, you can purchase and use short codes, 10DLC, and/or toll-free numbers as originating identities for US destinations. For more information, see Origination numbers in the Amazon SNS Developer Guide. /sns/faqs/;Should I delete the existing unregistered US long codes in my AWS account?;Yes. On June 1, 2021, carriers will no longer deliver messages sent via unregistered long codes to US destinations. If you do not need them for other purposes (for example, voice telephony using other AWS products), delete them from your account. To send SMS, you can convert existing unregistered long codes to 10DLC numbers by associating them with a 10DLC campaign. For more information, see Associating a long code with a 10DLC campaign in the Amazon SNS Developer Guide. Amazon SNS uses Amazon Pinpoint for managing 10DLC campaigns. /sns/faqs/;I only use Amazon SNS or Amazon Cognito. Should I still use Amazon Pinpoint to register my 10DLC campaign?;Yes. You must use Amazon Pinpoint to register 10DLC brands and campaigns. When you complete the registration process and your 10DLC number is activated, Amazon SNS and Amazon Cognito automatically use the 10DLC in your account as the origination ID when sending SMS. /sns/faqs/;Can I continue to use my long code when it is being migrated to a 10DLC number?;Yes. You can continue using the long code as an origination ID when it is converted into a 10DLC number. It is important that the 10DLC process is completed before June 1, 2021, as unregistered long codes cannot be used after that date. /sns/faqs/;What is a 10DLC campaign? What information do I need to provide to create one?;A 10DLC campaign represents a use case for which you are sending a text message to your customers. For example, you might send a notification when a customer’s bill is due. Before you send the SMS, you need to register your use cases for sending text messages, and associate a 10DLC number with a 10DLC campaign. For more information, see Registering a 10DLC campaign in the Amazon SNS Developer Guide. Amazon SNS uses Amazon Pinpoint for managing 10DLC campaigns.
/sns/faqs/;When sending SMS, how does Amazon SNS choose from the origination identities associated with my AWS account?;When you publish messages to Amazon SNS, you can choose one of the registered origination identities by setting the AWS.MM.SMS.OriginationNumber attribute. AWS recommends that you specify the origination identity when publishing messages. /sns/faqs/;Can I use multiple 10DLC numbers for one campaign?;Yes. You can associate multiple 10DLC numbers with a single campaign. However, you cannot use the same 10DLC number across multiple campaigns. /sns/faqs/;I registered my 10DLC company and campaign successfully. However, the associated 10DLC number is stuck in a ‘Pending’ state. What do I do?;When the 10DLC number is in the pending state, AWS is working to activate your number on the 10DLC campaign. To activate a number, a valid and active 10DLC brand and 10DLC campaign are required. Activation can take a week or longer to complete. If the 10DLC number has been in the pending state for more than a week, raise a support case via the AWS Support console. /sns/faqs/;Can I use AWS API actions to request 10DLC numbers instead of using the Amazon Pinpoint console?;No. Currently, you can only request 10DLC numbers via the Amazon Pinpoint console. Amazon SNS uses Amazon Pinpoint for managing 10DLC campaigns. /sns/faqs/;How can I use 10DLC across different AWS regions in my AWS account?;10DLC company and 10DLC campaign registration is specific to an AWS account. However, a 10DLC number is specific to an AWS region. You can have multiple 10DLC numbers in an AWS region referring to the same 10DLC campaign. /sns/faqs/;Can I get a 10DLC number with a specific area code?;No. Currently, AWS does not support choosing 10DLC numbers. /sns/faqs/;Can I use 10DLC numbers as originating identities for sending SMS outside the US?;No. You can only use 10DLC numbers to send SMS messages to US destinations. /sns/faqs/;Can I use 10DLC numbers for sending voice messages?;Yes. To use 10DLC numbers to send voice messages, select voice capability when provisioning these numbers. Note that Amazon SNS does not support voice messages. However, you can use these numbers in other AWS services. /sns/faqs/;Can I use variables in my 10DLC campaign sample messages?;"Yes. To use variable content in your sample messages, you can use placeholders in the template that you provide when you register the 10DLC campaign. For example, suppose you want the message to read, ""Hi John. Your OTP is 1234."" In this case, you would write the template as follows: ""Hi {#var1}. Your OTP is {#var2}.""" /sns/faqs/;Can I migrate 10DLC registrations from one AWS account to another? How long does it take?;Yes. To migrate 10DLC registrations between the AWS accounts that you own, create a service quota increase support case in the AWS Support Center. You can expect a response within two weeks. /sns/faqs/;I registered my company directly with The Campaign Registry (TCR) portal. Can I use the same registration for my AWS account?;No. To send SMS using Amazon SNS, you must register your brand and 10DLC campaigns with AWS, using the Amazon Pinpoint console. For more information, see Getting started with 10DLC in the Amazon SNS Developer Guide. /sns/faqs/;I send SMS messages using Amazon SNS from multiple AWS regions. How do I register a 10DLC number in the AWS region I operate in?;10DLC numbers are specific to an AWS region. 10DLC company and campaigns are valid across AWS regions, in the same AWS account.
You can register your brand and campaigns in one AWS region, and procure new 10DLC numbers for those 10DLC campaigns, for use in other AWS regions as needed. /sns/faqs/;What happens when I send SMS messages at a higher rate than my 10DLC campaign throughput quota?;When you exceed your throughput quota, your AWS account experiences throttling errors. The throughput quota is broken down as follows: /sns/faqs/;How can I register my company in two different AWS accounts?;10DLC companies and campaigns reside within a single AWS account. If you have multiple accounts, you can associate those other accounts with your main account in order to use your 10DLC numbers from any of those accounts. For more information, see 10DLC cross-account access in the Amazon SNS Developer Guide. Amazon SNS uses Amazon Pinpoint for managing 10DLC campaigns. /sns/faqs/;Can I use tiny URLs for 10DLC messages?;No. Carriers don't allow the use of tiny URLs that services like bit.ly provide. AWS recommends that you use full URLs, matching your company's domain. Alternatively, you can use URL shortening services that provide custom and/or vanity domains and are clearly related to the brand sending the messages. Be sure to provide these URL examples in the sample messages during 10DLC campaign registration. /sns/faqs/;We use Amazon SNS to send SMS, and we do not set the 'OriginationNumber' attribute. How will Amazon SNS know which 10DLC campaign to use in the event we have more than one campaign in our AWS account?;If you have multiple 10DLC campaigns in your AWS account, AWS recommends that you use the ‘OriginationNumber’ parameter while sending messages via Amazon SNS so that the correct 10DLC campaign is used. If you don't specify this parameter, Amazon SNS chooses the origination identity for you. /sns/faqs/;How can I send SMS using Amazon SNS via 10DLC numbers from AWS regions that are not supported by Amazon Pinpoint?;After a number is configured in an AWS region, you can continue to use Amazon SNS in that region. You can register a 10DLC in an AWS region, and create a service quota increase support case, requesting the transfer of this number to another AWS region of your choice. For more information, see Requesting 10DLC numbers, toll-free numbers, and P2P long codes for SMS messaging in the Amazon SNS Developer Guide. /sns/faqs/;How much do you charge for sending SMS messages?;The price you pay for sending SMS messages varies based on the recipient's country or region, and may also vary based on the recipient's mobile carrier. You can find the latest rates on the SMS Pricing page. /sns/faqs/;Why does the price for sending SMS messages to the same destination country and carrier keep changing?;The costs associated with sending SMS messages to different countries and regions—and even to different carriers within those countries and regions—can change frequently and with little or no notice. Carrier policies, technological changes, and even geopolitical issues can cause the prices for sending SMS messages to change. /sns/faqs/;Am I charged if my SMS messages aren't delivered?;You may be charged for failed deliveries if the destination carrier reports that you attempted to send a message to an invalid phone number. Phone numbers can be invalid for several reasons, such as when the phone number doesn't exist, the recipient's account doesn't have sufficient credit, or the destination number is a landline number. /sns/faqs/;Does the length of a message impact the price I pay?;Yes.
A single SMS message can contain a maximum of 140 bytes of information. If a message contains more than 140 bytes, Amazon SNS automatically splits it into multiple messages. When Amazon SNS splits a long message into several smaller messages, you pay for each individual message. /sns/faqs/;Is there an AWS Free Tier allowance for sending SMS messages?;No. /sns/faqs/;Are there quotas for the number of topics or number of subscribers per topic?;By default, SNS offers 10 million subscriptions per topic, and 100,000 topics per account. To request a higher quota, please contact Support. /sns/faqs/;How much and what kind of data can go in a message?;With the exception of SMS messages, Amazon SNS messages can contain up to 256 KB of text data, including XML, JSON and unformatted text. /sns/faqs/;How many message filters can be applied to a topic?;By default, 200 filter policies per account per region can be applied to a topic. Please contact us if more are required. /sns/faqs/;Are there TCP ports that should be used for cross-region communication between SNS and EC2?;Yes, cross-region communication between SNS and EC2 on ports other than 80/443/4080/8443 is not guaranteed to work and should be avoided. /sns/faqs/;What is raw message delivery?;You can opt in to get your messages delivered in raw form, i.e. exactly as you published them. By default, messages are delivered encoded in JSON that provides metadata about the message and topic. Raw message delivery can be enabled by setting the “RawMessageDelivery” property on the subscriptions. This property can be set by using the AWS Management Console, or by using the API SetSubscriptionAttributes. /sns/faqs/;What is the default behavior if the raw message delivery property on the subscription is not set?;By default, if this property is not set, messages will be delivered in JSON format, which is the current behavior. This ensures existing applications will continue to operate as expected. /sns/faqs/;Which types of endpoints support raw message delivery?;Raw message delivery is supported with SQS and HTTP(S) endpoints. Deliveries to Lambda, email, and SMS endpoints will behave the same independent of the “RawMessageDelivery” property. /sns/faqs/;How will raw messages be delivered to HTTP endpoints?;When raw-formatted messages are delivered to HTTP/s endpoints, the message body will be included in the body of the HTTP POST. /sns/faqs/;What is SNS Mobile Push?;SNS Mobile Push lets you use Amazon Simple Notification Service (SNS) to deliver push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push. With push notifications, an installed mobile application can notify its users immediately by popping a notification about an event, without opening the application. For example, if you install a sports app and enable push notifications, the app can send you the latest score of your favorite team even if the app isn’t running. The notification appears on your device, and when you acknowledge it, the app launches to display more information. Users’ experiences are similar to receiving an SMS, but with enhanced functionality and at a fraction of the cost. /sns/faqs/;How do I get started sending push notifications?;Push notifications can only be sent to devices that have your app installed, and whose users have opted in to receive them. SNS Mobile Push does not require explicit opt-in for sending push notifications, but iOS, Android and Kindle Fire operating systems do require it.
In order to send push notifications with SNS, you must also register your app and each installed device with SNS. For more information, see Using Amazon SNS Mobile Push Notifications. /sns/faqs/;Which push notifications platforms are supported?;Currently, the following push notifications platforms are supported: /sns/faqs/;How many push notifications can I send with the SNS Free Tier?;The SNS free tier includes 1 million publishes, plus 1 million mobile push deliveries. So you can send 1 million free push notifications every month. Notifications to all mobile push endpoints are all counted together toward your 1 million free mobile push deliveries. /sns/faqs/;Does enabling push notifications require any special confirmations with SNS Mobile Push?;No, they do not. End-users opt in to receive push notifications when they first run an app, whether or not SNS delivers the push notifications. /sns/faqs/;Do I have to modify my client app to use SNS Mobile Push?;SNS does not require you to modify your client app. Baidu Cloud Push requires Baidu-specific components to be added to your client code in order to work properly, whether or not you choose to use SNS. /sns/faqs/;How do SNS topics work with Mobile Push?;SNS topics can have subscribers from any supported push notifications platform, as well as any other endpoint type such as SMS or email. When you publish a notification to a topic, SNS will send identical copies of that message to each endpoint subscribed to the topic. If you use platform-specific payloads to define the exact payload sent to each push platform, the publish will fail if it exceeds the maximum payload size imposed by the relevant push notifications platform. /sns/faqs/;What payload size is supported for various target platforms?;SNS supports the maximum payload size that is supported by the underlying native platform. Customers can use a JSON object to send platform-specific messages. See Using the SNS Mobile Push API for additional details. /sns/faqs/;How do platform-specific payloads work?;When you publish to a topic and want to have customized messages sent to endpoints for the different push notification platforms, you need to select the “Use different message body for different protocols” option in the Publish dialog box and then update the messages. You can use platform-specific payloads to specify the exact API string that is relayed to each push notifications service. For example, you can use platform-specific payloads to manipulate the badge count of your iOS application via APNS. For more information, see Using Amazon SNS Mobile Push Notifications. /sns/faqs/;Can one token subscribe to multiple topics?;Yes. Each token can be subscribed to an unlimited number of SNS topics. /sns/faqs/;What is direct addressing? How does it work?;Direct addressing allows you to deliver notifications directly to a single endpoint, rather than sending identical messages to all subscribers of a topic. This is useful if you want to deliver precisely targeted messages to each recipient. When you register device tokens with SNS, SNS creates an endpoint that corresponds to the token. You can publish to the token endpoint just as you would publish to a topic. You can direct publish either the text of your notification, or a platform-specific payload that takes advantage of platform-specific features such as updating the badge count of your app. Direct addressing is currently only available for push notifications endpoints.
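To illustrate device registration, direct addressing, and platform-specific payloads together, here is a hedged boto3 sketch; the platform application ARN and device token are placeholders, and the APNS payload shape is a simplified example.

import json
import boto3

sns = boto3.client("sns")

# Register an installed device's push token with SNS (a direct-addressing target).
endpoint_arn = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:us-east-1:111122223333:app/APNS/my-ios-app",  # placeholder
    Token="0f3a1c-device-token-9b7e",  # placeholder token obtained from the device
)["EndpointArn"]

# Platform-specific payload: a plain-text default, plus a custom APNS payload
# that also bumps the app badge count.
message = {
    "default": "Final score: Home 3 - 1 Away",
    "APNS": json.dumps({"aps": {"alert": "Final score: Home 3 - 1 Away", "badge": 1}}),
}

sns.publish(
    TargetArn=endpoint_arn,
    MessageStructure="json",
    Message=json.dumps(message),
)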
/sns/faqs/;Does SNS support direct addressing for SMS or Email?;At this time, direct addressing is only supported for mobile push endpoints (APNS, FCM, ADM, WNS, MPNS, Baidu) and SMS. Email messaging requires the use of topics. /sns/faqs/;How does SNS Mobile Push handle token feedback from notification services?;"Push notification services such as APNS and FCM provide feedback on tokens which may have expired or may have been replaced by new tokens. If either APNS or FCM reports that a particular token has either expired or is invalid, SNS automatically ""disables"" the application endpoint associated with the token, and notifies you of this change via an event. FCM specifically, at times, not only indicates that a token is invalid, but also provides the new token associated with the application endpoint in its response to SNS. When this happens, SNS automatically updates the associated endpoint with the new token value, leaving the endpoint enabled, and then notifies you of this change via an event." /sns/faqs/;I use Google Cloud Messaging (GCM) for SNS mobile notifications. What happens when GCM is deprecated?;GCM device tokens are completely interchangeable with the newer Firebase Cloud Messaging (FCM) device tokens. If you have existing GCM tokens, you’ll still be able to use them to send notifications. This statement is also true for GCM tokens that you generate in the future. For more information, please visit The End of Google Cloud Messaging, and What it Means for Your Apps blog. /sns/faqs/;Can I migrate existing apps to SNS Mobile Push?;Yes. You can perform a bulk upload of existing device tokens to Amazon SNS, either via the console interface or API. You would also register your app with SNS by uploading your credentials for the relevant push notifications services, and configure your proxy or app to register future new tokens with SNS. /sns/faqs/;Can I monitor my push notifications through Amazon CloudWatch?;Yes. SNS publishes CloudWatch metrics for the number of messages published, number of successful notifications, number of failed notifications, number of notifications filtered out, and size of data published. Metrics are available on a per-application basis. You can access CloudWatch metrics via the AWS Management Console or the CloudWatch APIs. /sns/faqs/;What types of Windows Push Notifications does Amazon SNS support?;SNS supports all push notification types offered by Microsoft WNS and MPNS, including toast, tile, badge and raw notifications. Use the TYPE message attribute to specify which notification type you wish to use. When you use default payloads to send the same message to all mobile platforms, SNS will select toast notifications by default for Windows platforms. It is required to specify a notification type for Windows platforms when you use platform-specific payloads. /sns/faqs/;Does SNS support Windows raw push notifications?;Yes. You must encode the notification payload as text to send raw notifications via SNS. /sns/faqs/;What is Baidu Cloud Push?;Baidu Cloud Push is a third-party alternative push notifications relay service for Android devices. You can use Baidu Cloud Push to reach Android customers in China, no matter what Android app store those customers choose to use for downloading your app. For more information about Baidu Cloud Push, visit: https://push.baidu.com/. /sns/faqs/;Can I publish Baidu notifications from all public AWS regions?;Yes, SNS supports Baidu push notifications from all public AWS regions.
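As a rough companion to the token-feedback answer above, this boto3 sketch checks whether a platform endpoint has been disabled and re-enables it with a fresh token obtained from the device; the endpoint ARN and token are placeholders.

import boto3

sns = boto3.client("sns")
endpoint_arn = "arn:aws:sns:us-east-1:111122223333:endpoint/APNS/my-ios-app/abcd1234"  # placeholder

attrs = sns.get_endpoint_attributes(EndpointArn=endpoint_arn)["Attributes"]
if attrs.get("Enabled", "true").lower() == "false":
    # Re-register with the latest token from the device, then re-enable the endpoint.
    sns.set_endpoint_attributes(
        EndpointArn=endpoint_arn,
        Attributes={"Token": "new-device-token", "Enabled": "true"},  # placeholder token
    )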
/sns/faqs/;Can I use Baidu notifications to any Android app store?;Yes, Baidu push notifications work for apps installed via any Android app store. /sns/faqs/;What are message attributes?;Message attributes allow you to provide structured metadata items (such as timestamps, geospatial data, signatures, and identifiers) about the message. Message attributes are optional and separate from, but sent along with, the message body. This information can be used by the receiver of the message to help decide how to handle the message without having to first process the message body. /sns/faqs/;What message attributes are supported in SNS?;SNS supports different message attributes for each endpoint type, depending on what the endpoint types each support themselves. /sns/faqs/;Does Amazon SNS support HTTP/2 for mobile push notification to APNS endpoints?;Amazon SNS uses HTTP/2 with p12 certificates for sending push notifications via Apple Push Notification Service (APNS) to iOS and macOS endpoints. /sns/faqs/;Do I have to modify my application due to the deprecation of APNS binary protocol as of November 2020?;Amazon SNS uses HTTP/2 with p12 certificates. As it does not rely on the legacy binary protocol, no change is required in your application that is sending push notifications via Amazon SNS. /sns/faqs/;What does support for AWS Lambda endpoints in Amazon SNS mean?;You can invoke your AWS Lambda functions by publishing messages to Amazon SNS topics that have AWS Lambda functions subscribed to them. Because Amazon SNS supports message fan-out, publishing a single message can invoke different AWS Lambda functions or invoke Lambda functions in addition to delivering notifications to supported Amazon SNS destinations such as mobile push, HTTP endpoints, SQS, email and SMS. /sns/faqs/;What is AWS Lambda?;AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information. More information on AWS Lambda and how to create AWS Lambda functions can be found here. /sns/faqs/;What can I do with AWS Lambda functions and Amazon SNS?;By subscribing AWS Lambda functions to Amazon SNS topics, you can perform custom message handling. You can invoke an AWS Lambda function to provide custom message delivery handling by first publishing a message to an AWS Lambda function, having your Lambda function modify the message (e.g. localize the language), and then filtering and routing those messages to other topics and endpoints. Apps and services that already send Amazon SNS notifications, such as Amazon CloudWatch, can now immediately take advantage of AWS Lambda without having to provision or manage infrastructure for custom message handling. You can also use delivery to an AWS Lambda function as a way to publish to other AWS services such as Amazon Kinesis or Amazon S3. You can subscribe an AWS Lambda function to the Amazon SNS topic, and then have the Lambda function in turn write to another service. /sns/faqs/;How do I activate AWS Lambda endpoint support in Amazon SNS?;You need to first create an AWS Lambda function via your AWS account and the AWS Lambda console, and then subscribe that AWS Lambda function to a topic using the Amazon SNS console or the Amazon SNS APIs. Once that is complete, any messages that you publish to the Amazon SNS topics which have Lambda functions subscribed to them will be delivered to the appropriate Lambda functions in addition to any other destinations subscribed to that topic.
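A hedged boto3 sketch of wiring up the Lambda endpoint support described above, plus the rough shape of the SNS event a subscribed function receives (field names per the standard SNS-to-Lambda event record). The ARNs are placeholders.

import boto3

sns = boto3.client("sns")
lmb = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:111122223333:orders"                       # placeholder
function_arn = "arn:aws:lambda:us-east-1:111122223333:function:handle-order"  # placeholder

# Allow the topic to invoke the function, then subscribe the function to it.
lmb.add_permission(
    FunctionName=function_arn,
    StatementId="AllowSNSInvoke",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)

# Inside the function, each delivery arrives as a record under "Records":
def handler(event, context):
    for record in event["Records"]:
        notification = record["Sns"]
        # Placeholder for your own handling of the message payload.
        print(notification["MessageId"], notification["TopicArn"], notification["Message"])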
/sns/faqs/;What does delivery of a message from Amazon SNS to an AWS Lambda function do?;A message delivery from Amazon SNS to an AWS Lambda function creates an instance of the AWS Lambda function and invokes it with your message as an input. For more information on message formats, please refer to the Amazon SNS documentation and the AWS Lambda documentation. /sns/faqs/;How much does this feature cost?;Publishing a message with Amazon SNS costs $0.50 per million requests. Aside from charges incurred in using AWS services, there are no additional fees for delivering a message to an AWS Lambda function. Amazon SNS has a Free Tier of 1 million requests per month. For more information, please refer to Amazon SNS pricing. AWS Lambda function costs are based on the number of requests for your functions and the time your code executes. The AWS Lambda Free Tier includes 1M requests per month and 400,000 GB-seconds of compute time per month. For more information, please refer to AWS Lambda pricing. /sns/faqs/;Can I subscribe AWS Lambda functions created by someone else to Amazon SNS topics that I own?;We currently do not allow an AWS account owner to subscribe an AWS Lambda function that belongs to another account. You can subscribe your own AWS Lambda functions to your own Amazon SNS topics, or subscribe your AWS Lambda functions to an Amazon SNS topic that was created by another account so long as the topic policy for that SNS topic allows it. /sns/faqs/;Is there a quota to the number of AWS Lambda functions that I can subscribe to an Amazon SNS topic?;Amazon SNS treats AWS Lambda functions like any other destination. By default, SNS offers 10 million subscriptions per topic. To request a higher quota, please contact us. /sns/faqs/;What data can I pass to my AWS Lambda function?;When an AWS Lambda function is invoked as a result of an Amazon SNS message delivery, the AWS Lambda function receives data such as the message ID, the topic ARN, the message payload, and message attributes via an SNS event. For more information on the event structure passed to the AWS Lambda function, please read our blog. /sns/faqs/;Can I track delivery status for message delivery attempts to AWS Lambda functions?;To track the success or failure status of message deliveries, you need to activate the Delivery Status feature of Amazon SNS. For more information about how to activate this feature, please read our blog. /sns/faqs/;What regions is AWS Lambda available in?;See AWS Regions and Endpoints for a complete list. /sns/faqs/;Do my AWS Lambda functions need to be in the same region as my Amazon SNS usage?;You can subscribe your AWS Lambda functions to an Amazon SNS topic in any region. /sns/faqs/;Are there any data transfer costs for invoking AWS Lambda functions?;Data transfer costs are applicable to message deliveries to AWS Lambda functions. Please refer to our pricing for more information. /sns/faqs/;Are there any quotas to the concurrency of AWS Lambda functions?;AWS Lambda currently supports 1000 concurrent executions per AWS account per region. If your Amazon SNS message deliveries to AWS Lambda contribute to crossing these concurrency quotas, your Amazon SNS message deliveries will be throttled. If AWS Lambda throttles an Amazon SNS message, Amazon SNS will retry the delivery attempts. For more information about AWS Lambda concurrency quotas, please refer to the AWS Lambda documentation. /sns/faqs/;Can Amazon SNS use the same AWS Lambda functions that I use with other services (e.g. 
Amazon S3)?;You can use the same AWS Lambda functions that you use with other services as long as the same function can parse the event formats from Amazon SNS in addition to the event format of the other services. For the SNS event format, please read our blog. /sns/faqs/;What are VoIP Push Notifications for iOS?;In iOS 8 and later, voice-over-IP (VoIP) apps can register for VoIP remote notifications such that iOS can launch or wake the app, as appropriate, when an incoming VoIP call arrives for the user. The procedure to register for VoIP notifications is similar to registering for regular push notifications on iOS. For more information, please refer to our documentation. /sns/faqs/;What are Mac OS push notifications?;You can now send push notifications to Mac OS desktops that run Mac OS X Lion (10.7) or later using Amazon SNS. For more information, please refer to our documentation. /swf/faqs/;What is Amazon SWF?;The coordination of tasks involves managing execution dependencies, scheduling, and concurrency in accordance with the logical flow of the application. With Amazon SWF, developers get full control over implementing processing steps and coordinating the tasks that drive them, without worrying about underlying complexities such as tracking their progress and keeping their state. Amazon SWF also provides the AWS Flow Framework to help developers use asynchronous programming in the development of their applications. By using Amazon SWF, developers benefit from ease of programming and have the ability to improve their applications’ resource usage, latencies, and throughputs. /swf/faqs/;When should I use Amazon SWF vs. AWS Step Functions?;AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Instead of writing a Decider program, you define state machines in JSON. AWS customers should consider using Step Functions for new applications. If Step Functions does not fit your needs, then you should consider Amazon Simple Workflow (SWF). Amazon SWF provides you with complete control over your orchestration logic, but increases the complexity of developing applications. You may write decider programs in the programming language of your choice, or you may use the Flow framework to use programming constructs that structure asynchronous interactions for you. AWS will continue to provide the Amazon SWF service, Flow framework, and support all Amazon SWF customers. /swf/faqs/;What are the benefits of designing my application as a coordination of tasks?;To coordinate the application execution across workers, you write a program called the decider in your choice of programming language. The separation of processing steps and their coordination makes it possible to manage your application in a controlled manner and gives you the flexibility to deploy, run, scale and update them independently. You can choose to deploy workers and deciders either in the cloud (e.g. Amazon EC2 or Lambda) or on machines behind corporate firewalls. Because of the decoupling of workers and deciders, your business logic can be dynamic and your application can be quickly updated to accommodate new requirements. For example, you can remove, skip, or retry tasks and create new application flows simply by changing the decider.
/swf/faqs/;What can I do with Amazon SWF?;Writing your applications as asynchronous programs using simple programming constructs that abstract details such as initiating tasks to run remotely and tracking the program’s runtime state. Maintaining your application’s execution state (e.g. which steps have completed, which ones are running, etc.). You do not have to use databases, custom systems, or ad hoc solutions to keep execution state. Communicating and managing the flow of work between your application components. With Amazon SWF, you do not need to design a messaging protocol or worry about lost and duplicated tasks. Centralizing the coordination of steps in your application. Your coordination logic does not have to be scattered across different components, but can be encapsulated in a single program. Integrating a range of programs and components, including legacy systems and 3rd party cloud services, into your applications. By allowing your application flexibility in where and in what combination the application components are deployed, Amazon SWF helps you gradually migrate application components from private data centers to public cloud infrastructure without disrupting the application availability or performance. Automating workflows that include long-running human tasks (e.g. approvals, reviews, investigations, etc.) Amazon SWF reliably tracks the status of processing steps that run up to several days or months. Building an application layer on top of Amazon SWF to support domain specific languages for your end users. Since Amazon SWF gives you full flexibility in choosing your programming language, you can conveniently build interpreters for specialized languages (e.g. XPDL) and customized user-interfaces including modeling tools. Getting detailed audit trails and visibility into all running instances of your applications. You can also incorporate visibility capabilities provided by Amazon SWF into your own user interfaces using the APIs provided by Amazon SWF. /swf/faqs/;What are the benefits of Amazon SWF vs. homegrown solutions and existing workflow products?;Existing workflow products often force developers to learn specialized languages, host expensive databases, and give up control over task execution. The specialized languages make it difficult to express complex applications and are not flexible enough for effecting changes quickly. Amazon SWF, on the other hand, is a cloud-based service, allows common programming languages to be used, and lets developers control where tasks are processed. By adopting a loosely coupled model for distributed applications, Amazon SWF enables changes to be made in an agile manner. /swf/faqs/;What are workers and deciders?;You can have several concurrent runs of a workflow on Amazon SWF. Each run is referred to as a workflow execution or an execution. Executions are identified with unique names. You use the Amazon SWF Management Console (or the visibility APIs) to view your executions as a whole and to drill down on a given execution to see task-level details. /swf/faqs/;How is Amazon SWF different from Amazon SQS?;Both Amazon SQS and Amazon SWF are services that facilitate the integration of applications or microservices: /swf/faqs/;How can I get started with Amazon SWF?; Yes. When you get started with Amazon SWF, you can try the sample walkthrough in the AWS Management Console which takes you through registering a domain and types, deploying workers and deciders and starting workflow executions. 
You can download the code for the workers and deciders used in this walkthrough, run them on your infrastructure and even modify them to build your own applications. You can also download the AWS Flow Framework samples, which illustrate the use of Amazon SWF for various use cases such as distributed data processing, Cron jobs and application stack deployment. By looking at the included source code, you can learn more about the features of Amazon SWF and how to use the AWS Flow Framework to build your distributed applications. /swf/faqs/;Are there sample workflows that I can use to try out Amazon SWF?; You can access SWF in any of the following ways: /swf/faqs/;What are the different ways to access SWF?;AWS SDK for Java, Ruby, .NET, and PHP AWS Flow Framework for Java (Included in the AWS SDK for Java) Amazon SWF web service APIs AWS Management Console /swf/faqs/;What is registration?; In SWF, you define logical containers called domains for your application resources. Domains can only be created at the level of your AWS account and may not be nested. A domain can have any user-defined name. Each application resource, such as a workflow type, an activity type, or an execution, belongs to exactly one domain. During registration, you specify the domain under which a workflow or activity type should be registered. When you start an execution, it is automatically created in the same domain as its workflow type. The uniqueness of resource identifiers (e.g. type-ids, execution ID) is scoped to a domain, i.e. you may reuse identifiers across different domains. /swf/faqs/;What are domains?; You can use domains to organize your application resources so that they are easier to manage and do not inadvertently affect each other. For example, you can create different domains for your development, test, and production environments, and create the appropriate resources in each of them. Although you may register the same workflow type in each of these domains, it will be treated as a separate resource in each domain. You can change its settings in the development domain or administer executions in the test domain, without affecting the corresponding resources in the production domain. /swf/faqs/;How can I manage my application resources across different environments and groupings?; The decider can be viewed as a special type of worker. Like workers, it can be written in any language and asks Amazon SWF for tasks. However, it handles special tasks called decision tasks. Amazon SWF issues decision tasks whenever a workflow execution has transitions such as an activity task completing or timing out. A decision task contains information on the inputs, outputs, and current state of previously initiated activity tasks. Your decider uses this data to decide the next steps, including any new activity tasks, and returns those to Amazon SWF. Amazon SWF in turn enacts these decisions, initiating new activity tasks where appropriate and monitoring them. By responding to decision tasks in an ongoing manner, the decider controls the order, timing, and concurrency of activity tasks and consequently the execution of processing steps in the application. SWF issues the first decision task when an execution starts. From there on, Amazon SWF enacts the decisions made by your decider to drive your execution. The execution continues until your decider makes a decision to complete it. 
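To ground the decider description above, here is a hedged boto3 sketch of one decider iteration: long-poll for a decision task, then schedule a single activity task. The domain, task list, and type names are placeholders, and a real decider would inspect the execution history before deciding.

import boto3

swf = boto3.client("swf")

# Long-poll for a decision task (the call returns after ~60 seconds if none is available).
task = swf.poll_for_decision_task(
    domain="orders-domain",                 # placeholder
    taskList={"name": "orders-decisions"},  # placeholder
    identity="decider-1",
)

if task.get("taskToken"):
    # A real decider would examine task["events"] (the execution history) here.
    swf.respond_decision_task_completed(
        taskToken=task["taskToken"],
        decisions=[{
            "decisionType": "ScheduleActivityTask",
            "scheduleActivityTaskDecisionAttributes": {
                "activityType": {"name": "ChargeCard", "version": "1.0"},  # placeholder
                "activityId": "charge-1",
                "taskList": {"name": "orders-activities"},
                "input": "order-1234",
            },
        }],
    )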
/swf/faqs/;How does a decider coordinate a workflow in Amazon SWF?;To help the decider in making decisions, SWF maintains an ongoing record on the details of all tasks in an execution. This record is called the history and is unique to each execution. A new history is initiated when an execution begins. At that time, the history contains initial information such as the execution’s input data. Later, as workers process activity tasks, Amazon SWF updates the history with their input and output data, and their latest state. When a decider gets a decision task, it can inspect the execution’s history. Amazon SWF ensures that the history accurately reflects the execution state at the time the decision task is issued. Thus, the decider can use the history to determine what has occurred in the execution and decide the appropriate next steps. /swf/faqs/;How do I ensure that a worker or decider only gets tasks that it understands?;While initiating an activity task, a decider can add it into a specific task list or request Amazon SWF to add it into the default task list for its activity type. While starting an execution, you can request Amazon SWF to add all of its decision tasks to a specific task list or to the default task list for the workflow type. While requesting tasks, deciders and workers specify which task list they want to receive tasks from. If a task is available in the list, SWF sends it in the response and also includes its type-id. /swf/faqs/;What is the AWS Flow Framework?;AWS Flow Framework makes it convenient to express both facets of coordination through familiar programming concepts. For example, initiating an activity task is as simple as making a call to a method. AWS Flow Framework automatically translates the call into a decision to initiate the activity task and lets Amazon SWF assign the task to a worker, monitor it, and report back on its completion. The framework makes the outcome of the task, including its output data, available to you in the code as the return values from the method call. To express the dependency on a task, you simply use the return values in your code, as you would for typical method calls. The framework’s runtime will automatically wait for the task to complete and continue your execution only when the results are available. Behind the scenes, the framework’s runtime receives worker and decision tasks from Amazon SWF, invokes the relevant methods in your program at the right times, and formulates decisions to send back to Amazon SWF. By offering access to Amazon SWF through an intuitive programming framework, the AWS Flow Framework makes it possible to easily incorporate asynchronous and event driven programming in the development of your applications. /swf/faqs/;How do workers and deciders communicate with Amazon SWF?;To overcome the inefficiencies inherent in polling, Amazon SWF provides long-polling. Long-polling significantly reduces the number of polls that return without any tasks. When workers and deciders poll Amazon SWF for tasks, the connection is retained for a minute if no task is available. If a task does become available during that period, it is returned in response to the long-poll request. By retaining the connection for a period of time, additional polls that would also return empty during that period are avoided. With long-polling, your applications benefit with the security and flow control advantages of polling without sacrificing the latency and efficiency benefits offered by push-based web services. 
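And the matching worker side of the polling model described above, under the same placeholder names: long-poll for an activity task, do the work, and report the result back to Amazon SWF.

import boto3

swf = boto3.client("swf")

task = swf.poll_for_activity_task(
    domain="orders-domain",                  # placeholder
    taskList={"name": "orders-activities"},  # placeholder
    identity="worker-1",
)

if task.get("taskToken"):
    order_id = task.get("input", "")
    # Placeholder for the real work, e.g. charging the card for order_id.
    result = f"charged:{order_id}"
    swf.respond_activity_task_completed(taskToken=task["taskToken"], result=result)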
/swf/faqs/;Can I use an existing web service as a worker?; No, you can use any programming language to write a worker or a decider, as long as you can communicate with Amazon SWF using web service APIs. The AWS SDK is currently available in Java, .NET, PHP and Ruby. The AWS SDK for Java includes the AWS Flow Framework. /swf/faqs/;Does Amazon SWF restrict me to use specific programming languages?; When you start new workflow executions, you provide an ID for that workflow execution. This enables you to associate an execution with a business entity or action (e.g. customer ID, filename, serial number). Amazon SWF ensures that an execution’s ID is unique while it runs. During this time, an attempt to start another execution with the same ID will fail. This makes it convenient for you to satisfy business needs where no more than one execution can be running for a given business action, such as a transaction, submission or assignment. Consider a workflow that registers a new user on a website. When a user clicks the submit button, the user’s unique email address can be used to name the execution. If the execution already exists, the call to start the execution will fail. No additional code is needed to prevent conflicts as a result of the user clicking the button more than once while the registration is in progress. (A code sketch of this pattern appears after this group of answers.) /swf/faqs/;How does Amazon SWF help with scaling my applications?; In addition to a Management Console, Amazon SWF provides a comprehensive set of visibility APIs. You can use these to get run-time information to monitor all your executions and to auto-scale your executions depending on load. You can get detailed data on each workflow type, such as the count of open and closed executions in a specified time range. Using the visibility APIs, you can also build your own custom monitoring applications. /swf/faqs/;I have numerous executions running at any time, but a handful of them often fail or stall. How can I detect and troubleshoot these problematic executions?;To find executions that may be stalled, you can start with a time-based search to hone in on executions that are running longer than expected. Next, you can inspect them to see task level details and determine if certain tasks have been running too long or have failed, or whether the decider has simply not initiated tasks. This can help you pinpoint the problem at a task-level. /swf/faqs/;Can I use AWS Identity and Access Management (IAM) to manage access to Amazon SWF?; Yes. Workers use standard HTTP GET requests to ask Amazon SWF for tasks and to return the computed results. Since workers always initiate requests to Amazon SWF, you do not have to configure your firewall to allow inbound requests. /swf/faqs/;Can I run my workers behind a firewall?; Workers use standard HTTP GET requests to ask Amazon SWF for tasks and to return the computed results. Thus, you do not have to expose any endpoint for your workers. Furthermore, Amazon SWF only gives tasks to workers when the decider initiates those tasks. Since you write the decider, you have full control over when and how tasks are initiated, including the input data that gets sent with them to the workers. /swf/faqs/;Isn’t it a security risk to expose my business logic as workers and deciders?; Amazon SWF provides useful guarantees around task assignment. It ensures that a task is never duplicated and is assigned only once.
Thus, even though you may have multiple workers for a particular activity type (or a number of instances of a decider), Amazon SWF will give a specific task to only one worker (or one decider instance). Additionally, Amazon SWF keeps at most one decision task outstanding at a time for a workflow execution. Thus, you can run multiple decider instances without worrying about two instances operating on the same execution simultaneously. These facilities enable you to coordinate your workflow without worrying about duplicate, lost, or conflicting tasks. /swf/faqs/;How many workflow types, activity types, and domains can I register with Amazon SWF?; At any given time, you can have a maximum of 100,000 open executions in a domain. There is no other limit on the cumulative number of executions that you run or on the number of executions retained by Amazon SWF. /swf/faqs/;Are there limits on the number of workflow executions that I can run simultaneously?; Each workflow execution can run for a maximum of 1 year. Each workflow execution history can grow up to 25,000 events. If your use case requires you to go beyond these limits, you can use features Amazon SWF provides to continue executions and structure your applications using child workflow executions. /swf/faqs/;How long can workflow executions run?; Amazon SWF does not take any special action if a workflow execution is idle for an extended period of time. Idle executions are subject to the timeouts that you configure. For example, if you have set the maximum duration for an execution to be 1 day, then an idle execution will be timed out if it exceeds the 1 day limit. Idle executions are also subject to the Amazon SWF limit on how long an execution can run (1 year). /swf/faqs/;What happens if my workflow execution is idle for an extended period of time?; Amazon SWF does not impose a specific limit on how long a worker can take to process a task. It enforces the timeout that you specify for the maximum duration for the activity task. Note that since Amazon SWF limits an execution to run for a maximum of 1 year, a worker cannot take longer than that to process a task. /swf/faqs/;How long can a worker take to process a task?; Amazon SWF does not impose a specific limit on how long a task is kept before a worker polls for it. However, when registering the activity type, you can set a default timeout for how long Amazon SWF will hold on to activity tasks of that type. You can also specify this timeout or override the default timeout through your decider code when you schedule an activity task. Since Amazon SWF limits the time that a workflow execution can run to a maximum of 1 year, if a timeout is not specified, the task will not be kept longer than 1 year. /swf/faqs/;How long can Amazon SWF keep a task before a worker asks for it?; Yes, you can schedule up to 100 activity tasks in one decision and also issue several decisions one after the other. /swf/faqs/;Does Amazon SWF retain completed executions? If so, for how long?;Amazon SWF retains the history of a completed execution for any number of days that you specify, up to a maximum of 90 days (i.e. approximately 3 months). During retention, you can access the history and search for the execution programmatically or through the console. /swf/faqs/;Is Amazon SWF available across availability zones?; Please visit the AWS General Reference documentation for more information on access endpoints. 
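The execution-ID behavior described above can be used to de-duplicate workflow starts. Below is a hedged boto3 sketch that starts an execution named after a business key and treats an already-running execution as a no-op; the domain, workflow type, and task list names are hypothetical.

```python
import boto3

swf = boto3.client("swf", region_name="us-east-1")

try:
    swf.start_workflow_execution(
        domain="example-domain",                     # hypothetical domain
        workflowId="registration-jane@example.com",  # business key used as the execution ID
        workflowType={"name": "UserRegistration", "version": "1.0"},  # hypothetical type
        taskList={"name": "registration-deciders"},
        input='{"email": "jane@example.com"}',
        executionStartToCloseTimeout="86400",  # one day, in seconds
        taskStartToCloseTimeout="300",
        childPolicy="TERMINATE",
    )
except swf.exceptions.WorkflowExecutionAlreadyStartedFault:
    # A registration for this email address is already in progress; nothing to do.
    pass
```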
/swf/faqs/;Do your prices include taxes?;Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax. Learn more. /appflow/faqs/;What is Amazon AppFlow?;Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications like Salesforce, Marketo, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just a few clicks. With AppFlow, you can run data flows at nearly any scale at the frequency you choose - on a schedule, in response to a business event, or on demand. You can configure powerful data transformation capabilities like filtering and validation to generate rich, ready-to-use data as part of the flow itself, without additional steps. AppFlow automatically encrypts data in motion, and allows users to restrict data from flowing over the public Internet for SaaS applications that are integrated with AWS PrivateLink, reducing exposure to security threats. /appflow/faqs/;How do I get started with AppFlow?;Go to the AWS Management Console and select AppFlow from the Services menu. This will launch the AppFlow home page. An authorized IAM user can create and configure a Flow using the following steps: /appflow/faqs/;Which AWS services are supported by Amazon AppFlow?;Supported AWS services include Amazon S3, Amazon Redshift, Amazon Connect Customer Profiles, Amazon Lookout for Metrics, and Amazon Honeycode, and we’re continuing to add more all the time. /appflow/faqs/;What are some examples of Flows that I can configure using Amazon AppFlow?;AppFlow gives you the flexibility to configure your own Flows. Some examples of flows include: /appflow/faqs/;What are the trigger mechanisms available for flows?;You can run flows on demand, based on business events, or on a schedule: /appflow/faqs/;"Public APIs are available for my SaaS application today; what additional value does AppFlow bring?";"While developers can use public APIs from SaaS applications to pull or push data, AppFlow helps customers save time by allowing anyone who prefers not to write code or learn the API documentation of all the different SaaS applications to implement a range of common integration tasks. AppFlow is a fully managed API integration service that replaces custom connectors. It provisions compute, storage, and networking resources to orchestrate and execute the flows; manages API authorization with the SaaS application and manages the life-cycle of access tokens and API keys; and processes data as part of the flow." /appflow/faqs/;Which SaaS integrations are supported as sources and destinations?;AppFlow supports sources such as Amazon S3, Salesforce, SAP, Marketo, Zendesk, and Slack, as well as many others. It supports Amazon S3, Amazon Redshift, Salesforce, and Snowflake as destinations for flows. To learn more, visit the AppFlow integrations page. /appflow/faqs/;I’d like AppFlow to support another SaaS integration. How can I make that request?;Please contact us to let us know the name of the SaaS vendor as well as your use case. /appflow/faqs/;I’m a SaaS vendor and I’d like to integrate with AppFlow. What do I do next?;We’re always interested in adding support for new SaaS vendors. Please contact us to let us know the use case your customers are asking for and we’ll start the process. 
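As an illustration of the on-demand trigger mechanism mentioned above, the following boto3 sketch runs an existing flow and then lists its recent runs; the flow name is hypothetical, and the flow is assumed to have already been configured with an on-demand trigger in the AppFlow console.

```python
import boto3

appflow = boto3.client("appflow", region_name="us-east-1")

# Trigger an on-demand run of a flow that was previously configured in the console.
appflow.start_flow(flowName="salesforce-accounts-to-s3")  # hypothetical flow name

# Inspect the status of recent runs for that flow.
records = appflow.describe_flow_execution_records(flowName="salesforce-accounts-to-s3")
for run in records["flowExecutions"]:
    print(run["executionId"], run["executionStatus"])
```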
/appflow/faqs/;Is AWS PrivateLink required for AppFlow to connect with a SaaS application?;No. AppFlow will integrate with public API endpoints of SaaS applications that are not AWS PrivateLink enabled. /appflow/faqs/;How do I set up encryption keys?;With AppFlow, your data is always encrypted at rest and in transit. By default, AppFlow will use your AWS managed customer master key (CMK) for encryption. You can also choose your own managed keys - customer managed CMKs for encryption. Create your custom keys in AWS Key Management Service (KMS). Once set up, your custom key is automatically available for use in flow creation. /appflow/faqs/;When should I use AppFlow or AWS Glue?;AWS Glue provides a managed ETL service that makes it easy for data engineers to prepare and load data stored on AWS for analytics. It creates a data catalog from JDBC-compliant data sources (i.e. databases) that makes metadata available for ETL as well as querying via Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. AppFlow connects to API-based data sources and enables users in lines of business to build data integration without writing code. /appflow/faqs/;When should I use AppFlow or AWS DataSync?;AWS DataSync is intended to move large amounts of data between on-premises data sources and AWS Cloud for bulk data migration, processing, and backup or disaster recovery. AWS DataSync is the ideal choice when one-time or periodic transfers of tens or hundreds of terabytes are routine. At this scale, the need is to make effective use of network bandwidth and achieving high throughput. On the other hand, AppFlow is used to exchange data between SaaS applications and AWS services. AppFlow is designed for operational data flows which may be triggered by a person, an event, or a schedule. /appflow/faqs/;When should I use AppFlow or Amazon EventBridge?;Amazon EventBridge enables developers to build event driven applications that interact with SaaS applications and AWS services. SaaS applications that have integrated with EventBridge emit events to the customer’s event bus, which can then be routed to targets such as Amazon EC2 instances or Lambda functions for processing. AppFlow supports bi-directional transfer of data between SaaS applications and AWS services that may be initiated by humans using a UI, a schedule, or events - all with a point and click interface. /appflow/faqs/;Can AppFlow be deployed through CloudFormation templates?;AWS CloudFormation support for Amazon AppFlow is available in all regions where Amazon AppFlow is available. To learn more about how to use AWS CloudFormation to provision and manage Amazon AppFlow resources, visit our documentation. /appflow/faqs/;Does AppFlow support CloudTrail?;Yes. To receive a history of AppFlow API calls made on your account, you simply turn on CloudTrail in the AWS Management Console. /step-functions/faqs/;What is AWS Step Functions?;AWS Step Functions is a fully managed service that makes it easier to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function helps you scale more easily and change applications more quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application. Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps. This makes it easier to build and run multi-step applications. 
Step Functions automatically triggers and tracks each step and retries when there are errors, so your application executes in order and as expected. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems more quickly. You can change and add steps without even writing code, so you can more easily evolve your application and innovate faster. /step-functions/faqs/;What are the benefits of designing my application using orchestration?;Breaking an application into service components (or steps) ensures that the failure of one component does not bring the whole system down. Each component scales independently and that component may be updated without requiring the entire system to be redeployed after each change. The coordination of service components involves managing execution dependencies and scheduling, and concurrency in accordance with the logical flow of the application. In such an application, you can use service orchestration to do this and to handle failures. /step-functions/faqs/;What are some common Step Functions use cases?;Step Functions helps with any computational problem or business process that can be subdivided into a series of steps. It’s also useful for creating end-to-end workflows to manage jobs with interdependencies. Common use cases include: Data processing: consolidate data from multiple databases into unified reports, refine and reduce large data sets into useful formats, iterate and process millions of files in an Amazon Simple Storage Service (S3) bucket with high concurrency workflows, or coordinate multi-step analytics and machine learning workflows DevOps and IT automation: build tools for continuous integration and continuous deployment, or create event-driven applications that automatically respond to changes in infrastructure E-commerce: automate mission-critical business processes, such as order fulfillment and inventory tracking Web applications: implement robust user registration processes and sign-on authentication For more details, explore AWS Step Functions use cases and customer testimonials. /step-functions/faqs/;How does AWS Step Functions work?;When you use Step Functions, you define state machines that describe your workflow as a series of steps, their relationships, and their inputs and outputs. State machines contain a number of states, each of which represents a step in a workflow diagram. States can perform work, make choices, pass parameters, initiate parallel execution, manage timeouts, or terminate your workflow with a success or failure. The visual console automatically graphs each state in the order of execution, making it easier to design multi-step applications. The console highlights the real-time status of each step and provides a detailed history of every execution. For more information, see How Step Functions Works in the Step Functions developer guide. /step-functions/faqs/;How does Step Functions connect to my resources?;You can orchestrate any AWS service using service integrations or any self-managed application component using Activity Tasks. Service integrations help you construct calls to AWS services and include the response in your workflow. AWS–SDK service integrations help you invoke one of over 9,000 AWS API actions from over 200 services directly from your workflow. 
Optimized service integrations further simplify use of common services such as AWS Lambda, Amazon Elastic Container Service (ECS), AWS Glue, or Amazon EMR with capabilities including IAM policy generation and the Run a Job pattern that will automatically wait for completion of asynchronous jobs. Activity Tasks incorporate integration with activity workers that you run in a location of your choice, including in Amazon Elastic Compute Cloud (EC2), in Amazon ECS, on a mobile device, or on an on-premises server. The activity worker polls Step Functions for work, takes any inputs from Step Functions, performs the work using your code, and returns results. Since activity workers request work, it is easier to use workers that are deployed behind a firewall. A Step Functions state machine can contain combinations of service integrations and Activity Tasks. Step Functions applications can also combine activity workers running in a data center with service tasks that run in the cloud. The workers in the data center continue to run as usual, along with any cloud-based service tasks. /step-functions/faqs/;How do I get started with Step Functions?;There are a number of ways you can get started with Step Functions: Explore sample projects in the Step Functions console Read through the Step Functions Developer Guide Try our 10-minute tutorials /step-functions/faqs/;What language does Step Functions use?;AWS Step Functions state machines are defined in JSON using the declarative Amazon States Language. To create an activity worker, you may use any programming language, as long as you can communicate with Step Functions using web service APIs. For convenience, you may use an AWS SDK in the language of your choosing. Lambda supports code written in Node.js (JavaScript), Python, Golang (Go), and C# (using the .NET Core runtime and other languages). For more information on the Lambda programming model, see the Lambda Developer Guide. /step-functions/faqs/;My workflow has some of the properties of Standard Workflows and some properties of Express Workflows. How do I get the best of both?;You can compose the two workflow types: By running Express Workflows as a child workflow of Standard Workflows: The Express Workflow is invoked from a Task state in the parent orchestration workflow and succeeds or fails as a whole from the parent's perspective. It is subject to the parent's retry policy for that Task. By calling Express Workflows from within an Express Workflow, so long as all workflows do not exceed the duration limit of the parent: You might choose to factor your workflows this way if your use case has a combination of long-running or exactly-once, and short-lived high-rate steps. /step-functions/faqs/;How does Step Functions support parallelism?;Step Functions includes a Map state for dynamic parallelism. The Map state has two operating modes, Inline and Distributed, and both modes execute the same set of steps for a collection of items. A Map in Inline mode can support concurrency of 40 parallel branches and execution history limits of 25,000 events or approximately 6,500 state transitions in a workflow. With the Distributed mode, you can run at concurrency of up to 10,000 parallel branches. The Distributed Map has been optimized for Amazon S3, helping you more easily iterate over objects in an S3 bucket. See the FAQ in the integration section. The iterations of a Distributed Map are split into parallel executions to help you overcome payload and execution history limits. 
You can also choose whether each iteration is performed by a Standard Workflow, which is idempotent, or an Express Workflow, which is higher speed and lower cost, but not idempotent. Learn more about the Map state. /step-functions/faqs/;When should I use Step Functions vs. Amazon Simple Queue Service (SQS)?;You should use AWS Step Functions when you need to coordinate service components in the development of highly scalable and auditable applications. Amazon Simple Queue Service (Amazon SQS) is used when you need a reliable, highly scalable, hosted queue for sending, storing, and receiving messages between services. Step Functions keeps track of all tasks and events in an application, whereas SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. The Step Functions console and visibility APIs provide an application-centric view that lets you search for executions, drill down into an execution's detail, and administer executions. SQS would require implementing additional functionality. Step Functions offers several features that facilitate application development, such as passing data between tasks and flexibility in distributing tasks, whereas SQS would require you to implement application-level functionality. Step Functions has out-of-the-box capabilities to build workflows to coordinate your distributed application. SQS allows you to build basic workflows, but has limited functionality. /step-functions/faqs/;When should I use Step Functions vs. Amazon Simple Workflow Service (SWF)?;You should consider using Step Functions for all your new applications, since it provides a more productive and agile approach to coordinating application components using visual workflows. If you require external signals to intervene in your processes or you would like to launch child processes that return a result to a parent, then you should consider Amazon Simple Workflow Service (Amazon SWF). With SWF, instead of writing state machines in declarative JSON, you can write a decider program to separate activity steps from decision steps. This provides you complete control over your orchestration logic, but increases the complexity of developing applications. You may write decider programs in the programming language of your choice, or you may use the Flow framework to use programming constructs that structure asynchronous interactions for you. /step-functions/faqs/;How does Step Functions connect and coordinate other AWS services?;Workflows that you create with Step Functions can connect and coordinate over 200 AWS services using service integrations. For example, you can: Invoke an AWS Lambda function Run an ECS or AWS Fargate task Get an existing item from an Amazon DynamoDB table or put a new item into a DynamoDB table Submit an AWS Batch job and wait for it to complete Publish a message to an Amazon SNS topic Send a message to an Amazon SQS queue For the most common use cases of Step Functions, visit the use cases page, where there are detailed cases alongside their architecture visualizations. /step-functions/faqs/;How does Step Functions work with Amazon API Gateway?;You can associate your Step Functions APIs with Amazon API Gateway so that these APIs invoke your state machines when an HTTPS request is sent to an API method that you define. 
You can use an API Gateway API to start Step Functions state machines that coordinate the components of a distributed backend application, and integrate human activity tasks into the steps of your application such as approval requests and responses. You can also make serverless asynchronous calls to the APIs of services that your application uses. For more information, try our tutorial, Creating a Step Functions API Using API Gateway. /step-functions/faqs/;How does logging and monitoring work for Step Functions?;AWS Step Functions sends metrics to Amazon CloudWatch and AWS CloudTrail for application monitoring. CloudWatch collects and tracks metrics, sets alarms, and automatically reacts to changes in AWS Step Functions. CloudTrail captures all API calls for Step Functions as events, including calls from the Step Functions console and from code calls to the Step Functions APIs. Step Functions also supports CloudWatch Events managed rules for each integrated service in your workflow, and will create and manage CloudWatch Events rules in your AWS account as needed. For more information, see monitoring and logging in the Step Functions developer guide. /step-functions/faqs/;What happens if my Express Workflow fails due to exhausted retries or an unmanaged exception?;By default, Express Workflows report all outcomes to CloudWatch Logs including workflow input, output, and completed steps. You may select different levels of logging to only log errors, and you can choose to not log input and output. Workflows that exhaust retries or have an unmanaged exception should be re-run from the start. /step-functions/faqs/;Can I access Step Functions from resources behind my Amazon VPC without connecting to the internet?;Step Functions also supports VPC Endpoints (VPCE) using AWS PrivateLink. You can access Step Functions from VPC-enabled AWS Lambda functions and other AWS services without traversing the public internet. For more information, refer to the Amazon VPC Endpoints for Step Functions in the Step Functions developer guide. /step-functions/faqs/;What are the compliance standards supported by Step Functions?;Step Functions conforms to HIPAA, FedRAMP, SOC, GDPR, and other common compliance standards. See the AWS Cloud Security site to get a detailed list of supported compliance standards. 
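To tie the Step Functions answers above together, here is a hedged boto3 sketch that registers a minimal Amazon States Language definition and starts an execution; the Lambda function ARN and IAM role ARN are hypothetical placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# A minimal Amazon States Language definition: a single Lambda task with a retry.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",  # hypothetical function
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "End": True,
        }
    },
}

machine = sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # hypothetical role
    type="STANDARD",
)

# Start an execution; Step Functions tracks each state transition and retries per the definition.
sfn.start_execution(
    stateMachineArn=machine["stateMachineArn"],
    input=json.dumps({"orderId": "12345"}),
)
```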
/pinpoint/faqs/;How can I use Amazon Pinpoint to run and manage my campaigns?;Amazon Pinpoint makes it easy to run targeted campaigns and drive customer communications across different channels: email, SMS, push notifications, in-app messaging, or custom channels. Amazon Pinpoint campaigns enable you to define which users to target, determine which messages to send, schedule the best time to deliver the messages, and then track the results of your campaign. /pinpoint/faqs/;How will marketers benefit from Amazon Pinpoint?;The console provides marketers with campaign management tools to create, run, and manage multi-channel campaigns across their applications, user base, and devices. Campaigns can be scheduled or triggered on user changes and actions. Marketers that want to run multi-step campaigns across multiple channels can design journeys to orchestrate an end-to-end experience. Marketers can also leverage the templating support to personalize end-user messaging. Marketers can also measure messaging effectiveness using Pinpoint analytics to understand the impact on user behavior. /pinpoint/faqs/;What's a standard campaign?;Standard campaigns include a targeted segment (either static or dynamic), a message, and a schedule for sending the message. You can also reuse previously defined segments or define a new segment when you create a campaign. For every scheduled campaign, Amazon Pinpoint recalculates the current audience size based on the criteria associated with the segment. /pinpoint/faqs/;What's an A/B test campaign?;A/B campaigns are campaigns with more than one treatment. Each treatment differs from the other based on the message or the sending schedule. You can compare the response rates for each treatment to determine which one had a bigger impact on your customers. /pinpoint/faqs/;What are my scheduling options for campaigns?;During campaign setup in Amazon Pinpoint, you can choose when the campaign should be sent. You have two options: you can send the campaign at a specific time, or you can send it when an event occurs. Time-based campaigns can be scheduled to run one time immediately, or at a time you designate in the future. They can also be scheduled with multiple runs—hourly, daily, weekly, or monthly. To define your recurring campaigns, choose a start date and an end date, and specify whether or not to deliver messages based on each recipient's local time zone. /pinpoint/faqs/;If I use another campaign management service, how does Amazon Pinpoint help me?;Amazon Pinpoint's architecture is modular. Companies can choose which services to use and integrate with their existing systems and processes. Amazon Pinpoint's core services include: engagement analytics, communication channels, deliverability metrics, audience management and segmentation, template management, and campaign management. /pinpoint/faqs/;How do campaign limits work?;On the General Settings page of the Amazon Pinpoint console or in the campaign settings, you can configure the maximum number of messages an endpoint can receive for a campaign. This feature is useful when you want to place strict limits on the number of messages that an endpoint can receive at a specific moment in time. For example, if you create a campaign that's automatically sent to all new customers, you can set the limit to 1. This setting ensures that new customers only receive the message once. /pinpoint/faqs/;What is an activity in a journey?;Journeys automate multi-step campaigns. 
Each activity in a journey is either an action (such as sending an email), a time-based wait, splitting the journey segment based on customer action (such as opening an email vs not opening the email) or enforcing a holdout. /pinpoint/faqs/;Can I schedule my journeys?;You can configure each journey to start and end at a specific time. Each journey can run continuously for up to 18 months. /pinpoint/faqs/;What can I do if I make a mistake in my journey?;Journeys includes a built-in review process that checks for show-stopping errors, while also providing recommendations and best practices. You have to complete this review process before you launch each journey. /pinpoint/faqs/;How can developers use Amazon Pinpoint?;Amazon Pinpoint offers developers a single API layer, CLI support, and client-side SDK support to be able to extend the communication channels through which their applications engage users. These channels include email, SMS text messaging, push notifications, voice messages, and custom channels. Amazon Pinpoint also provides developers with an analytics system that tracks app user behavior and user engagement. With this service, developers can learn how each user prefers to engage and can personalize their end-user's experience to increase the value of the developer's applications. Amazon Pinpoint also helps developers address multiple messaging use cases, such as direct or transactional messaging, targeted or campaign messaging, and event-based messaging. /pinpoint/faqs/;What are event-based campaigns?;Event-based campaigns send messages, such as text messages, push notifications, in-app messaging, and emails, to your customers when they take specific actions within your applications, such as making purchases or watching a video. For example, you can set up a campaign to send a message when a customer creates a new account or when they add an item to their cart but don't purchase it. You can create event-based campaigns by using the Amazon Pinpoint console, or by using the Amazon Pinpoint API. Event-based campaigns are an effective way to implement both transactional use cases, such as one-time-password and order confirmation messages, and targeted use cases, such as marketing promotions. Rather than define a time to send your message to customers, you select specific events, attributes, and metric values that you want to use to trigger your campaigns. For more information about event-based campaigns, please view this blog post. /pinpoint/faqs/;How do I get started with event-based campaigns?;"The first step in setting up an event-based campaign is to create a new campaign. On step 4 of the campaign creation process, you choose when the campaign should be sent. You can choose to send the campaign at a specific time, or you can send it when an event occurs. Choose ""When an event occurs"", and then choose the events, attributes, and metrics that trigger your campaign." /pinpoint/faqs/;What are custom events?;Custom events are event metrics that you define. They help track user actions specific to your application or game. The Amazon Pinpoint event charts provide a view of how often custom events occur. Custom events can be filtered based on attributes and their associated values. /pinpoint/faqs/;What are the benefits of using custom events?;"Custom events help you understand the actions that users take when using your app. For example, a game developer might want to understand both how often a level is completed and how much health each player has left at the end of a level. 
With custom events, you can create an event called ""level_complete"", with ""add_level"" as an attribute, and ""health"" as an attribute value. Each time a level is completed, you can record a ""level_complete"" event with the name of the level and the player's health. By reviewing the events charts, you might discover that a level is too easy because players always finish with maximum health. Using this data, you can adjust the level's difficulty to better challenge and engage players, which might improve retention." /pinpoint/faqs/;Can Amazon Pinpoint tell if a single user uses the same app on more than one device (for example, on their phone and on a tablet device)?;Amazon Pinpoint distinguishes between endpoints and users. An endpoint is a destination that you can send messages to—such as a user's mobile device, email address, or phone number. A user is an individual who has a unique user ID. This ID can be associated with up to 10 endpoints. /pinpoint/faqs/;"How is a ""session"" defined?";A session begins when an app is launched (or brought to the foreground), and ends when the app is terminated (or goes to the background). To accommodate for brief interruptions, like a text message, an inactivity period of up to 5 seconds is not counted as a new session. Total daily sessions shows the number of sessions your app has each day. Average sessions per daily active user shows the mean number of sessions per user per day. /pinpoint/faqs/;What metrics does Amazon Pinpoint track for standard campaigns?;For standard campaigns, you can track messages sent, messages delivered, delivery rate, open rate, and campaign sessions by time of day. /pinpoint/faqs/;Where can I access analytics data?;You can view analytics data on the Amazon Pinpoint console. For each of your projects, the console provides detailed charts and metrics that provide insight into areas such as customer demographics, application usage, purchase activity, and delivery and engagement rates for campaigns. You can also access a subset of these metrics programmatically by using the Amazon Pinpoint API. /pinpoint/faqs/;What types of analytics does Amazon Pinpoint provide on my mobile and web applications?;Amazon Pinpoint offers several types of standard analytics that provide insight into how your application is performing. Standard analytics include metrics for active users, user activities and demographics, sessions, user retention, campaign efficacy, and transactional messages. Using these metrics in combination with the analytics tools on the console, you can perform in-depth analysis by filtering on certain segments, custom attributes, and more. /pinpoint/faqs/;How are daily and weekly retention defined?;Daily retention is measured by determining the number of users that first used your app on a specific day, came back and used your app in the next 7 days (7-day retention), fourteen days (14-day retention), and thirty days (30-day retention). /pinpoint/faqs/;"What is ""sticky factor,"" and how is it calculated?";The sticky factor represents the number of monthly users who used the app on a particular day. /pinpoint/faqs/;What are demographics in Amazon Pinpoint?;The demographics charts provide information about the device attributes for your app users. You can also see custom attributes that you define. /pinpoint/faqs/;How long does Amazon Pinpoint store analytics data?;Amazon Pinpoint automatically stores your analytics data for 90 days. 
You can see your data on the console or you can query a subset of data programmatically using the Amazon Pinpoint API. To keep the data for a longer period of time, you can export data from the console to comma-separated values (.csv) files or configure Amazon Pinpoint to stream event data to Amazon Kinesis. Kinesis is an AWS service that can collect, process, and analyze data from other AWS services in real-time. Amazon Pinpoint can send event data to Kinesis Data Firehose, which streams data to AWS data stores such as Amazon S3 or Amazon Redshift. Amazon Pinpoint can also stream data to Kinesis Data Streams, which ingests and stores multiple data streams for processing in analytics applications. /pinpoint/faqs/;What is 10DLC?;10DLC is a new standard for sending messages from applications such as Amazon Pinpoint to individual recipients. This type of sending is known as Application-to-Person or A2P messaging. You can use 10DLC phone numbers to send text messages to your customers with high throughput and high rates of message delivery. /pinpoint/faqs/;Will I still be able to purchase US long codes after June 1, 2021?;After June 1, 2021, unregistered US long codes will only be available for use with the voice channel. You won’t be able to use unregistered US long codes to send SMS messages. /pinpoint/faqs/;Should I delete the existing unregistered long code(s) for US that I have in my AWS account?;If you don’t do anything, unregistered long codes will remain in your account. You’ll continue to pay $1 per month for each unregistered long code. However, you won’t be able to use unregistered US long codes to send text messages. /pinpoint/faqs/;If I convert my unregistered long codes to 10DLC numbers, will I experience any downtime?; It’s important to note that 10DLC campaigns are completely separate from and unrelated to Amazon Pinpoint campaigns. /pinpoint/faqs/;What is a 10DLC campaign? What information do I need to provide to create one?;A 10DLC campaign is a description of your use case. During the 10DLC campaign registration process, you must describe the use case and provide the message templates that you plan to use. For more information, see Registering a 10DLC Campaign in the Amazon Pinpoint User Guide. /pinpoint/faqs/;If I have just one 10DLC phone number in my account, will every SMS message I send to US recipients be sent using this phone number?;Yes. If you only have one US phone number in your account—whether it’s a 10DLC number, a short code, or toll-free number—all of the messages that you send to recipients in the US will be sent through that number. /pinpoint/faqs/;Can I use multiple 10DLC numbers for one campaign?; It takes about 7–10 days to associate a new phone number with a 10DLC campaign. When the association process is complete, the status changes to ‘Ready.’ /pinpoint/faqs/;I registered my 10DLC company and campaign successfully, but the status of the 10DLC number I associated with the 10DLC campaign is ‘Pending.’ Why is this happening?; No. Currently it’s only possible to purchase phone numbers through the Amazon Pinpoint console. /pinpoint/faqs/;Can I use AWS APIs to purchase 10DLC phone numbers instead of using the AWS Pinpoint console?; The 10DLC company and campaign registration processes are connected to your AWS account, and is not tied to a specific AWS Region. You can have multiple 10DLC numbers that unique to a specific AWS Region. These phone numbers can all be associated with the same 10DLC campaign. 
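As an illustration of sending an SMS message through a specific origination identity such as a 10DLC number, here is a minimal boto3 sketch; the Pinpoint project ID, recipient number, and origination number are hypothetical placeholders.

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

pinpoint.send_messages(
    ApplicationId="1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d",  # hypothetical Pinpoint project ID
    MessageRequest={
        "Addresses": {"+12065550142": {"ChannelType": "SMS"}},  # hypothetical recipient
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Your one-time passcode is 123456",
                "MessageType": "TRANSACTIONAL",
                "OriginationNumber": "+12025550123",  # hypothetical 10DLC number in the account
            }
        },
    },
)
```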
/pinpoint/faqs/;How can I use 10DLC for different AWS Regions in my AWS account?; No, we don’t currently provide the option of specifying an area code when you purchase a phone number. /pinpoint/faqs/;Can I get a 10DLC phone number with a specific area code?; No, 10DLC is a concept that only applies to messages sent to recipients in the US who have US phone numbers. /pinpoint/faqs/;Can I use 10DLC numbers as originating identities for sending texts to recipients outside the US?; Yes. When you purchase a phone number to use with a 10DLC campaign, you can specify whether you want to use the number to send SMS messages, voice messages, or both. As long as you enable the voice capability when you purchase a number, you can use that number to send voice messages. /pinpoint/faqs/;Can I use 10DLC phone numbers to send voice messages?; You can use placeholders for variables in your sample messages to indicate variable content. For example, for the below sample message: “Hi John, your OTP is 1234” can be written as “Hi {#var}, your OTP is {#var}.” /pinpoint/faqs/;How can I enter the variable content to my sample messages while creating a 10DLC campaign?; The analytics dashboards in Amazon Pinpoint provide overall metrics for the current project. However, they don’t provide metrics for specific 10DLC campaigns. You can enable event streaming to capture delivery and response metrics for your messages. /pinpoint/faqs/;Is there a way to capture metrics for each 10DLC campaign?;You can also use the Digital User Engagement Events Database solution to create a queryable database of SMS delivery metrics. /pinpoint/faqs/;How long does it take to migrate 10DLC registrations from one AWS account to another?; No. In this situation, you would have to register again through the Amazon Pinpoint console. /pinpoint/faqs/;I registered my company directly via the Campaign Registry portal. Can the same registration be used for my AWS account?; 10DLC company and campaign registrations apply to your entire AWS account, but individual phone numbers are specific to each AWS Region. You can register your 10DLC company and campaign in one Region, and then use cross-account access to share phone numbers across Regions. /pinpoint/faqs/;Is it possible to share 10DLC phone numbers across AWS Regions?; By default, Amazon Pinpoint limits the number of SMS messages that you can send to 20 per second. If you exceed this account-level limit, you will receive throttling errors. However, you can request an increase to this limit. /pinpoint/faqs/;If I publish at a higher rate than my 10DLC campaign supports, will I receive a throttling error?;With 10DLC, the mobile carriers calculate a trust score for each sender during the company and campaign registration processes. This trust score determines how many messages each carrier will accept from you. If you exceed the limits for your campaign for a particular carrier, that carrier will begin rejecting your messages. We highly recommend that you enable event streaming in order to track these events. /pinpoint/faqs/;Is two-way messaging supported with 10DLC numbers?; 10DLC company and campaign registrations are managed through a third-party organization called the Campaign Registry. Currently, the Campaign Registry only allows a company to be registered once for each sending application (such as Amazon Pinpoint). However, you can use cross-account access to allow multiple AWS accounts to send SMS messages using the same 10DLC phone number. 
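Event streaming, recommended above for tracking carrier-level delivery and rejection events, can be enabled programmatically. A hedged boto3 sketch, with hypothetical project, stream, and role identifiers:

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Stream delivery and engagement events to Kinesis so per-campaign SMS metrics can be analyzed.
pinpoint.put_event_stream(
    ApplicationId="1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d",  # hypothetical project ID
    WriteEventStream={
        "DestinationStreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/pinpoint-events",  # hypothetical stream
        "RoleArn": "arn:aws:iam::123456789012:role/pinpoint-event-stream-role",                   # hypothetical role
    },
)
```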
/pinpoint/faqs/;How can I register my company in two different AWS accounts?; The mobile carriers don’t allow senders to use links that have been shortened using services such as bit.ly or TinyURL. We recommend that you use the full domain of your URLs when you include links in your messages. Alternatively, there are commercial URL shortening services that let you use dedicated domains. You can even build a URL shortener using AWS services. If you use a custom URL shortening domain, the domain name should be obviously related to your brand. During the 10DLC campaign registration process, you should provide examples of your shortened URLs when you provide your message templates. /pinpoint/faqs/;Can I use URL shorteners for 10DLC messages?; Yes. The Amazon SNS console currently doesn’t include a way to register 10DLC companies and campaigns. However, you can use the 10DLC phone numbers that you configured in the Amazon Pinpoint console to send messages using Amazon SNS. /pinpoint/faqs/;I already use Amazon SNS or Amazon Simple Email Service (SES). What do I gain by switching to Amazon Pinpoint?;In typical Amazon SNS and Amazon SES use cases, you have to set up your application to manage each message's audience, content, and delivery schedule. With Amazon Pinpoint, you can create message templates, delivery schedules, highly-targeted segments, and full campaigns. /pinpoint/faqs/;How does Amazon Pinpoint voice channel differ from Amazon Connect?;With Amazon Pinpoint voice, you can engage with your customers by delivering voice messages over the phone. Pinpoint voice gives customers a great way to deliver transactional messages—such as one-time passwords, appointment reminders, order confirmations, and more. With Pinpoint voice capabilities, you can convert a text script to lifelike speech, and then deliver the personalized voice message to your customer. Call metrics—such as number of calls completed and number of calls failed—help you to optimize future voice engagements. With both Pinpoint voice and SMS channels available to you, you can send SMS messages to customers who prefer text and deliver voice messages to those who are unable to receive SMS messages. With the addition of the voice channel, you can now use Amazon Pinpoint to seamlessly engage your customers with timely, relevant content through push notifications, email, SMS, and voice calls. /pinpoint/faqs/;Does Amazon Pinpoint store my customer data?;Yes. Amazon Pinpoint stores user, endpoint, and event data. We have to retain this data so that you can create segments, send messages to recipients, and capture application and campaign engagement data. /pinpoint/faqs/;Who can access the data stored in Amazon Pinpoint?;A very limited number of authorized AWS employees have access to the data stored in your Amazon Pinpoint account. Maintaining your trust is our highest priority. We use sophisticated physical and technical controls to help safeguard your privacy and the security of your data. Your data is encrypted at rest and during transit. Our processes are designed to prevent unauthorized access to or disclosure of your content. /pinpoint/faqs/;Do I own my content that is processed and stored by Amazon Pinpoint?;You always retain ownership of your content. We only use your content with your consent. /pinpoint/faqs/;How do I delete the data that Amazon Pinpoint stores?;You can selectively delete the data stored in your Amazon Pinpoint account. 
You can also close your entire AWS account, which deletes all of the data stored in Amazon Pinpoint and all other AWS services in every AWS Region. For more information, see Deleting Data from Amazon Pinpoint in the Amazon Pinpoint Developer Guide. /pinpoint/faqs/;I received spam or other unsolicited email messages from an Amazon Pinpoint user. How do I report these messages?;You can report email abuse by sending an email to email-abuse@amazon.com. /pinpoint/faqs/;How can I submit feature requests or send other product feedback about Amazon Pinpoint?;Your AWS Account Manager can send your feature requests and feedback directly to the appropriate team. If you don't currently have an AWS Account Manager, you can also provide your feedback on the Amazon Pinpoint forum. /pinpoint/faqs/;How can I get technical support for Amazon Pinpoint?;If you have an AWS Support plan, you can create a new support case directly from the web-based AWS management console. AWS Support plans begin at $29 per month. For more information about AWS Support plans, visit https://aws.amazon.com/premiumsupport/. /ses/faqs/;What is an easy way to test Amazon SES?;The Amazon SES sandbox is an area where new users can test the capabilities of Amazon SES. When your account is in the sandbox, you can only send email to verified identities. A verified identity is an email addresses or domain that you've proven that you own. /ses/faqs/;Can I start sending large email volumes right away?;When you're ready to start sending email to non-verified recipients, submit an Amazon SES Sending Limit Increase request through the AWS Support Center. For more information, see Moving Out of the Amazon SES Sandbox in the Amazon SES Developer Guide. /ses/faqs/;How can I track my Amazon SES usage?;You can view your charges for the current billing period at any time by visiting the Billing Dashboard in the AWS Management Console. /ses/faqs/;Can I send emails from any email address?;No. You can only use Amazon SES to send email from addresses or domains that you own. /ses/faqs/;Is there a limit on the size of emails Amazon SES can deliver?;Amazon SES v2 API and SMTP accepts email messages up to 40MB in size including any images and attachments that are part of the message. Messages larger than 10MB are subject to bandwidth throttling, and depending on your sending rate, you may be throttled to as low as 40MB/s. For example, you could send a 40MB message at the rate of 1 message per second, or two 20MB messages per second. /ses/faqs/;Are there any limits on how many emails I can send?;Every Amazon SES account has its own set of sending limits. These limits are: /ses/faqs/;Does Amazon SES support Sender Policy Framework (SPF)?;Yes, Amazon SES supports SPF. You may need to publish an SPF record, depending on how you use Amazon SES to send email. If you don't need to comply with Domain-based Message Authentication, Reporting and Conformance (DMARC) using SPF, you don't need to publish an SPF record, because by default, Amazon SES sends your emails from a MAIL FROM domain that's owned by Amazon Web Services. If you want to comply with DMARC using SPF, you have to set up Amazon SES to use your own MAIL FROM domain and publish an SPF record. /ses/faqs/;Does Amazon SES support Domain Keys Identified Mail (DKIM)?;Yes, Amazon SES supports DKIM. If you have enabled and configured Easy DKIM, Amazon SES signs outgoing messages using DKIM on your behalf. If you prefer, you can also sign your email manually. 
To ensure maximum deliverability, there are a few DKIM headers that you should not sign. For more information, see Manual DKIM Signing in Amazon SES in the Amazon SES Developer Guide. /ses/faqs/;Can emails from Amazon SES comply with DMARC?;With Amazon SES, your emails can comply with DMARC through SPF, DKIM, or both. /ses/faqs/;Can I specify a dedicated IP address when I send certain types of email?;"If you lease several dedicated IP addresses to use with your Amazon SES account, you can use the dedicated IP pools feature to create groups (pools) of those IP addresses. You can then associate each pool with a configuration set; when you send emails using that configuration set, those emails are only sent from the IP addresses in the associated pool. For more information, see Creating Dedicated IP Pools in the Amazon SES Developer Guide." /ses/faqs/;What is the difference between dedicated IP address (standard) and (managed)?;Both Dedicated IP options help you manage your sending reputation through reserved IP addresses. Dedicated IP Addresses (standard) requires you to manually setup and manage your IP addresses. Dedicated IP Addresses (managed) reduces the need for manual monitoring or scaling of dedicated IP pools. It also helps you to model warmup status more accurately and prevent over-sending which can impact deliverability. For more detailed benefit comparison, see Dedicated IP addresses in the Amazon SES Developer Guide. /ses/faqs/;Does Amazon SES provide an SMTP endpoint?;Amazon SES provides an SMTP interface for seamless integration with applications that can send email via SMTP. You can connect directly to this SMTP interface from your applications, or configure your existing email server to use this interface as an SMTP relay. /ses/faqs/;Can I use Amazon SES to send email from my existing applications?;Amazon SES allows you to create a private SMTP relay for use with any existing SMTP client software, including software that you develop yourself, or any third-party software that can send email using the SMTP protocol. /ses/faqs/;Can Amazon SES send emails with attachments?;Amazon SES supports many popular content formats, including documents, images, audio, and video. /ses/faqs/;Can I test Amazon SES responses without sending email to real recipients?;You can use the Amazon SES mailbox simulator to test your sending rate and to test your ability to handle events such as bounces and complaints, without sending email to actual recipients. Messages that you send to the mailbox simulator don't count against your bounce and complaint metrics or your daily sending quota. However, we do charge you for each message you send to the mailbox simulator, just as if they were messages you sent to actual customers. /ses/faqs/;How does Amazon SES help ensure reliable email delivery?;Amazon SES uses content filtering technologies to scan outgoing email messages. These content filters help ensure that the content being sent through Amazon SES meets the standards of ISPs. In order to help you further improve the deliverability of your emails, Amazon SES provides a feedback loop that includes bounce, complaint, and delivery notifications. /ses/faqs/;Does Amazon SES guarantee receipt of my emails?;Amazon SES closely monitors ISP guidelines to help ensure that legitimate, high-quality email is delivered reliably to recipient inboxes. However, neither Amazon SES nor any other email-sending service can guarantee delivery of every single email. 
ISPs can drop or lose email messages, recipients can accidentally provide the wrong email address, and if recipients do not wish to receive your email messages, ISPs may choose to reject or silently drop them. /ses/faqs/;How long does it take for emails sent using Amazon SES to arrive in recipients' inboxes?;Amazon SES attempts to deliver emails to the Internet within a few seconds of each request. However, due to a number of factors and the inherent uncertainties of the Internet, we can't predict with certainty when your email will arrive, nor can we predict the exact route the message will take to get to its destination. /ses/faqs/;Can my email deliverability affected by bounces or complaints that are caused by other Amazon SES users?;Typically, when other Amazon SES users send messages that result in bounces or complaints, your ability to send email remains unchanged. /ses/faqs/;Can Amazon access the emails that I send and receive?;We use in-house anti-spam technologies to filter messages that contain poor-quality content. Additionally, we scan all messages that contain attachments to check for viruses and other malicious content. /ses/faqs/;Can I encrypt email messages that I receive?;Amazon SES integrates with AWS Key Management Service (KMS), which provides the ability to encrypt the mail that it writes to your Amazon S3 bucket. Amazon SES uses client-side encryption to encrypt your mail before it sends the email to Amazon S3. This means that it is necessary for you to decrypt the content on your side after you retrieve the mail from Amazon S3. The AWS Java SDK and AWS Ruby SDK provide a client that is able to handle the decryption for you. /ses/faqs/;Does Amazon SES send email over an encrypted connection using Transport Layer Security (TLS)?;Amazon SES supports TLS 1.2, TLS 1.1 and TLS 1.0 for TLS connections. /ses/faqs/;How does Amazon SES ensure that incoming mail is free of spam and viruses?;Amazon SES uses a number of spam and virus protection measures. It uses block lists to prevent mail from known spammers from entering the system in the first place. It also performs virus scans on every incoming email that contains an attachment. Amazon SES makes its spam detection verdicts available to you, enabling you to decide if you trust each message. In addition to the spam and virus verdicts, Amazon SES provides the DKIM and SPF check results. /ses/faqs/;What prevents Amazon SES users from sending spam?;Amazon SES uses in-house content filtering technologies to scan email content for spam and malware. /ses/faqs/;How is Amazon SES different from Amazon SNS?;Amazon SES is for applications that need to send communications via email. Amazon SES supports custom email header fields, and many MIME types. /ses/faqs/;Do I need to sign up for Amazon EC2 or any other AWS services to use Amazon SES?;Amazon SES users do not need to sign up for any other AWS services. Any application with Internet access can use Amazon SES to deliver email, whether that application runs in your own data center, within Amazon EC2, or as a client software solution. /ses/faqs/;I received spam or other unsolicited email messages from an Amazon SES user. How do I report these messages?;You can report email abuse by sending an email to email-abuse@amazon.com. To help us handle the issue as quickly and effectively as possible, please include the full headers of the original email. For procedures for obtaining email headers for several common email clients, see How to Get Email Headers on the MxToolbox.com website. 
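For completeness, here is a minimal boto3 sketch that sends a test message through the Amazon SES v2 API to the mailbox simulator mentioned earlier; the sender address is a placeholder and must be a verified identity in your account.

```python
import boto3

ses = boto3.client("sesv2", region_name="us-east-1")

ses.send_email(
    FromEmailAddress="sender@example.com",  # placeholder; must be a verified identity
    Destination={"ToAddresses": ["success@simulator.amazonses.com"]},  # mailbox simulator address
    Content={
        "Simple": {
            "Subject": {"Data": "Test message"},
            "Body": {"Text": {"Data": "Hello from Amazon SES."}},
        }
    },
)
```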
/ses/faqs/;How can I submit feature requests or send other product feedback about Amazon SES?;Your AWS Account Manager can send your feature requests and feedback directly to the appropriate team. If you don't currently have an AWS Account Manager, you can also provide your feedback on the Amazon SES forum. /ses/faqs/;How can I get technical support for Amazon SES?;If you have an AWS Support plan, you can create a new support case directly from the web-based AWS management console. AWS Support plans begin at $29 per month. For more information about AWS Support plans, visit https://aws.amazon.com/premiumsupport/. /alexaforbusiness/faqs/;What is Alexa for Business?;Alexa for Business allows organizations of all sizes to introduce Alexa to their workplace. With Alexa for Business, you can use the Alexa you know as an intelligent assistant to stay organized and focus on the work that matters. Alexa helps workers be more productive as they move throughout their day – at home and at their desks as enrolled users with personal devices, and in meeting rooms, copy rooms, or other shared spaces with shared devices. Alexa for Business includes the tools and controls that administrators need to deploy and manage shared Alexa devices, skills, and users at scale. /alexaforbusiness/faqs/;How can I get started with Alexa for Business?;To start using Alexa for Business, you need one or more Alexa devices and an AWS account. Simply sign into the console and navigate to “Alexa for Business” under “Business Productivity”. /alexaforbusiness/faqs/;What are some example uses for Alexa in an organization?;With Alexa for Business, you can deploy Alexa devices: /alexaforbusiness/faqs/;What is the difference between shared devices and enrolled users using personal devices?;Shared devices are the Alexa-enabled devices that you deploy to shared spaces in your workplace, like meeting rooms, lobbies, or breakout rooms. Shared devices are not linked to any specific user, and anyone with physical access to a shared device can use it. Shared devices are managed and configured directly through the Alexa for Business console, where you can assign them to locations, manage settings and enable groups of skills. To simplify setting up shared devices, you can either import existing setup devices or use the Device Setup Tool provided by Alexa for Business. /alexaforbusiness/faqs/;What devices can I use with Alexa for Business?;You can use the following devices with Alexa for Business: /alexaforbusiness/faqs/;Where is Alexa for Business available today?;Alexa for Business is currently available in the US East (N. Virginia) Region and supports Alexa devices running anywhere in the US. Access or use of Alexa for Business or its features may be restricted or limited in countries where Alexa for Business is not currently offered. /alexaforbusiness/faqs/;How does Alexa for Business work with Alexa Skills Kit?;Using the Alexa Skills Kit, you can build your own skills. With Alexa for Business you can make these skills available to your shared devices and enrolled users without having to publish them to the Alexa Skills store. Alexa for Business also provides skills developers an API to build context-aware skills for use on shared devices. Alexa for Business supports any skill in the Alexa Skills store. /alexaforbusiness/faqs/;Does Alexa for Business provide a public API?;"Yes, public APIs are available for creating and managing users, rooms, room profiles, skill groups, and devices. 
APIs are available via the AWS CLI and SDK; you can learn more about the APIs in the documentation." /alexaforbusiness/faqs/;Does the Alexa for Business API log actions in AWS CloudTrail?;Yes. All Alexa for Business actions performed via the AWS CLI and SDK will be included in your CloudTrail audit logs. /alexaforbusiness/faqs/;How is Alexa for Business different from Amazon Lex?;Alexa for Business is intended to enable organizations to take advantage of Amazon’s voice-enabled assistant, Alexa. Alexa for Business provides Alexa capabilities that make workers more productive, while working alongside all of the other capabilities that Alexa has today like music, smart home controls, shopping, and thousands of third-party skills. /alexaforbusiness/faqs/;What is a shared device?;Alexa for Business lets you use shared Alexa devices in common areas around your workplace. Shared devices can be used by anyone; they are not associated with a personal Alexa account, and no one can use their personal skills with these devices. /alexaforbusiness/faqs/;Where can I deploy shared devices?;Shared devices can be deployed to any common area in your organization such as meeting rooms, lobbies, kitchens, break rooms, and copy rooms. Interactions with shared devices are not linked to a personal Alexa account, and provide your organization with Alexa’s built-in capabilities and third-party skills you choose to enable. /alexaforbusiness/faqs/;"What is a ""room profile""?";A room profile contains the settings for your Alexa device including wake word, address, time zone, and units of measurement. A room profile simplifies the process of creating and managing rooms. For example, you can create a room profile that contains the Alexa settings that apply to all rooms in the same building. /alexaforbusiness/faqs/;"How do I use ""skill groups""?";Skill groups are collections of Alexa skills you can use to enable skills on the devices in your rooms. For example, you can define a skill group with all the skills users will need in your meeting rooms. When you assign an Alexa device to a room, Alexa for Business automatically enables the skills in the skill groups assigned to the room. You can add skills to your skill groups at any time, and Alexa for Business will automatically enable them on all the Alexa devices in rooms that have been assigned that skill group. /alexaforbusiness/faqs/;What can users ask Alexa from a shared device?;Users can ask Alexa any of the same things they can ask on personal devices such as “Alexa, what time is it?” or “Alexa, what is the capital of Washington state?” If allowed by the administrator, users can also make outbound calls by asking “Alexa, call 206-555-1212”. Users can also access any skill which you have enabled on the shared device, including private skills. /alexaforbusiness/faqs/;Does Alexa on shared devices provide personal responses to users?;No. Interactions with Alexa on a shared device are not linked to any personal Alexa account. Users cannot make phone calls to personal contacts, access their personal calendar, or interact with any personal skills linked to their personal Alexa account. /alexaforbusiness/faqs/;Do shared devices support timers, alarms, and lists?;Yes, users can use a shared device to set a timer or alarm or add items to a list. In addition, Alexa for Business provides a capability that allows you to reset a shared device from the console or using the AWS SDK.
You can reset a device to clear timers, alarms, shopping list, to-do list, the history of Bluetooth connections, and set the volume level back to 50%. /alexaforbusiness/faqs/;How can I use Alexa in my meeting rooms?;Your employees can use Alexa for Business in the meeting room to control conferencing equipment. /alexaforbusiness/faqs/;Which meeting room equipment works with Alexa for Business?;Alexa for Business can control most popular video conferencing and in-room systems including Polycom Group Series, Cisco TelePresence systems, Cisco Webex Room Kit, Crestron 3-Series Control Systems, and Zoom Rooms. Alexa for Business is also built-in to Polycom Trio 8500 and 8800. In addition, the Alexa for Business conference device APIs allow you to build skills so that Alexa can work with additional equipment or perform specific tasks in your meeting rooms. To learn how to enable your conferencing equipment, please see our documentation. /alexaforbusiness/faqs/;How does Alexa for Business know what meeting to join?;You can connect Alexa for Business to Google G-Suite, Office365, or Microsoft Exchange calendars. Alexa for Business utilizes this calendar integration to look up the dial-in information of the scheduled meeting. Alexa for Business can look up meeting dial-in information from the most used conferencing providers including Amazon Chime, Cisco WebEx, Fuze, Google Meet, Zoom, BlueJeans, and Skype for Business. If there is no scheduled meeting, or Alexa cannot determine the dial-in information, users will be prompted for the meeting ID and optional PIN for the default conferencing provider that you specified in the Alexa for Business console. /alexaforbusiness/faqs/;What can users ask Alexa in meeting rooms?;Users can say “Alexa, join my meeting” to start their meeting or “Alexa, end the meeting” to end the meeting. Users can also make calls, as well as access any Alexa skills which have been enabled for the Alexa devices in the meeting room. /alexaforbusiness/faqs/;Can I delete events from a room’s calendar with Alexa?;No, users cannot delete a meeting from a room's calendar with Alexa. /alexaforbusiness/faqs/;How can users make phone calls using Alexa?;"There are two ways users can make phone calls using Alexa. First, they can ask Alexa to call a contact from the address book set up by their administrator. For example, a user can say “Alexa, call IT”. Second, they can ask Alexa to call a specific phone number by speaking the numbers during the request. For example, a user can say ""Alexa, call 212 555 1212""" /alexaforbusiness/faqs/;Can I create address books to simplify calling from my shared devices?;Yes, you can create address books in the Alexa for Business console by clicking on the Create Address Book link in the Calls tab. Address books can contain frequently used contacts, such as the IT helpdesk, facilities, or the building reception. When an address book is associated with a shared Echo device, users can initiate a call from the device to a contact in the address book by speaking the contact name. For example, a user trying to reach the IT helpdesk could say “Alexa, call IT”. /alexaforbusiness/faqs/;Can I create different address books for different shared devices?;"Yes, you can create multiple address books and have a unique list of contacts in each of them. This lets you use different numbers for the same contact when used in different contexts. 
For example, you might have a unique phone number for the IT helpdesk in each building; creating a unique address book for each building makes it possible for users to reach the right IT helpdesk." /alexaforbusiness/faqs/;How do I enable outbound calling for my shared devices?;Outbound calling is enabled by default for shared devices, and you can start making calls straight away. You can disable outbound calling by changing the setting in your room profiles in the Alexa for Business console. /alexaforbusiness/faqs/;What phone number shows up when making calls via Echo devices?;When making calls via an Echo device, the phone number shows up as an unknown number. For third-party conferencing devices that have Alexa built-in such as Polycom Trio, the phone number associated with the device will show up as caller ID. /alexaforbusiness/faqs/;How much does it cost to make calls from Echo devices?;Making outbound PSTN calls from your Echo devices is free of charge. /alexaforbusiness/faqs/;Can I make international calls from Echo devices?;No, users are only allowed to call most local and toll-free US numbers. International calls, premium rate numbers, N-1-1 numbers, abbreviated dial codes, and dialing-by-letters are not supported. /alexaforbusiness/faqs/;What is an enrolled user?;Enrolled users are users who have linked their personal Alexa account with your Alexa for Business account. This allows them to use their personal Alexa devices for work, at their desks or in their homes. You can invite users to join your Alexa for Business account; they can then join using the Alexa for Business enrollment portal. Once they’ve enrolled, you can enable calendar access and make your private skills available to them. Enrolled users can access these on any of the devices in their personal Alexa account. Enrolled users can also use shared devices in your organization, but they can only access the skills available on those devices. /alexaforbusiness/faqs/;How do I invite a user to join my Alexa for Business organization?;You can send your users an invitation to join your organization via the Alexa for Business console. You can use the console to customize the content of the invitation e-mail that users will receive. The e-mail contains an enrollment URL where your users can log in with the Amazon account they use to manage their Alexa devices. Once this is completed, users have access to the Alexa for Business resources you have enabled for them, including their Microsoft Exchange calendar, and your private skills. Users will also be able to auto-dial into conference calls from their Alexa devices, based on the default conferencing provider you configured for your organization in the Alexa for Business console. /alexaforbusiness/faqs/;What if a user does not already have an Amazon account, or doesn't use Alexa?;Your users do not need to be existing users of Alexa to use Alexa for Business. They can use their existing Amazon.com account to enroll with Alexa for Business or create a new Amazon.com account if they do not already have one. Once enrolled, users who are new to Alexa can install the Alexa mobile app for Android or iOS to customize Alexa’s settings for their personal devices. /alexaforbusiness/faqs/;Should my users use a different Amazon account from the one they use at home?;Users may choose to use any Amazon account they wish to enroll in your Alexa for Business organization.
We recommend they use the same account that they use at home so that they can access Alexa for Business capabilities whether they are at home, on the go, or at the office. /alexaforbusiness/faqs/;What Alexa devices are supported for enrolled users?;Enrolled users can use any type of Alexa-enabled device. However, some features, such as dialing into conference calls, are only available from compatible Amazon Echo devices (Echo Dot, Echo, Echo Plus, Echo Show). /alexaforbusiness/faqs/;What can users do with Alexa after enrolling with your Alexa for Business account?;Users can continue to ask Alexa the same things they asked before they enrolled with your organization. With Alexa for Business, enrolled users get access to additional skills and features, such as asking Alexa to join their scheduled meetings. Users will also be able to access any private skills you choose to make available to your organization. /alexaforbusiness/faqs/;As an administrator, what access and control do I have to my enrolled users’ personal Alexa accounts?;You do not have any access or control over your users’ personal Alexa accounts, including Alexa capabilities and skills. You cannot see or delete utterances from personal devices used by enrolled users. As an administrator, you can enable your private skills for your users to access, and you can require that users use voice profiles to access their calendars. /alexaforbusiness/faqs/;Can a user link their Alexa account with multiple organizations?;Yes. Users can link their personal Alexa account to more than one Alexa for Business organization. /alexaforbusiness/faqs/;Can I help users self-enroll so that I don’t need to send them an invitation e-mail?;Yes. You can set up a self enrollment process within your organization and use the Alexa for Business SDK to automatically trigger an invitation e-mail to be sent to users. You can also choose to publish an internal LDAP connected web portal that authenticates users and verifies access before generating an invitation e-mail. /alexaforbusiness/faqs/;Can I remove users from my Alexa for Business account?;Yes. You can remove users from your Alexa for Business account using the Alexa for Business console. Removing a user will revoke access to all Alexa for Business features and your private skills. /alexaforbusiness/faqs/;Do I need my company’s IT department to do anything to enable Alexa for Business Work Updates?;As long as your company uses Office 365 or GSuite, no IT work is needed to set up Alexa for Business Work Updates. Please note, however, that your company may block third party apps and you might not be able to use Work Updates. If this happens, reach out to your IT department for clarifications or authorization. /alexaforbusiness/faqs/;What business calendar systems are supported?;You can link your Alexa account to calendars in Google G-Suite, Microsoft Office365, and Microsoft Exchange 2013 (or later). /alexaforbusiness/faqs/;How does a user link their personal Alexa account to their Microsoft Exchange 2013 (or later) account?;After a user is successfully enrolled with Alexa for Business, they can link their Microsoft Exchange account. To link a Microsoft Exchange account to Alexa, open the Alexa app, select Settings, and then select Calendar. Choose Microsoft Exchange and select Link account. /alexaforbusiness/faqs/;How can users restrict their calendar from being accessed?;Once a calendar is linked to Alexa, users can create a voice profile and restrict the calendar to only their voice. 
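The self-enrollment answer above mentions using the Alexa for Business SDK to trigger invitation e-mails automatically. A minimal, hedged sketch of that idea with boto3 follows; the name and e-mail address are placeholders, and the exact required fields should be confirmed against the Alexa for Business API reference.

    import boto3
    import uuid

    a4b = boto3.client("alexaforbusiness")

    # Create a user record in the Alexa for Business organization (placeholder values).
    user = a4b.create_user(
        UserId=str(uuid.uuid4()),
        FirstName="Jane",
        LastName="Doe",
        Email="jane.doe@example.com",
    )

    # Send the enrollment invitation e-mail to that user.
    a4b.send_invitation(UserArn=user["UserArn"])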
/alexaforbusiness/faqs/;How can I manage my calendar with Alexa?;Alexa helps you find one-on-one meeting time with your contacts. It offers suggested times that you and your contact are available to meet. Alexa also makes sure that you don't double-book your meeting by looking at all calendars you have linked with Alexa. Now, you are less likely to book a meeting with a coworker at the same time as your daughter's soccer game or your dentist appointment. /alexaforbusiness/faqs/;How can a new Alexa user get started with the Alexa Smart Scheduling Assistant?;"Link a Google Gmail, Google G Suite, Microsoft Exchange 2013 (or later), or Microsoft Office 365 calendar with Alexa. For more information about how to link your calendar, see Connect Your Calendar to Alexa. Add work or personal contacts to your Alexa app. For more information about how to add your contacts, see Add and Edit Your Contacts to the Alexa App. Have access to your contacts’ calendar availability information. After the calendars and contacts are set up, Alexa will be aware of meetings on your calendar when asking to join a meeting." /alexaforbusiness/faqs/;How can an existing Alexa user get started with the Alexa Smart Scheduling Assistant?;If you have a linked Microsoft Office 365 calendar, you may need to relink your calendar. Go to your Alexa Companion App | Settings | Calendar | Microsoft and click Unlink this Microsoft account. Then click on Link this Microsoft account and follow the prompts as defined in Calendar linking flow. /alexaforbusiness/faqs/;What are the new permissions requested from Microsoft Office 365?;Alexa Smart Scheduling Assistant requires the following permissions /alexaforbusiness/faqs/;Can I choose to decline the new permissions?;If you are a new customer, you will be required to provide the permissions when linking your calendar for the first time. If you are an existing customer who wants Alexa to get availability information when scheduling meetings, you will have to unlink and relink your calendar. If you choose not to relink, you will not be able to get the additional functionality. /alexaforbusiness/faqs/;What calendars are supported by the Alexa Smart Scheduling Assistant?;The Alexa Smart Scheduling Assistant supports Google G-Suite, Microsoft Office 365, and Microsoft Exchange 2013 (or later) calendars. It is available to Alexa for Business and Alexa consumers in the US. /alexaforbusiness/faqs/;How do I build a private skill for my organization?;You build private skills much like you build a public skill, by using the Alexa Skills Kit. When your skill is ready, you can mark the skill as private, submit the skill, and then distribute it to your Alexa for Business account. Please refer to the Alexa Skills Kit for more information. /alexaforbusiness/faqs/;Do my private skills need to pass certification before I can distribute them to my AWS account?;No. Private skills are not subject to certification. As a result, you should only enable private skills that you developed or that come from trusted developers. /alexaforbusiness/faqs/;How do I make a private skill available on my shared devices?;You make a private skill available to your shared devices by adding the skill to a skill group and then adding the skill group to your rooms. Alexa for Business automatically enables the skills for the Alexa devices assigned to your rooms.
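The skill-group workflow described above can also be driven through the SDK. Here is a hedged boto3 sketch that creates a skill group, adds a private skill to it, and assigns it to a room; the skill ID and room ARN are placeholders, not real identifiers.

    import boto3

    a4b = boto3.client("alexaforbusiness")

    # Create a skill group and attach a private skill to it.
    group = a4b.create_skill_group(SkillGroupName="Meeting Room Skills")
    a4b.associate_skill_with_skill_group(
        SkillGroupArn=group["SkillGroupArn"],
        SkillId="amzn1.ask.skill.11111111-2222-3333-4444-555555555555",  # placeholder
    )

    # Assign the skill group to a room; devices in that room get the skills enabled automatically.
    a4b.associate_skill_group_with_room(
        SkillGroupArn=group["SkillGroupArn"],
        RoomArn="arn:aws:a4b:us-east-1:111122223333:room/example",  # placeholder
    )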
/alexaforbusiness/faqs/;How do I make a private skill available to my users?;Once you have published a private skill, you can navigate to the Skills section on the Alexa for Business Console. Find the skill in the Private Skills tab and check the box under the column “Available for users”. This will enable the skill for all users in the organization. /alexaforbusiness/faqs/;How can my users access the private skill?;Your users can view and manage private skills from the Alexa app on their phone by going to the menu, selecting Skills, and then selecting Your Skills (at the top of the screen). /alexaforbusiness/faqs/;How do Amazon Echo devices recognize the wake word?;Amazon Echo devices use on-device keyword spotting to detect the wake word. When these devices detect the wake word, they stream audio to the cloud, including a fraction of a second of audio before the wake word. /alexaforbusiness/faqs/;Can I turn off the microphone on Echo devices?;Yes, you can turn off the microphone by pushing the microphone on/off button on the top of your device. When the microphone on/off button turns red (on the Echo Show there is a red LED), the microphone is off. The device will not respond to the wake word until you reactivate the microphone by pushing the microphone on/off button again. An organization cannot turn on a device’s microphones via the Alexa for Business Console if the device’s microphones have been turned off. /alexaforbusiness/faqs/;How do I know when an Echo device is streaming my voice to the Cloud?;When an Echo device detects the wake word, the light ring around the top of your device turns blue, to indicate that the device is streaming audio to the Cloud (for Echo Show and Echo Spot, you will see a blue bar or ring on the screen). When you use the wake word, the audio stream includes a fraction of a second of audio before the wake word. The audio stream closes once your question or request has been processed. /alexaforbusiness/faqs/;What can an organization tell their users about the user’s information when using a corporate skill on an enrolled account or using a device managed by the organization?;You can tell them that the organization has no access to information about how they use a personal device, outside of when they interact with corporate skills. The organization may receive engagement metrics (device and skill usage metrics) for shared devices. In either case, the organization has no access to any voice recordings. Voice recordings from shared devices being managed by Alexa for Business can be deleted from the Alexa for Business management console or by voice. If a user has enrolled their personal account, they can view and delete individual voice recordings associated with their account using the Alexa companion app, or all recordings by visiting Manage Your Content and Devices. More information on Alexa can be found here: Alexa Device FAQs. /alexaforbusiness/faqs/;When an organization manages shared devices using Alexa for Business, what information does that organization have access to?;The organization can see and control which skills are enabled on a shared device, the room where it’s assigned, and the settings applied to the device.
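As a rough illustration of that device-level visibility, the hedged boto3 sketch below lists shared devices with their status and assigned room. The response field names reflect my reading of the SearchDevices API and may differ slightly between SDK versions.

    import boto3

    a4b = boto3.client("alexaforbusiness")

    # List shared devices in the organization and show where each one is assigned.
    resp = a4b.search_devices()
    for device in resp["Devices"]:
        print(device.get("DeviceName"), device.get("DeviceStatus"), device.get("RoomName"))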
/alexaforbusiness/faqs/;When an organization manages shared devices using Alexa for Business, does the organization have access to voice recordings made by users of the shared device?;No, unlike with a personal Alexa-enabled device where a user can review their voice recordings in the Alexa companion app, Alexa for Business organizations cannot access any voice recordings or text transcripts of what a user said. In addition, the organization doesn’t see Alexa’s responses to users’ queries. /alexaforbusiness/faqs/;What data do skill developers for Alexa for Business have access to?;Skill developers receive the information about their skill and its usage that is made available to skill developers in the Alexa Skills Kit developer portal. They also have access to certain information about shared devices via the Alexa for Business API. /alexaforbusiness/faqs/;What controls do organizations have over personal accounts that they let enroll and join their Alexa for Business account?;Organizations can control which of their users can enroll and join their personal account to the organization’s Alexa for Business account. In addition, they can require a user to create a voice profile to access corporate resources like calendars. /alexaforbusiness/faqs/;What information does an organization receive about its users' Amazon accounts when users enroll their personal account with the organization's Alexa for Business account?;The organization does not have any access to the user’s personal Amazon account. The organization does not receive the name or email that the personal account uses. As with shared devices, the organization has no access to the voice recordings on a personal device and cannot delete them. /alexaforbusiness/faqs/;Are voice inputs processed by Alexa for Business stored, and how are they used by Alexa for Business?;Alexa for Business may store and use voice inputs processed by the service solely to provide and maintain the service and to improve and develop the quality of Alexa for Business and other Amazon machine learning and artificial intelligence services. Use of your content is necessary for continuous improvement of your Alexa for Business customer experience, including the development and training of related technologies. We do not use any personally identifiable information that may be contained in your content to target products, services, or marketing to you or your end users. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. /alexaforbusiness/faqs/;How can voice recordings be deleted?;An individual using a shared device can delete their voice recordings by saying either “Alexa, forget what I just said” or “Alexa, forget what I said today.” The organization can also delete voice recordings for shared devices they manage in one of two ways: via the Alexa for Business console or via a programmatic API call. The organization does not have any access to these voice recordings, other than the ability to delete them. Personal device users can view and delete specific voice recordings associated with their accounts by going to History in Settings in the Alexa app, drilling down for a specific entry, and then tapping on the delete button.
Or, personal device users can delete all voice recordings associated with their accounts for each of their Alexa-enabled products by selecting the applicable product at Manage Your Content and Devices. Deleting voice recordings may degrade your Alexa for Business experience. /alexaforbusiness/faqs/;Who has access to my content that is processed and stored by Alexa for Business?;Only authorized employees will have access to your content that is processed by Alexa for Business. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. /alexaforbusiness/faqs/;Do I still own my content that is processed and stored by Alexa for Business?;You always retain ownership of your content and we will only use your content with your consent. /alexaforbusiness/faqs/;Is the content processed by Alexa for Business moved outside the AWS region where I am using Alexa for Business?;Any content processed by Alexa for Business is encrypted and stored at rest in the AWS region where you are using Alexa for Business. Some portion of content processed by Alexa for Business may be stored in another AWS region solely in connection with the continuous improvement and development of your Alexa for Business customer experience and other Amazon machine learning and artificial intelligence services. Your trust, privacy, and the security of your content are our highest priority and we implement appropriate and sophisticated technical and physical controls, including encryption at rest and in transit, designed to prevent unauthorized access to, or disclosure of, your content and ensure that our use complies with our commitments to you. /alexaforbusiness/faqs/;Can customers manage how Amazon uses voice recordings for machine learning?;Voice recordings are used to train our speech recognition and natural language understanding systems using machine learning. By default, a very small percentage of these recordings are manually reviewed in order to improve this process. Customers can now designate, at a room profile level, if voice recordings from shared devices they manage will be manually reviewed and used to improve machine learning algorithms. Using a new preference, Data Use Policy, located in the Room Profile, customers can either allow (default) or disallow manual reviews of voice recordings which are used to improve Amazon’s services. /alexaforbusiness/faqs/;What support is provided for Alexa for Business?;Depending on your AWS support contract, Alexa for Business is supported under Developer Support, Business Support and Enterprise Support plans. /alexaforbusiness/faqs/;How much does Alexa for Business cost?;Please see Alexa for Business Pricing for the latest information. /alexaforbusiness/faqs/;Does Alexa for Business offer a Free Tier?;Currently there is no Free Tier for Alexa for Business. /workmail/faqs/;What is Amazon WorkMail?;Amazon WorkMail is a secure, managed business email and calendar service with support for existing desktop and mobile clients. Amazon WorkMail gives users the ability to seamlessly access their email, contacts, and calendars using Microsoft Outlook, their web browser, or their native iOS and Android email applications. 
You can integrate Amazon WorkMail with your existing corporate directory and control both the keys that encrypt your data and the location in which your data is stored. /workmail/faqs/;How can I get started using Amazon WorkMail?;To get started with Amazon WorkMail, you will need an AWS account. You can use this account to sign in to the AWS Management Console and create an organization, add your domains, and also create users, groups, or resources. Please refer to the Amazon WorkMail documentation for more information on getting started. /workmail/faqs/;What clients can I use to access Amazon WorkMail?;You can access Amazon WorkMail from Microsoft Outlook clients on Windows and Mac OS X, and on mobile devices that support the Microsoft Exchange ActiveSync protocol including iPhone, iPad, Kindle Fire, Fire Phone, Android, Windows Phone, and BlackBerry 10. Additionally, you can use the Apple Mail application on Mac OS X, or securely access Amazon WorkMail from your web browser using the Amazon WorkMail web application. /workmail/faqs/;Does Amazon WorkMail support accessibility capabilities?;"Yes, you can use screen readers and keyboard shortcuts with the Amazon WorkMail web application for easier accessibility; you can learn more about these capabilities on the Working with Accessibility Features documentation page here. In addition, the accessibility capabilities offered in supported desktop and mobile clients (see below for a list) can also be used with Amazon WorkMail." /workmail/faqs/;What is the mailbox storage limit in Amazon WorkMail?;Amazon WorkMail offers a mailbox storage limit of 50 GB per user. /workmail/faqs/;What is the maximum size of email that I can send from Amazon WorkMail?;The maximum size of outgoing and incoming email in Amazon WorkMail is 25 MB. /workmail/faqs/;Can I share my calendar with other users in my organization?;Yes. Amazon WorkMail offers the ability to share your calendar with your co-workers. /workmail/faqs/;Does Amazon WorkMail provide resource booking?;Yes. Amazon WorkMail provides the option to create resource mailboxes such as conference rooms, projectors, and other equipment. The resource mailboxes will allow users to reserve the room or equipment by including the resource in meeting invites. /workmail/faqs/;Does Amazon WorkMail support email archiving?;Email journaling can be enabled to capture and preserve messages in your existing archiving solution. /workmail/faqs/;Can I set up email redirect rules on Amazon WorkMail?;Yes, you can configure email redirection rules for Amazon WorkMail mailboxes. You can set up email redirection rules in your desktop email application, such as Microsoft Outlook, or using the Amazon WorkMail web application. You will need to ensure that the Amazon Simple Email Service (Amazon SES) identity policies for your domains are up-to-date to take advantage of email redirection rules. Please visit this page for more information on how to update the Amazon SES identity policy for your domain. /workmail/faqs/;Are there limits on the number of organizations and users I can create when using Amazon WorkMail?;No, there are no limits on the number of organizations and users you can create. /workmail/faqs/;Are there limits on the number of messages I can send per user?;"There are limits only on sending external messages, for example, messages sent to recipients outside your organization.
Each user in your organization can send messages to a maximum of 10,000 external recipients per day, and the total external recipients for an AWS account is limited to 100,000 per day. New Amazon WorkMail accounts may start with limits that are lower than the limits described here; please see AWS Service Limits for more information." /workmail/faqs/;Are there limits associated with the use of the Amazon WorkMail SMTP gateway?;Yes. To learn more about SMTP limits, please see AWS Service Limits. /workmail/faqs/;Are there limits on the number of messages each user can receive?;There are no limits on the number of messages each user can receive. However, we may queue or reject messages (and send a bounce to the sender) if there is a large volume of incoming email in a short period of time. Please see AWS Service Limits for more information. /workmail/faqs/;Do meeting requests count when evaluating usage against message limits?;All messages that are sent to another user are considered when evaluating these limits. These include e-mails, meeting requests, meeting responses, task requests, as well as all messages that are forwarded or redirected automatically as a result of a rule. /workmail/faqs/;Does Amazon WorkMail support public folders?;No, WorkMail does not offer public folders. /workmail/faqs/;What features does the Amazon WorkMail web application provide?;The Amazon WorkMail web application provides users anywhere with access to email, calendar, contacts, and tasks. Users can also access shared calendars, access the global address book, manage their out-of-office replies, and book resources. /workmail/faqs/;Which browsers does the Amazon WorkMail web application work on?;The Amazon WorkMail web application supports the following browsers: Firefox, Chrome, Safari and Edge. For more information, please see Log On to the Amazon WorkMail Web Application. /workmail/faqs/;In which languages is the Amazon WorkMail web application available?;The Amazon WorkMail web application is currently available in English, French, and Russian. /workmail/faqs/;Can I use Amazon WorkMail on my mobile device?;Yes. Amazon WorkMail is compatible with most major mobile devices supporting the Microsoft Exchange ActiveSync protocol, including iPad, iPhone, Kindle Fire, Fire Phone, Android, Windows Phone, and BlackBerry 10. /workmail/faqs/;What mobile device policies does Amazon WorkMail support?;Amazon WorkMail gives you the ability to require a PIN or password on your devices, configure the password strength, require a device lock after a number of failed login attempts, require a screen lock for idle timeouts, and require device and storage card encryption. /workmail/faqs/;Does Amazon WorkMail offer the ability to remotely wipe mobile devices?;Yes. Amazon WorkMail offers a remote wipe feature. A remote wipe can be performed by the IT administrator using the AWS Management Console. /workmail/faqs/;Can I use Amazon WorkMail with Microsoft Outlook on Microsoft Windows?;Yes. Amazon WorkMail offers native support for Microsoft Outlook 2007, 2010, 2013, and 2016 on Microsoft Windows. /workmail/faqs/;Do I need any additional software to connect Microsoft Outlook to Amazon WorkMail?;No. Amazon WorkMail offers native support for the most recent versions of Microsoft Outlook and does not require any additional software to connect Microsoft Outlook. /workmail/faqs/;Can I use Amazon WorkMail with Microsoft Outlook on Mac OS X?;Yes. 
Amazon WorkMail offers native support for Microsoft Outlook 2011 and Microsoft Outlook 2016 on Mac OS X. /workmail/faqs/;Can I use Amazon WorkMail with other clients on Mac OS X?;Yes. Amazon WorkMail offers native support for the Apple Mail and Calendar applications on Mac OS X (10.6 and above). /workmail/faqs/;Does the Amazon WorkMail user subscription include a license for Microsoft Outlook?;Amazon WorkMail does not include a license for Microsoft Outlook. To use Microsoft Outlook with Amazon WorkMail, you must have a valid license from Microsoft. /workmail/faqs/;Does Amazon WorkMail support the Click-to-run version of Microsoft Outlook 2010, 2013, and 2016?;Yes. Amazon WorkMail supports the Click-to-run versions of Microsoft Outlook 2010, 2013, and 2016. /workmail/faqs/;Can I access my Amazon WorkMail mailbox with my existing POP3 or IMAP client applications?;You can access your Amazon WorkMail mailbox with client applications that support the IMAP protocol. Amazon WorkMail currently does not offer support for POP3 email access. /workmail/faqs/;When using an IMAP client application, can I access all items in my Amazon WorkMail mailbox?;The IMAP protocol provides access to email, but not to calendar items, contacts, notes, or tasks. /workmail/faqs/;When using an IMAP client application, will I be able to see all my email folders?;Yes, any folder which contains email will be visible and accessible using an IMAP client application. /workmail/faqs/;How do I send email when using an IMAP email client application?;You can send email by configuring your IMAP email client to use the Amazon WorkMail SMTP gateway. Amazon WorkMail SMTP addresses can be found at AWS Regions and Endpoints. /workmail/faqs/;What is the Amazon WorkMail SMTP Gateway?;The Simple Mail Transfer Protocol (SMTP) gateway is an Amazon WorkMail service which allows you to submit email messages for delivery to both internal and external recipients. To learn more, please see Connect your Client IMAP Application. /workmail/faqs/;What email client applications can I use to send email using the Amazon WorkMail SMTP gateway?;You can use the Amazon WorkMail SMTP gateway to send email using any email client that supports the SMTP protocol. This includes popular email clients like Microsoft Outlook, Apple Mail or Mozilla Thunderbird. /workmail/faqs/;Do I need to set up a directory to use Amazon WorkMail?;"Each user you add to your Amazon WorkMail organization needs to exist in a directory, but you do not have to provision a directory yourself. You can integrate your existing Microsoft Active Directory with Amazon WorkMail using AWS Directory Service AD Connector or run AWS Directory Service for Microsoft Active Directory Enterprise Edition (""Microsoft AD"") so you don’t have to manage users in two places and users can continue to use their existing Microsoft Active Directory credentials. Alternatively, you can have Amazon WorkMail create and manage a Simple AD directory for you and have users in that directory created when you add them to your Amazon WorkMail organization." /workmail/faqs/;How can I integrate with an existing Microsoft Active Directory?;You can integrate with an existing Microsoft Active Directory by setting up an AWS Directory Service AD Connector or Microsoft AD and enabling Amazon WorkMail for this directory. 
After you've configured this integration, you can choose which users you would like to enable for Amazon WorkMail from a list of users in your existing directory, and users can log in to Amazon WorkMail using their existing Active Directory credentials. /workmail/faqs/;Can I use my existing domain name with Amazon WorkMail?;Yes. You can add your existing domain name to Amazon WorkMail using the AWS Management Console. Before the domain name can be used, you must verify the ownership of the domain name. You can verify the ownership by adding a DNS record to your DNS server. /workmail/faqs/;Can I assign multiple email addresses to a user account?;Yes. You can assign multiple email addresses to a user account using the AWS Management Console. /workmail/faqs/;Can I create distribution groups to deliver email to multiple users?;Yes. You can create a new distribution group or enable an existing group from your Microsoft Active Directory using the AWS Management Console. These distribution groups are available in the Global Address Book. Users can also create personal distribution groups using Microsoft Outlook or the Amazon WorkMail web application. /workmail/faqs/;What happens if a user forgets their password to access Amazon WorkMail?;If Amazon WorkMail is integrated with an existing Active Directory domain, then the user would follow the existing lost password process for your existing domain, such as contacting an internal helpdesk. If the account is integrated with a Simple AD directory and a user forgets their password, then the account’s IT administrator can reset the password from the AWS Management Console. /workmail/faqs/;How does an IT administrator remove a user’s access to Amazon WorkMail?;The account’s IT administrator can remove a user’s access to Amazon WorkMail using the AWS Management Console. /workmail/faqs/;Does Amazon WorkMail provide a management API?;No. Amazon WorkMail does not currently provide a management API. /workmail/faqs/;Does Amazon WorkMail offer an SDK?;Yes. Amazon WorkMail provides an administrative SDK so you can natively integrate WorkMail with your existing services. The SDK enables programmatic user, email group, and meeting room or equipment resource management through API calls. This means your existing IT service management tools, workflows, and third party applications can automate WorkMail migration and management. To learn more, please visit our API reference. /workmail/faqs/;How can I start using email journaling?;Email journaling can be set up from the Amazon WorkMail Management Console under Organization Settings. You can enable email journaling, specify the email address to which journaled emails are sent, and specify the email address to which reports are sent. /workmail/faqs/;Can I apply email journaling to a specific set of actions or users?;No. Today email journaling is a global setting that is applied to all inbound and outbound email, and all users. /workmail/faqs/;Does email journaling apply to recipients in the blind carbon copy (BCC) field?;Yes. Email sent using BCC recipients is recorded using email journaling. /workmail/faqs/;Will journaling reports show email recipients in the BCC field?;For outbound email, journaling reports will contain the details of recipients in the BCC field. For inbound email, the journaling report will only contain details of recipients in the BCC field if those recipients are in your Amazon WorkMail organization. /workmail/faqs/;Will emails marked as spam be journaled?;Yes, they will.
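To make the administrative SDK mentioned above more concrete, here is a minimal, hedged boto3 sketch that creates a WorkMail user and enables (registers) a mailbox for them; the organization ID, user name, password, and e-mail address are placeholders.

    import boto3

    workmail = boto3.client("workmail")

    org_id = "m-1234567890abcdef"  # placeholder organization ID

    # Create the user, then register them to WorkMail to enable the mailbox.
    user = workmail.create_user(
        OrganizationId=org_id,
        Name="jane.doe",
        DisplayName="Jane Doe",
        Password="S0me-Temp0rary-Passw0rd!",  # placeholder; rotate or integrate with your directory
    )
    workmail.register_to_work_mail(
        OrganizationId=org_id,
        EntityId=user["UserId"],
        Email="jane.doe@example.com",
    )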
/workmail/faqs/;Will emails marked as containing viruses be journaled?;No. Emails that contain viruses will be dropped and will not be journaled. /workmail/faqs/;What actions will be taken in case of delivery failures to the journaling destination mailbox?;Amazon WorkMail will continue to try to deliver the journaled messages to the journaling destination mailbox for 12 hours. In case of continuous failure, the failure reports will be delivered to the address you specify in the Amazon WorkMail Management Console. /workmail/faqs/;What do journaling failed delivery reports contain?;Whenever journaled email fails to be delivered to the primary journaling address, a report is sent to the failed delivery report email address you specify in the Amazon WorkMail Management Console. This report contains information about each journaled message that failed to be delivered, but does not show the contents of the original message. /workmail/faqs/;What is the email address from which journaled emails are sent?;Journaled emails are sent from amazonjournaling@<organization>.awsapps.com, where <organization> is your Amazon WorkMail organization name. /workmail/faqs/;Is there an additional cost to using email journaling?;No, there is no additional cost to using email journaling. /workmail/faqs/;Which SMTP headers will identify a journaled message by the journaling agent?;“X-WM-Journal-Report” will be used as the header to identify journaled messages. This header will be signed so that it cannot be mimicked. /workmail/faqs/;Do journaling messages count against the sending limits?;No, journaling messages are always sent as long as the user is allowed to send a message. They are not counted against that user’s sending limit. When a message is received, the journaling message is always sent as long as the original message can be delivered to a user. /workmail/faqs/;How can I migrate mailboxes from my existing email solution to Amazon WorkMail?;You can migrate your existing mailboxes to Amazon WorkMail using solutions from a preferred Amazon WorkMail migration provider. To see a list of providers, please visit this webpage. If you’re migrating from Microsoft Exchange Server 2013 or 2010, you can set up interoperability to minimize disruption for your end users. /workmail/faqs/;Does Amazon WorkMail support interoperability with Microsoft Exchange Server?;Yes, Amazon WorkMail supports interoperability with Microsoft Exchange Server 2013 and 2010. You can learn about how to set up interoperability here. /workmail/faqs/;What interoperability capabilities does Amazon WorkMail support?;Interoperability allows you to use the same corporate domain for all mailboxes on both Microsoft Exchange and Amazon WorkMail. Your users can seamlessly schedule meetings with bi-directional sharing of calendar free-busy information between the two environments, and access user and resource information through a unified global address book. /workmail/faqs/;Which versions of Microsoft Exchange Server are supported with Amazon WorkMail interoperability?;Amazon WorkMail offers interoperability support with Microsoft Exchange Server 2013 and 2010. /workmail/faqs/;Are there additional charges to use interoperability features?;No. Interoperability features are included in Amazon WorkMail per-mailbox pricing. /workmail/faqs/;Can users access Amazon WorkMail using their existing Microsoft Active Directory credentials?;Yes, users can connect to Amazon WorkMail using their existing Microsoft Active Directory credentials.
/workmail/faqs/;Will mailboxes on Amazon WorkMail use the same domain as mailboxes on my Microsoft Exchange server?;Yes. To make this possible, you need to enable email routing between Microsoft Exchange and Amazon WorkMail so that mailboxes in both environments use the same corporate domain. To set up email routing, you can follow the steps outlined here. /workmail/faqs/;Which email platform handles incoming email traffic when interoperability is established?;Your on-premises Microsoft Exchange Server handles and processes all incoming email. If you’re using interoperability for migration, you can switch your MX record to point to Amazon WorkMail when your migration is complete. /workmail/faqs/;Can I restrict access to my Microsoft Exchange Server to just my VPC?;No, you can’t restrict access to the Exchange Server to your VPC. As of now, the EWS endpoint of your on-premises Microsoft Exchange environment needs to be publicly available. /workmail/faqs/;Does Amazon WorkMail support bi-directional sharing of calendar free-busy information with Microsoft Exchange?;Yes, interoperability provides bi-directional sharing of calendar free-busy information between your Amazon WorkMail and Microsoft Exchange environments. Please follow the steps here. /workmail/faqs/;How does Amazon WorkMail interact with my on-premises Microsoft Exchange Server to perform bi-directional calendar free-busy lookups?;You will need to configure availability settings on Amazon WorkMail and Microsoft Exchange to share calendar free-busy information. Amazon WorkMail uses the EWS URL for your Microsoft Exchange server to perform free-busy lookups. Amazon WorkMail uses an Exchange service account to log in to Exchange and read free-busy data of the users in the Microsoft Exchange organization. /workmail/faqs/;Do I need to set up federation on my on-premises Microsoft Exchange server?;No, for interoperability support with Amazon WorkMail, you don’t need to set up federation on your Microsoft Exchange server. /workmail/faqs/;Can I also view subject and location in the free-busy details when interoperability is enabled?;Yes, to view subject and location information, the service account user needs to have access to this information. /workmail/faqs/;How does Amazon WorkMail interact with my on-premises Microsoft Exchange Server to create a unified global address book?;Once interoperability support is enabled, Amazon WorkMail performs a synchronization of the address book with your on-premises Active Directory every four hours, using AD Connector. All Microsoft Exchange users, groups, and resources are automatically added to your Amazon WorkMail address book. /workmail/faqs/;Will all Microsoft Exchange Server objects synchronize to the Amazon WorkMail global address book?;Amazon WorkMail will synchronize users, groups, resources, and contacts that reside in Microsoft Exchange Server. Amazon WorkMail will not synchronize dynamic groups or address lists. When your Microsoft Exchange global address book contains these objects, they won't be available in Amazon WorkMail. /workmail/faqs/;Will Amazon WorkMail still synchronize with my Active Directory when interoperability support isn’t enabled?;Yes, Amazon WorkMail will still synchronize with your Active Directory when interoperability support is disabled. In this scenario, only changes to Amazon WorkMail users and groups are synchronized.
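The EWS-based free-busy setup described above is normally configured in the console, but recent WorkMail SDK releases also expose availability-configuration APIs. The sketch below is a hedged example under that assumption; the endpoint, service-account credentials, and IDs are placeholders, and the console procedure in the documentation remains the authoritative reference.

    import boto3

    workmail = boto3.client("workmail")

    # Point WorkMail at the publicly reachable EWS endpoint of the Exchange organization
    # so it can perform free-busy lookups with a service account (placeholder values).
    workmail.create_availability_configuration(
        OrganizationId="m-1234567890abcdef",
        DomainName="example.com",
        EwsProvider={
            "EwsEndpoint": "https://mail.example.com/EWS/Exchange.asmx",
            "EwsUsername": "svc-freebusy@example.com",
            "EwsPassword": "example-password",
        },
    )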
/workmail/faqs/;Does the Microsoft Outlook offline address book also contain all my Microsoft Exchange users, groups, and resources?;Yes, the Microsoft Outlook offline address book will contain both Amazon WorkMail and Microsoft Exchange users, groups, and resources. /workmail/faqs/;Can my distribution groups contain both Amazon WorkMail and Microsoft Exchange users as members?;Yes, you can have both Amazon WorkMail and Microsoft Exchange users as members of distribution groups. /workmail/faqs/;Can I still create new resources in Amazon WorkMail when interoperability support is enabled?;No. To create new resources in Amazon WorkMail, you first need to disable interoperability support. Once your new resources have been created, you can then turn interoperability support back on. This is done to ensure resources are synchronized back to your Microsoft Exchange Server. /workmail/faqs/;What are email flow rules?;Amazon WorkMail allows you to use email flow rules to filter, update, or route email traffic for your Amazon WorkMail organizations. On inbound emails, this can help you reduce email from unwanted senders, route suspicious mail to junk folders, and trigger AWS Lambda functions. On outbound emails, you can block sending to certain domains, route mail through custom SMTP endpoints, or trigger Lambda functions. Email flow rules can be applied based on specific email addresses or entire email domains. /workmail/faqs/;What types of email data are passed to the Lambda function?;The Lambda function will receive the message id, sender, recipient, and subject of an email. /workmail/faqs/;Can I retrieve more information about an email message from within my Lambda?;Yes, you can retrieve the full content of the email message using WorkMailMessageFlow’s SDKs. See the Admin Guide for more information. /workmail/faqs/;What format does the email content come in when retrieving it from my Lambda?;The WorkMailMessageFlow SDK will return the raw MIME content of the message that is being processed. You can use common MIME-processing libraries, such as JavaMail for Java or email.parser for Python, to convert this to a structured format for easier parsing. /workmail/faqs/;Can I update the content of an email message using a Lambda function?;Yes, you can update the content of an email message, before it is sent out or delivered, using WorkMailMessageFlow’s SDKs in your Lambda function. For the changes to take effect, the Lambda action of your mail flow rules should be configured to run your Lambda synchronously. See Updating message content with AWS Lambda for more information. /workmail/faqs/;How can I start using email flow rules?;Rules can be set up from the Amazon WorkMail management console by navigating to Organization Settings. You can create, modify, and delete flow rules under the Email Flow Rules tab. /workmail/faqs/;Can I perform filtering based on IP address or range?;IP-based filtering is already supported by Amazon Simple Email Service. Please see Creating IP Address Filters for Amazon SES Email Receiving to learn more about IP-based filtering. /workmail/faqs/;What happens if email containing a virus is received from a source specified to bypass spam checks?;Amazon WorkMail scans all incoming and outgoing email for spam, malware, and viruses. All email containing viruses is dropped and not delivered, regardless of the configured flow rules.
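To make the Lambda-based flow rules above concrete, here is a hedged sketch of a synchronous flow-rule Lambda that fetches the raw MIME content of the message with the WorkMailMessageFlow SDK. The incoming event fields and the response format follow my reading of the WorkMail documentation and should be verified there before use.

    import boto3
    import email

    flow = boto3.client("workmailmessageflow")

    def handler(event, context):
        # The flow rule invocation passes the message ID (along with sender, recipients,
        # and subject); field names here are assumptions based on the documented event.
        message_id = event["messageId"]

        # Fetch the full raw MIME content of the message being processed.
        raw = flow.get_raw_message_content(messageId=message_id)["messageContent"].read()
        msg = email.message_from_bytes(raw)
        print("Subject:", msg.get("Subject"))

        # Return a verdict; DEFAULT lets WorkMail continue normal processing for all recipients.
        return {"actions": [{"allRecipients": True, "action": {"type": "DEFAULT"}}]}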
/workmail/faqs/;What happens if email flow rules overlap?;If you have email for which multiple email flow rules match, the action of the most specific rule will be applied. For example, a rule for a specific email address will take precedence over a rule for an entire domain. If multiple rules have the same specificity, the most restrictive action will be applied (for example, Drop will take precedence over Bounce). Please see Managing Email Flows for more information. /workmail/faqs/;How can I test email flow rules before applying them to real emails?;You can create a rule with a single email address as the Sender domains or addresses condition, and choose the action you want to use. You can then send test emails to or from the single address you chose to confirm that the rule is behaving as you expect. Once satisfied with the result, you can extend the rule for other Sender domains or addresses. /workmail/faqs/;Are there limits on the number of rules I can create?;Yes. To learn more about limits related to email flow rules, please see AWS Service Limits. /workmail/faqs/;How long does it take for a rule to take effect?;Rules take effect immediately after creation. /workmail/faqs/;Is there any additional charge for defining email flow rules?;No, there is no additional charge for using email flow rules. However, if you use a Lambda action, Lambda execution charges are billed separately. /workmail/faqs/;How is data transmitted to Amazon WorkMail?;All data in transit is encrypted using industry-standard SSL. Our web application and our mobile and desktop clients transmit data to Amazon WorkMail using SSL. /workmail/faqs/;Can I choose the AWS region where my data is stored?;Yes. You choose the AWS region where your organization’s data is stored. Please refer to the Regional Products and Services page for details of Amazon WorkMail availability by region. /workmail/faqs/;How do I decide which AWS region to use?;There are several factors to consider, based on your needs, including whether using a specific AWS region enables you to meet regulatory and compliance requirements. We generally recommend that you set up your Amazon WorkMail organization in the region nearest to where most of your users are located, to reduce data access latencies. /workmail/faqs/;How is Amazon WorkMail protected from malware/viruses?;Amazon WorkMail scans all incoming and outgoing email for spam, malware, and viruses to help protect customers from malicious email. /workmail/faqs/;Does Amazon WorkMail offer support for mobile device policies, to protect data stored on mobile devices?;Yes. Amazon WorkMail gives you the ability to require a PIN or password on your users’ devices, configure the password strength, require a device lock after a number of failed login attempts, require a screen lock for idle timeouts, and require device and storage card encryption. /workmail/faqs/;How can I manage the encryption key used for data encryption in Amazon WorkMail?;Amazon WorkMail is integrated with AWS Key Management Service for the encryption of your data. Key management can be performed from the AWS IAM console. For more information about AWS Key Management Service, please see the AWS Key Management Service Developer Guide. /workmail/faqs/;What data is encrypted with my encryption keys?;All email content, attachments, and metadata for a mailbox are encrypted using the customer-managed keys of that user’s organization. /workmail/faqs/;Is my email encrypted when using the IMAP protocol to access my Amazon WorkMail mailbox?;Yes.
All email communication is encrypted in transit by the secure connections made between the client and the server, and all email stored in Amazon WorkMail is encrypted at rest. /workmail/faqs/;Does Amazon WorkMail support S/MIME for signing and encrypting email?;Yes. Amazon WorkMail supports S/MIME signing and encryption in the Microsoft Outlook client and on certain mobile devices like Apple iPhone and iPad. The Amazon WorkMail web application currently does not support S/MIME signing and encryption. /workmail/faqs/;What compliance certifications does Amazon WorkMail support?;Amazon Web Services has achieved the ISO 27001, ISO 27017, and ISO 27018 certifications. Amazon WorkMail regions in US East (N. Virginia), US West (Oregon), and EU (Ireland) are within the scope of the certifications. You can learn more about these certifications on the AWS Cloud Compliance section of the website. You can also request a copy of the Service Organization Controls (SOC) report available from AWS Compliance to learn more about the security controls AWS uses to protect your data. /workmail/faqs/;How does AWS use my Amazon WorkMail email content?;You own your content in Amazon WorkMail, and you retain full ownership and control of your Amazon WorkMail email. We will not view, use, or move the contents of your Amazon WorkMail account unless authorized by you. /workmail/faqs/;How does Amazon WorkMail integrate with Amazon WorkDocs?;Amazon WorkDocs integration offers users the ability to distribute large documents easily from the Amazon WorkMail web application, keep control of sensitive documents distributed by email, and securely save email attachments in Amazon WorkDocs. /workmail/faqs/;How can I start using the Amazon WorkDocs integration?;To use the integration with Amazon WorkDocs, your organization first needs to be activated for Amazon WorkDocs. You can activate Amazon WorkDocs for your organization in the AWS Management Console. After this is done, you can enable Amazon WorkDocs for your users using the Amazon WorkDocs admin panel. After your users are enabled for Amazon WorkDocs, they can start using the Amazon WorkDocs integration in the Amazon WorkMail web application. If your organization and users are already using Amazon WorkDocs, your users can start using the integration right after they are enabled for Amazon WorkMail. /workmail/faqs/;Can I use Amazon WorkMail without using Amazon WorkDocs?;Yes; however, you will not be able to use the Amazon WorkDocs integration in the Amazon WorkMail web application. /workmail/faqs/;How does Amazon WorkMail integrate with Amazon Simple Email Service?;Amazon WorkMail uses Amazon Simple Email Service to send all outgoing email. The test mail domain and your production domains are available for management in the Amazon Simple Email Service console. /workmail/faqs/;Will I be charged for outgoing email sent from Amazon WorkMail?;No. You won’t be charged for outgoing email sent from Amazon WorkMail. /workmail/faqs/;Do I need to increase Amazon SES sending limits to use Amazon WorkMail?;No. This is not needed for Amazon WorkMail. The SES sending limits only apply when you use the Amazon SES API to send bulk email from your AWS account. /workmail/faqs/;Does Amazon WorkMail integrate with AWS CloudTrail?;Yes. CloudTrail captures API calls from the WorkMail console or from WorkMail or WorkMailMessageFlow API operations.
Using the information collected by CloudTrail, you can track requests made to WorkMail, the source IP address from which the requests were made, who made the requests, when they were made, and so on. To learn more about CloudTrail, including how to configure and enable it, see the AWS CloudTrail User Guide. To learn more about logging WorkMail API calls, see Logging Amazon WorkMail API Calls with AWS CloudTrail. /workmail/faqs/;Will I be charged for using AWS CloudTrail with Amazon WorkMail?;There is no additional WorkMail charge to use WorkMail with CloudTrail. There may be charges associated with delivering events using CloudTrail. For details, please see CloudTrail Pricing. /workmail/faqs/;Does WorkMail offer email metrics?;Yes, WorkMail logs metrics for emails sent, received, and bounced free of charge in CloudWatch metrics. /workmail/faqs/;Does WorkMail offer message tracking?;Yes, WorkMail offers the option to enable WorkMail Monitoring in CloudWatch Logs. When activating logging, you can define the CloudWatch log group to log into, as well as the log retention period. WorkMail will then log detailed information for messages received and sent, when rules are applied, when message journaling is initiated, and for bounce messages. /workmail/faqs/;What data is logged in WorkMail Monitoring?;If logging is activated, WorkMail logs envelope data such as sender and recipients. Message bodies are not logged. /workmail/faqs/;How can I run queries on messages?;CloudWatch Logs Insights allows for fast and easy querying of CloudWatch logs. /workmail/faqs/;How will my business be charged for use of Amazon WorkMail?;"There are no upfront fees or commitments to begin using Amazon WorkMail. At the end of the month, you are billed for that month's usage. You can view estimated charges for the current billing period by logging into the AWS Management Console and clicking on ""Account Activity."" You can get started with a free trial of Amazon WorkMail and activate up to 25 user accounts at no charge for the first 30 days. You can use the WorkMail console to get started today." /workmail/faqs/;Is there a free trial for Amazon WorkMail?;Yes. You can activate up to 25 users at no charge for the first 30 days after you sign up for Amazon WorkMail. After this period ends, you are charged for all active users unless you remove them or deregister your Amazon WorkMail account. /workmail/faqs/;Will I be charged for creating or using resources (such as meeting rooms)?;No. Creating and using resources within Amazon WorkMail is free of charge. /workmail/faqs/;Is there an additional charge for using IMAP client applications?;No. IMAP access is included in the Amazon WorkMail mailbox pricing. /workmail/faqs/;Are Amazon WorkMail groups billable?;Amazon WorkMail does not charge for groups separately, and customers can create multiple groups. Only enabled users are charged. /workspaces/faqs/;What is Amazon WorkSpaces?;Amazon WorkSpaces is a managed, secure cloud desktop service. You can use Amazon WorkSpaces to provision either Windows, Amazon Linux, or Ubuntu Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises Virtual Desktop Infrastructure (VDI) solutions. 
Amazon WorkSpaces helps you eliminate the complexity in managing inventory, OS versions and patches, and VDI, which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device. /workspaces/faqs/;What is an Amazon WorkSpace?;An Amazon WorkSpace is a cloud-based virtual desktop that can act as a replacement for a traditional desktop. A WorkSpace is available as a bundle of operating system, compute resources, storage space, and software applications that allow a user to perform day-to-day tasks just like using a traditional desktop. /workspaces/faqs/;How do I connect to my Amazon WorkSpace?;A user can connect to a WorkSpace from any supported device using the free Amazon WorkSpaces client application on supported devices, including Windows and Mac computers, iPads, Android tablets, Android-compatible Chrome OS devices, or using Chrome or Firefox web browsers. Users will connect using credentials set up by an administrator or using their existing Active Directory credentials if you’ve chosen to integrate your Amazon WorkSpaces with an existing Active Directory domain. Once the user is connected to a WorkSpace, they can perform all the usual tasks they would do on a desktop computer. /workspaces/faqs/;How can I get started with Amazon WorkSpaces?;To get started with Amazon WorkSpaces, you will need an AWS account. You can use this account to sign in to the AWS Management Console and you can then quickly provision Amazon WorkSpaces for yourself and any other users in your organization. To provision an Amazon WorkSpace, first select a user from your directory. Next, select an Amazon WorkSpaces bundle for the user. The Amazon WorkSpaces bundle specifies the resources you need, which desktop operating system you want to run, how much storage you want to use, and the software applications you want prepackaged. Finally, choose a running mode for their Amazon WorkSpace – pick AlwaysOn if you want to use monthly billing, or AutoStop if you want to use hourly billing. Once your WorkSpace is provisioned, the user will receive an email with instructions for connecting to their WorkSpace. You can use this same process to provision multiple WorkSpaces at the same time. /workspaces/faqs/;Which Amazon WorkSpaces bundles are available?;You can find the latest information on Amazon WorkSpaces bundles here. /workspaces/faqs/;Which streaming protocols are supported by Amazon WorkSpaces?;Amazon WorkSpaces supports two protocols, PCoIP and WorkSpaces Streaming Protocol (WSP). The protocol that you choose depends on several factors, such as the type of devices your users will be accessing their WorkSpaces from, which operating system is on your WorkSpaces, what network conditions your users will be facing, and whether your users require unique features available to specific protocols, such as bidirectional video or smartcard support with WSP. Visit Protocols for Amazon WorkSpaces in the Amazon WorkSpaces Administration Guide to learn more. /workspaces/faqs/;Which operating systems are available for use with Amazon WorkSpaces?;Amazon WorkSpaces offers Amazon Linux WorkSpaces built on Amazon Linux 2 LTS, Ubuntu WorkSpaces built on Ubuntu Desktop 22.04 LTS, or Windows 10 desktop experiences. You can choose if your Windows 10 desktop experience is powered by Windows Server 2016 or Windows Server 2019. 
If your organization is eligible to bring its own Windows Desktop licenses, you can run the Windows 10 or Windows 11 Enterprise operating system on your Amazon WorkSpaces. /workspaces/faqs/;What are the root and user volumes mapped to for Amazon Linux WorkSpaces, Ubuntu WorkSpaces, and Amazon WorkSpaces with Windows?;For Amazon Linux WorkSpaces and Ubuntu WorkSpaces, the root volume is mapped to /, and the user volume is mapped to /home. For Amazon WorkSpaces with Windows, the root volume is mapped to the C: drive, and the user volume is mapped to the D: drive. /workspaces/faqs/;Can I migrate users from an Amazon WorkSpaces Windows 7 bundle to a Windows 10 bundle?;Yes. WorkSpaces migrate enables WorkSpaces migration to a new bundle or compute type with the user volume data preserved. You can perform migrate operations to move your users to the Windows 10 Desktop experience. To get started, go to the Amazon WorkSpaces console, select the WorkSpace, click “Action > Migrate WorkSpaces”, then select a target bundle with the Windows 10 desktop experience. /workspaces/faqs/;How does a user get started with their Amazon WorkSpace once it has been provisioned?;When Amazon WorkSpaces are provisioned, users receive an email providing instructions on where to download the WorkSpaces clients they need, and how to connect to their WorkSpace. If you are not integrating with an existing Active Directory, the user will have the ability to set a password the first time they attempt to connect to their WorkSpace. If the AWS Directory Services AD Connector has been used to integrate with an existing Active Directory domain, users will use their regular Active Directory credentials to log in. /workspaces/faqs/;What does a user need to use an Amazon WorkSpace?;A user needs to have an Amazon WorkSpace provisioned for them, and a broadband Internet connection. To use an Amazon WorkSpaces client application to access their WorkSpace, they will need a supported client device (PC, Mac, Linux, iPad, Android tablet, or Android-compatible Chrome OS device), and an Internet connection with TCP port 443 open, along with TCP and UDP port 4172 open for PCoIP or TCP and UDP port 4195 open for WSP. /workspaces/faqs/;Once users connect to their Amazon WorkSpace can they personalize it with their favorite settings?;An administrator can control what a user can personalize in their WorkSpace. By default, users can personalize their WorkSpaces with their favorite settings for items such as wallpaper, icons, shortcuts, etc. These settings will be saved and persist until a user changes them. If an administrator wishes to lock down a WorkSpace using tools like Group Policy for Windows, this will restrict a user’s ability to personalize their WorkSpaces. /workspaces/faqs/;Can users install applications on their Amazon WorkSpace?;By default, users are configured as local administrators of their WorkSpaces. Administrators can change this setting and can restrict users’ ability to install applications with a technology such as Group Policy. /workspaces/faqs/;Are Amazon WorkSpaces persistent?;Yes. Each WorkSpace runs on an individual instance for the user it is assigned to. Applications and users’ documents and settings are persistent. /workspaces/faqs/;Do users need an AWS account?;No. An AWS account is only needed to provision WorkSpaces. To connect to WorkSpaces, users will require only the information provided in the invitation email they will receive when their WorkSpace is provisioned. 
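For reference, the bundle migration described above can also be scripted. The following is a minimal sketch using the AWS SDK for Python (boto3); the WorkSpace ID, target bundle ID, and Region are hypothetical placeholders.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Migrate an existing WorkSpace to a target bundle (for example, one with
    # the Windows 10 desktop experience). Both IDs below are placeholders.
    response = workspaces.migrate_workspace(
        SourceWorkspaceId="ws-xxxxxxxxx",
        BundleId="wsb-xxxxxxxxx",
    )
    print("New WorkSpace ID:", response["TargetWorkspaceId"])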
/workspaces/faqs/;If I am located a significant distance from the region where my Amazon WorkSpace is located, will I have a good user experience?;If you are located more than 2000 miles from the regions where Amazon WorkSpaces is currently available, you can still use the service, but your experience may be less responsive. The easiest way to check performance is to use the Amazon WorkSpaces Connection Health Check Website. You can also refer to the Regional Products and Services page for details of Amazon WorkSpaces service availability by region. /workspaces/faqs/;Does Amazon WorkSpaces offer a set of public APIs?;"Yes, public APIs are available for creating and managing Amazon WorkSpaces programmatically. APIs are available via the AWS CLI and SDK; you can learn more about the APIs in the documentation." /workspaces/faqs/;Do the Amazon WorkSpaces APIs log actions in AWS CloudTrail?;Yes. Actions on Amazon WorkSpaces performed via the WorkSpaces APIs will be included in your CloudTrail audit logs. /workspaces/faqs/;Is there Resource Permission support with the Amazon WorkSpaces APIs?;Yes. You can specify which Amazon WorkSpaces resources users can perform actions on. For details see the documentation. /workspaces/faqs/;Do I need to use the AWS Management Console to get started with Amazon WorkSpaces?;To get started with Amazon WorkSpaces, you will need to register a directory with the WorkSpaces service. You can use AWS Management Console or Amazon WorkSpaces APIs to register a directory with the WorkSpaces service and then create and manage WorkSpaces. /workspaces/faqs/;Can I deploy my WorkSpaces in the AWS GovCloud (US) Regions?;Yes. You can deploy WorkSpaces in the AWS GovCloud (US West) region to meet US federal, state, and local government requirements. Go here for details on the AWS GovCloud (US) Regions. /workspaces/faqs/;Can I get help to learn more about and onboard to Amazon WorkSpaces?;Yes, Amazon WorkSpaces specialists are available to answer questions and provide support. Contact Us and you’ll hear back from us in one business day to discuss how AWS can help your organization. /workspaces/faqs/;What applications are available with Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces come with a curated selection of applications at no additional cost that include LibreOffice, Firefox Web Browser, Evolution mail, Pidgin IM, GIMP, and other desktop utilities and tools. You can always add more software from the Amazon Linux repositories using yum. To install an available package from the Amazon Linux repositories, simply type “yum install [package-name]”. You can also add software from RPM-based public and private Linux repositories at any time. /workspaces/faqs/;How do I launch an Amazon WorkSpace from a custom image?;To launch an Amazon WorkSpace from a custom image, you will first need to pair the custom image with a hardware type you want that WorkSpace to use, which results in a bundle. You can then publish this bundle through the console, then select the bundle when launching new WorkSpaces. /workspaces/faqs/;What is the difference between a bundle and an image?;An image contains only the OS, software and settings. A bundle is a combination of both that image and the hardware from which a WorkSpace can be launched. /workspaces/faqs/;How many custom images can I create?;As an administrator, you can create as many custom images as you need. Amazon WorkSpaces sets default limits, but you can request an increase in these limits here. 
To see the default limits for Amazon WorkSpaces, please visit our documentation. /workspaces/faqs/;Can I update the image in an existing bundle?;Yes. You can update an existing bundle with a new image that contains the same tier of software (for example, containing the Plus software) as the original image. /workspaces/faqs/;Can I copy my Amazon WorkSpaces Images to other AWS Regions?;Yes, you can use the WorkSpaces console, APIs, or CLI to copy your WorkSpaces Images to other AWS Regions where WorkSpaces is available. Log on to the WorkSpaces console and navigate to the “Images” section from the left hand navigation menu. Simply select the image you would like to copy, click on the “Actions” button and select the “Copy Image” option to get started. /workspaces/faqs/;How can I tell if the Image I copied is available for me to use?;As soon as you initiate a copy operation, you will be provided a unique identifier for the new Image being created as a copy of the original one. You can use that identifier to look up the status of that Image in the destination Region through the WorkSpaces console, APIs, or CLI. /workspaces/faqs/;Can I cancel a pending Image copy operation?;Once initiated, you cannot cancel a pending Image copy operation. You can delete the Image in the destination Region if the Image is not required. /workspaces/faqs/;Are there any data transfer fees for copying Images?;No. There are no additional fees for copying Images across Regions. Maximum Image limits for your account in destination AWS Region will still apply. Once you reach this limit you will not be able to copy more Images. /workspaces/faqs/;Can I bulk copy multiple Images to another Region?;You can copy Images one by one to another AWS Region. You can use CopyWorkspaceImage API to programmatically copy Images. /workspaces/faqs/;Can I copy a BYOL Image to another AWS Region?;Yes. You can copy a BYOL WorkSpace Image to another AWS Region if the destination AWS Region is enabled for BYOL. /workspaces/faqs/;Can I copy an Image to the same Region?;Yes. You can use the copy Image operation to make a copy of the WorkSpaces Image in the same Region. /workspaces/faqs/;What type of Amazon Elastic Block Store (EBS) volumes does Amazon WorkSpaces offer?;All Amazon WorkSpaces launched after January 31, 2017, are built on general purpose solid-state drives (SSD) EBS volumes for both root and user volumes. Amazon WorkSpaces launched prior to January 31, 2017, are configured with EBS magnetic volumes. You can switch your Amazon WorkSpaces using magnetic EBS volumes to SSD EBS volumes by rebuilding them (more information can be found here). You can learn more about SSD EBS volumes here, and magnetic EBS volumes here. /workspaces/faqs/;Can I use custom images to launch WorkSpaces with SSD volumes, even if they were created using WorkSpaces with magnetic EBS volumes?;Yes. You can use your custom images to launch WorkSpaces with SSD EBS volumes, even if they were created using WorkSpaces with magnetic EBS volumes. /workspaces/faqs/;Do I need to provide an AMI build using WorkSpaces with SSD EBS volumes when using my own Windows desktop licenses (BYOL)?;No. You can use the AMIs you built as part of the BYOL process without any additional changes. /workspaces/faqs/;How do I deploy applications to my users?;You have flexibility in how you deploy the right set of applications to users. First, you choose which image type to build from, either basic or Plus, which determines the default applications that will be in the WorkSpaces. 
Second, you can install additional software on a WorkSpace and create a custom image which can be used to launch more WorkSpaces. For more detail see the bundle documentation. /workspaces/faqs/;Which software can I install on an Amazon WorkSpace?;For Amazon Linux, any application available in the Amazon Linux repositories is compatible and can be installed using yum install [package-name]. /workspaces/faqs/;Can I increase the size of my Amazon WorkSpaces storage volumes?;Yes. You can increase the size of the root and user volumes attached to your WorkSpaces at any time. When you launch new WorkSpaces, you can select bundled storage configurations for root and user volumes, or choose your preferred storage size greater than the provided storage configurations. For storage configurations with 80 GB Root volume, you can choose 10 GB, 50 GB, or 100 GB for User volume. You can use storage configurations with 175 GB to 2000 GB Root volume along with 100 GB to 2000 GB User volume. Please note that you need to set the Root volume to 175 GB in order to expand the User volume in the range of 100 GB to 1000 GB. After your WorkSpaces have been launched, you can only increase the size of the volumes using the above configurations to up to 2000 GB for each Root and User volume. /workspaces/faqs/;Can I decrease the size of storage volumes?;No. To ensure that your data is preserved, the volume sizes of either volume cannot be reduced after a WorkSpace is launched. You can launch a Value, Standard, Performance, Power, or PowerPro WorkSpace with a minimum of 80 GB for the root volume and 10 GB for the user volume. You can launch a GPU-enabled WorkSpace with a minimum of 100 GB for the root volume and 100 GB for the user volume. For more information about configurable storage, see Modifying WorkSpaces. /workspaces/faqs/;How do I change the size of my Amazon WorkSpaces storage volumes?;You can change the size of your storage volumes via the Amazon WorkSpaces management console, or through the Amazon WorkSpaces API. /workspaces/faqs/;Is the storage configuration for a WorkSpace preserved when I rebuild it?;Yes, each rebuild preserves the storage allocation size for WorkSpaces when using default bundles. If a WorkSpace has its volumes extended, and is rebuilt, the larger volume sizes will be preserved, even if the bundle's drive sizes are smaller. /workspaces/faqs/;Is the storage configuration for a WorkSpace preserved when I restore it?;Yes, each restore preserves your existing storage allocation size when using WorkSpaces default bundles. For example, restoring a WorkSpace with 80GB Root and 100GB User volumes will result in a rebuilt WorkSpace with 80GB Root and 100GB User. /workspaces/faqs/;What data can I retain after a WorkSpaces migrate?;All data in the latest snapshot of the original user volume will be retained. For a Windows WorkSpace, the D drive data captured by the latest snapshot will be retained after migration and the C drive will be newly created from the target bundle image. In addition, migrate attempts to move data from the old user profile to the new one. Data that cannot be moved to the new profile will be preserved in a .notMigrated folder. For more information, please refer to the documentation. /workspaces/faqs/;Can I move an existing WorkSpace from a public bundle to a custom bundle?;Yes. The WorkSpaces migrate function allows you to replace your WorkSpace’s root volume with a base image from another bundle. 
Migrate will recreate the WorkSpace using a new root volume from the target bundle image, and the user volume from the latest original user volume snapshot. For detailed information about migrate, please refer to the documentation. /workspaces/faqs/;What’s the difference between migrate and rebuild?;WorkSpaces Migrate allows you to switch to a new bundle and have your user profile regenerated. Rebuild just refreshes your WorkSpace with a root volume generated from the base image of the original bundle. /workspaces/faqs/;What happens if I rebuild my WorkSpace after migrate?;Migrate associates your WorkSpace with a new bundle, and a rebuild after migration will use the newly associated bundle to generate the root volume. /workspaces/faqs/;Can I expand Amazon WorkSpaces magnetic storage volumes?;No, configurable storage volumes are only available when using solid state drives (SSD). Any WorkSpaces launched before February 2017 might still use magnetic storage volumes. To switch from magnetic to SSD drives, rebuild your WorkSpaces. /workspaces/faqs/;How do custom images affect my root volume size?;The root volume size of WorkSpaces launched from a custom image is, by default, the same size as the custom image. For example, if your custom image has a root volume of 100 GB, all WorkSpaces launched from that image also have a root volume size of 100 GB. You can increase your root volume size when you launch your WorkSpace, or any time after that. /workspaces/faqs/;Can I change my Amazon WorkSpaces bundle without performing WorkSpaces migrate?;Yes. You can switch between Value, Standard, Performance, Power, or PowerPro bundles by using the Amazon WorkSpaces management console or the WorkSpaces API. When you switch hardware bundles, your WorkSpaces restart immediately. When they resume, your operating system, applications, data, and allocated storage on both the root and user volumes are all preserved. /workspaces/faqs/;How can I track my storage and bundle switch requests?;You can use AWS CloudTrail to track the changes that you have requested. /workspaces/faqs/;I currently bring my own Windows licenses. Can I expand my storage volumes and switch my WorkSpaces bundles?;Yes. You can take advantage of both these features even if you bring your own Windows desktop licenses. By default, you can switch WorkSpaces bundles for up to 20% of the total number of your WorkSpaces in a week. To switch more than 20% of your WorkSpaces, contact us. /workspaces/faqs/;Does a WorkSpace running in AutoStop mode need to be running to apply a change to the bundle type?;No. When you make a change, we start a WorkSpace that isn’t running, apply the bundle change, restart it so that the changes take effect, and then stop it again. For example, you change the bundle type on a stopped Standard (2vCPU, 4 GiB) WorkSpace to Performance. We start your Standard WorkSpace, apply the bundle change, and restart it. Following the restart, your WorkSpace has Performance hardware (2vCPU, 7.5 GiB). /workspaces/faqs/;How do I get charged if I change storage size or hardware bundle during a month?;For either change, you get charged the monthly price for AlwaysOn or the monthly fee for AutoStop WorkSpaces prorated on a per-day basis. /workspaces/faqs/;How often can I increase volume sizes or change the hardware bundle of a WorkSpace?;You can increase volume sizes or change a WorkSpace to a larger hardware bundle once in a 6-hour period. You can also change to a smaller hardware bundle once in a 30-day period. 
For a newly launched WorkSpace, you must wait 6 hours before requesting a larger bundle. /workspaces/faqs/;Does Amazon WorkSpaces offer GPU-enabled cloud desktops?;Yes. Amazon WorkSpaces offers the Graphics, GraphicsPro, Graphics.g4dn, and GraphicsPro.g4dn bundles. A Graphics bundle is for general purpose graphics applications such as CAD/CAM software, commercial and industrial modeling, prototyping, and mainstream graphics development. /workspaces/faqs/;What are GPU-enabled bundles from Amazon WorkSpaces?;GPU-enabled bundles from Amazon WorkSpaces are cloud desktops optimized for workloads that benefit from graphics acceleration. You can choose the Graphics, the GraphicsPro, the Graphics.g4dn, or the GraphicsPro.g4dn bundle, depending on the performance requirements of your graphics workload and your cost requirements. /workspaces/faqs/;In which AWS Regions can I launch GPU-enabled Amazon WorkSpaces bundles?;You can launch Graphics or GraphicsPro bundles in the following AWS Regions: US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). You can also launch GraphicsPro bundles in the AWS GovCloud (US-West) region. /workspaces/faqs/;Can I create a custom image for my GPU-enabled bundles?;Yes. Custom images created from a GPU-enabled Amazon WorkSpaces bundle can only be used with bundles of the same GPU family. For instance, you can use an image made from a Graphics.g4dn bundle to launch a GraphicsPro.g4dn WorkSpace. However, you cannot use an image made from a Graphics bundle to launch a GraphicsPro WorkSpace or a G4dn-based WorkSpace. /workspaces/faqs/;How do I get started with GPU-enabled Amazon WorkSpaces bundles?;You can launch Graphics, GraphicsPro, Graphics.g4dn, or GraphicsPro.g4dn bundles using the Amazon WorkSpaces Management Console or the Amazon WorkSpaces API. When launching a new WorkSpace, simply select the corresponding graphics bundle name. You may need to request a quota increase before launching the GPU-enabled bundles. /workspaces/faqs/;How much bandwidth do GPU-enabled Amazon WorkSpaces consume?;Bandwidth used by GPU-enabled Amazon WorkSpaces bundles depends on the tasks being performed. If there aren’t many changes taking place on the screen, the bandwidth used is generally less than 300 kbps. If there is context switching between multiple windows, or if 3D models are being manipulated, bandwidth use can increase to several megabits per second. /workspaces/faqs/;Does Amazon WorkSpaces offer GPU-enabled desktops using WSP?;No. Amazon WorkSpaces does not currently offer a GPU-enabled WSP bundle. /workspaces/faqs/;What are the storage options available on GPU-enabled WorkSpaces?;All GPU-enabled WorkSpaces bundles come with a minimum of 100 GB of persistent storage for the root and user volumes. You can select the amount of storage that you need for both root and user volumes when you launch new WorkSpaces, and you can increase storage allocations at any time. Data that users store on the “user volume” attached to the WorkSpace is automatically backed up to Amazon S3 on a regular basis. /workspaces/faqs/;Can I bring my Windows Desktop licenses to Amazon WorkSpaces?;Yes, you can bring your own Windows 10 or Windows 11 desktop licenses to WorkSpaces if they meet Microsoft’s licensing requirements. WorkSpaces gives you an option to run Windows 10 desktop images on physically dedicated hardware, which lets you maintain license compliance for your Windows desktops when you bring your own licenses to WorkSpaces. 
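As a rough illustration of launching a GPU-enabled WorkSpace through the API mentioned above, the following is a minimal sketch using the AWS SDK for Python (boto3); the directory ID, user name, bundle ID, and Region are hypothetical placeholders.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Launch a WorkSpace from a GPU-enabled bundle; all IDs are placeholders.
    response = workspaces.create_workspaces(
        Workspaces=[
            {
                "DirectoryId": "d-xxxxxxxxxx",
                "UserName": "alice",
                "BundleId": "wsb-xxxxxxxxx",  # e.g. a Graphics.g4dn bundle ID
                "WorkspaceProperties": {"RunningMode": "ALWAYS_ON"},
            }
        ]
    )
    for failed in response.get("FailedRequests", []):
        print("Failed:", failed["ErrorCode"], failed["ErrorMessage"])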
/workspaces/faqs/;Can I bring my own Windows Desktop licenses for GPU-enabled Amazon WorkSpaces?;Yes, you can. Please contact us if this is something you’d like to do. /workspaces/faqs/;What versions of Windows desktop licenses can I bring to Amazon WorkSpaces?;If your organization meets the licensing requirements set by Microsoft, you can bring your Windows 10 or Windows 11 Enterprise license to Amazon WorkSpaces. You cannot use Windows OEM licenses for your Amazon WorkSpaces. Please consult with Microsoft if you have any questions about your eligibility to bring your own Windows Desktop licenses. /workspaces/faqs/;What benefits are there in bringing my own Windows desktop licenses to Amazon WorkSpaces?;By bringing your own Windows Desktop licenses to Amazon WorkSpaces, you will save $4 per Amazon WorkSpace per month when billed monthly, and you will save money on the hourly usage fee when billed hourly (see the Amazon WorkSpaces pricing page for more information). Additionally, you can now use a single golden image to manage your physical and virtual desktop deployments. /workspaces/faqs/;What are the requirements for bringing my Windows desktop licenses to Amazon WorkSpaces?;You need an active and eligible Microsoft Volume Licensing (VL) agreement with Software Assurance and/or VDA per user license to bring your Windows 10 or Windows 11 Desktop license to Amazon WorkSpaces. Please consult with your Microsoft representative to confirm your eligibility to bring your own Windows Desktop licenses to Amazon WorkSpaces. /workspaces/faqs/;How do I get started with bringing my Windows desktop licenses to Amazon WorkSpaces?;In order to ensure that you have adequate dedicated capacity allocated to your account, please reach out to your AWS account manager or sales representative to enable your account for BYOL. Alternatively, you can create a Technical Support case with Amazon WorkSpaces to get started with BYOL. /workspaces/faqs/;How will I activate my Windows 10 or Windows 11 Desktop operating system on Amazon WorkSpaces?;You can activate your Windows 10 or Windows 11 Desktop operating system using existing Microsoft activation servers that are hosted in your VPC, or ones that can be reached from the VPC in which Amazon WorkSpaces are launched. /workspaces/faqs/;Can I create a new custom image of the Windows 10 or Windows 11 Desktop image uploaded to Amazon WorkSpaces?;Yes. You can use the standard WorkSpaces image management functionality to further customize the Windows 10 or Windows 11 Desktop image and save it as a new Amazon WorkSpaces image in your account. /workspaces/faqs/;How long will it take before I can launch Amazon WorkSpaces using my own Windows desktop licenses and image?;It can take a few hours after you perform the “Create Image” operation for your custom Windows desktop image to be available to use. You can check the status of your custom image in the WorkSpaces Console, API, or CLI. /workspaces/faqs/;Will all of my dedicated Amazon WorkSpaces launch in a single AZ?;No. Amazon WorkSpaces launched on dedicated hardware will be balanced across two AZs. You select the AZs for Amazon WorkSpaces when you create the directory in which your Amazon WorkSpaces will be launched, and subsequent launches of Amazon WorkSpaces are automatically load balanced across the AZs selected when you created the directory. 
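To illustrate checking the status of a custom image via the API, as mentioned above, here is a minimal sketch using the AWS SDK for Python (boto3); the image ID and Region are hypothetical placeholders.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Check whether a custom image created with "Create Image" is ready to use.
    response = workspaces.describe_workspace_images(ImageIds=["wsi-xxxxxxxxx"])
    for image in response["Images"]:
        # State is typically PENDING, AVAILABLE, or ERROR.
        print(image["ImageId"], image["State"])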
/workspaces/faqs/;What happens when I terminate Amazon WorkSpaces that are launched on physically dedicated hardware?;You can terminate Amazon WorkSpaces when you no longer need them. You will only be billed for the Amazon WorkSpaces that are running. /workspaces/faqs/;What happens to Amazon WorkSpaces that are rebuilt, restored, or restarted on physically dedicated hardware?;Amazon WorkSpaces that are rebuilt, restored, or restarted can be placed on any available physical server allocated to your account. A restart, restore, or rebuild of an Amazon WorkSpace can result in that instance being placed on a different physical server that has been allocated to your account. /workspaces/faqs/;How do I subscribe to Microsoft Office for BYOL WorkSpaces?;When importing a BYOL image, you can select if you want to include Microsoft Office in the image. If selected, Office is automatically installed in the image during image creation. WorkSpaces created from this image are automatically subscribed to Microsoft Office Professional from AWS. /workspaces/faqs/;How is Office bundle charged on BYOL WorkSpaces?;WorkSpaces launched from a BYOL image with Office bundle enabled incur the listed fee for the Office bundle every month irrespective of whether you use that WorkSpace in that month. You will not be billed for BYOL images with the Office bundle enabled, only the WorkSpaces created from an image with the Office bundle enabled. For more information on pricing for the Office bundle, visit Amazon WorkSpaces pricing page. /workspaces/faqs/;What software is available as part of the Office bundle for BYOL WorkSpaces?;For Windows 10 BYOL WorkSpaces, you have the option to select Microsoft Office Professional 2016 or 2019. Windows 11 BYOL WorkSpaces supports only Microsoft Office Professional 2019. /workspaces/faqs/;How do I subscribe to the Office bundle on my existing BYOL WorkSpaces?;After you have created a BYOL image with the Office bundle installed, you can use the Amazon WorkSpaces migrate feature to migrate your existing BYOL WorkSpaces to ones with the Office bundle. All data in the latest snapshot of the original user volume will be retained after migration and the C drive will be newly created from the new image. You can migrate a WorkSpace created from a bundle that does not have Office provided by AWS to another WorkSpace created from a bundle that has AWS provided Microsoft Office and vice versa. Data on both root and user volumes are preserved upon migration. /workspaces/faqs/;How do I get updates for the Office bundle applications?;Office updates are included as part of your regular Windows Updates. Our image creation process will pick up the latest updates during the creation process. We recommend that you periodically update your Windows base images to stay current on all security patches and updates. /workspaces/faqs/;What is Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces are enterprise ready cloud desktops that organizations can provide to developers, engineers, students or office workers to get their work done. /workspaces/faqs/;What can I do with Amazon Linux WorkSpaces?;Developers can develop software with their favorite applications like AWS CLI, AWS SDK tools, Visual Studio Code, Eclipse and Atom. Analysts can run simulations using MATLAB and Simulink. 
Office workers can use pre-installed applications like LibreOffice for editing documents, spreadsheets, and presentations, Evolution for email, Firefox for web browsing, GIMP for image editing, Pidgin for instant messaging, and many others. You can always install more applications from the Amazon Linux repositories or other RPM-based Linux repositories. /workspaces/faqs/;Which applications and tools come with Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces include a selection of desktop utilities and tools, development tools, and general productivity applications. Developers can quickly get started using packages like OpenJDK 8, Python, C/C++, AWS CLI, and AWS SDK. General office workers can use LibreOffice for document editing, spreadsheets, and presentations, Firefox for web browsing, GIMP for photo editing, Pidgin for IM, Evolution for email, Atril for PDF documents, and more for day-to-day productivity tasks. You can always install more applications from the Amazon Linux repositories or from other RPM-based Linux repositories. /workspaces/faqs/;How do I get started with Amazon Linux WorkSpaces?;To get started, simply create or select users from your configured directory, select Amazon Linux WorkSpaces bundles, and launch. Your users will receive instructions via email for connecting to their WorkSpaces. Please see here for the list of available hardware bundles. /workspaces/faqs/;How much does it cost to use Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces are available with both the hourly and monthly billing options. Detailed pricing is available here. /workspaces/faqs/;Which package manager does Amazon Linux support?;Amazon Linux is RPM-based and uses the yum package manager. /workspaces/faqs/;Which repositories are available with Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces are connected to the Amazon Linux core and extras repositories. You can always add other RPM-based Linux repositories. /workspaces/faqs/;How can I request new packages for the Amazon Linux repositories?;You can request new packages for the Amazon Linux repositories using the AWS developer forums here. Packages will be added at the sole discretion of Amazon Web Services. /workspaces/faqs/;How will I receive package updates for the Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces are regularly patched and updated from the Amazon Linux repositories. /workspaces/faqs/;What directory types are supported for Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces currently support Active Directory, either an on-premises directory available via AD Connector or Microsoft Active Directory on AWS. /workspaces/faqs/;What hardware bundles are available for Amazon Linux WorkSpaces?;Amazon Linux WorkSpaces are available with different hardware bundles in all Regions where the Amazon WorkSpaces service operates. For a complete list, please see here. /workspaces/faqs/;Can I customize my Amazon Linux WorkSpaces?;Yes. You can customize settings and install additional software on Amazon Linux WorkSpaces. You can also create custom images using the Amazon WorkSpaces console or API and use those images to launch WorkSpaces with your customizations for other users in your organization. /workspaces/faqs/;Is sudo access enabled by default on Amazon Linux WorkSpaces?;By default, Amazon Linux WorkSpaces users get sudo access while the root user is disabled. You can always modify permissions by editing the /etc/sudoers file. /workspaces/faqs/;Is there an Amazon Linux WorkSpaces bundle using WSP?;Yes. 
Amazon WorkSpaces offers Linux with WSP in the AWS GovCloud (US-West) Region with support for smart cards, keyboard and mouse input, and audio output. /workspaces/faqs/;Is Amazon WorkSpaces HIPAA eligible?;Yes. If you have an executed Business Associate Agreement (BAA) with AWS, you can use Amazon WorkSpaces with the AWS accounts associated with your BAA. If you don’t have an executed BAA with AWS, contact us and we will put you in touch with a representative from our AWS sales team. For more information, see, HIPAA Compliance. /workspaces/faqs/;Is Amazon WorkSpaces PCI compliant?;Yes. Amazon WorkSpaces is PCI compliant and conforms to the Payment Card Industry Data Security Standard (PCI DSS). PCI DSS is a proprietary information security standard administered by the PCI Security Standards Council, which was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc. PCI DSS applies to all entities that store, process or transmit cardholder data (CHD) and/or sensitive authentication data (SAD) including merchants, processors, acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. For more information, see PCI DSS Compliance. /workspaces/faqs/;Which credentials should be used to sign in to Amazon WorkSpaces?;Users sign into their WorkSpace using their own unique credentials, which they can create after a WorkSpace has been provisioned for them. If you have integrated the Amazon WorkSpaces service with an existing Active Directory domain, users will sign in with their regular Active Directory credentials. Amazon WorkSpaces also integrates with your existing RADIUS server to enable multi-factor authentication (MFA). In addition, WorkSpaces integrates with your SAML 2.0 identity provider (IdP) so that you can extend security features available from your IdP to WorkSpaces, including multi-factor (MFA) and contextual access. /workspaces/faqs/;Can I control the client devices that access my Amazon WorkSpaces?;Yes. You can restrict access to Amazon WorkSpaces based on the client OS type, and using digital certificates. You can choose to block or allow macOS, Microsoft Windows, Linux, iPadOS, Android, Chrome OS, zero client, and the WorkSpaces web access client. /workspaces/faqs/;What is a digital certificate?;A digital certificate is a digital form of identity that is valid for a specified period of time, which is used as a credential that provides information about the identity of an entity, as well as other supporting information. A digital certificate is issued by a certificate authority (CA), and the CA guarantees the validity of the information in the certificate. /workspaces/faqs/;What devices use digital certificates to control access to Amazon WorkSpaces?;Digital certificates can be used to block or allow WorkSpaces access from macOS and Microsoft Windows client devices. /workspaces/faqs/;How do I use digital certificates to control access to Amazon WorkSpaces?;To use digital certificates to block or allow access to Amazon WorkSpaces, you upload your root certificates to the WorkSpaces management console and distribute your client certificates to the macOS, Windows, Android, and Android-compatible Chrome OS devices you want to trust. To distribute your client certificates, use your preferred solution such as Microsoft System Center Configuration Manager (SCCM), or Mobile-Device Management (MDM) software. 
For more information, see Restrict WorkSpaces Access to Trusted Devices. /workspaces/faqs/;How many root certificates can be imported to an Amazon WorkSpaces directory?;For each Amazon WorkSpaces directory, you can import up to two root certificates each for macOS and Microsoft Windows devices. If two root certificates are imported, WorkSpaces will present both root certificates to the client device, and the client device will use the first certificate that chains up to either root certificate. /workspaces/faqs/;Can I control client device access to Amazon WorkSpaces without using digital certificates?;Yes. You can control access to Amazon WorkSpaces using the device type only. /workspaces/faqs/;Can I use digital certificates to control Amazon WorkSpaces access from iPadOS or zero clients?;At this time, Amazon WorkSpaces can use digital certificates only with macOS, Microsoft Windows, Android, and Android-compatible Chrome OS devices. /workspaces/faqs/;What is Multi-Factor Authentication (MFA)?;Multi-Factor Authentication adds an additional layer of security during the authentication process. Users must validate their identity by providing something they know (e.g., a password), as well as something they have (e.g., a hardware- or software-generated one-time password (OTP)). /workspaces/faqs/;What delivery methods are supported for MFA?;Amazon supports one-time passwords that are delivered via hardware and software tokens. Out-of-band tokens, such as SMS tokens, are not currently supported. /workspaces/faqs/;Is there support for Google Authenticator and other virtual MFA solutions?;Google Authenticator can be used in conjunction with RADIUS. If you are running a Linux-based RADIUS server, you can configure your RADIUS fleet to use Google Authenticator through a PAM (Pluggable Authentication Module) library. /workspaces/faqs/;Which Amazon WorkSpaces client applications support Multi-Factor Authentication (MFA)?;MFA is available for Amazon WorkSpaces client applications on the following platforms: Windows, Mac, Linux, Chromebooks, iOS, Fire, Android, and PCoIP Zero Clients. MFA is also supported when using web access to access Amazon WorkSpaces. /workspaces/faqs/;What happens if a user forgets the password to access their Amazon WorkSpace?;If either AD Connector or AWS Microsoft AD is used to integrate with an existing Active Directory domain, the user would follow your existing lost password process for your domain, such as contacting an internal helpdesk. If the user is using credentials stored in a directory managed by the WorkSpaces service, they can reset their password by clicking on the “Forgot Password” link in the Amazon WorkSpaces client application. /workspaces/faqs/;How will Amazon WorkSpaces be protected from malware and viruses?;You can install your choice of anti-virus software on your users’ WorkSpaces. The Plus bundle options offer users access to anti-virus software, and you can find more details on this here. If you choose to install your own anti-virus software, please ensure that it does not block UDP port 4172 for PCoIP and UDP port 4195 for WSP, as this will prevent users from connecting to their WorkSpaces. /workspaces/faqs/;How do I remove a user’s access to their Amazon WorkSpace?;To remove a user’s access to their WorkSpace, you can disable their account either in the directory managed by the WorkSpaces service, or in an existing Active Directory that you have integrated the WorkSpaces service with. 
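Device-type access control like that described above can also be set programmatically. Below is a minimal sketch using the AWS SDK for Python (boto3), assuming you want to allow Windows and macOS clients and block web access; the directory ID and Region are hypothetical placeholders.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Allow Windows and macOS clients and deny web access for a directory.
    # The directory ID is a placeholder.
    workspaces.modify_workspace_access_properties(
        ResourceId="d-xxxxxxxxxx",
        WorkspaceAccessProperties={
            "DeviceTypeWindows": "ALLOW",
            "DeviceTypeOsx": "ALLOW",
            "DeviceTypeWeb": "DENY",
        },
    )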
/workspaces/faqs/;Does WorkSpaces work with AWS Identity and Access Management (IAM)?;Yes. Please see our documentation. /workspaces/faqs/;Can I select the Organizational Unit (OU) where computer accounts for my WorkSpaces will be created in my Active Directory?;Yes. You can set a default Organizational Unit (OU) in which computer accounts for your WorkSpaces are created in your Active Directory. This OU can be part of the domain to which your users belong, or part of a domain that has a trust relationship with the domain to which your users belong, or part of a child domain in your directory. Please see our documentation for more details. /workspaces/faqs/;Can I use Amazon VPC Security groups to limit access to resources (applications, databases) in my network or on the Internet from my WorkSpaces?;Yes. You can use Amazon VPC Security groups to limit access to resources in your network or the Internet from your WorkSpaces. You can select a default Amazon VPC Security Group for the WorkSpaces network interfaces in your VPC as part of the directory details on the WorkSpaces console. Please see our documentation for more details. /workspaces/faqs/;What is an IP Access Control Group?;An IP Access Control Group is a feature that lets you specify trusted IP addresses that are permitted to access your WorkSpaces. An IP Access Control group is made up of a set of rules; each rule specifies a permitted IP address or range of addresses. You can create up to 25 IP Access Control groups with up to 10 rules per group specifying the IP addresses or IP ranges accessible to your Amazon WorkSpaces. /workspaces/faqs/;Can I implement IP address-based access controls for WorkSpaces?;Yes. With this feature you can create up to 25 IP Access Control groups with up to 10 rules per group specifying the IP addresses or IP ranges accessible to your Amazon WorkSpaces. /workspaces/faqs/;How can I implement IP address-based access controls?;See IP Access Control Groups for details. /workspaces/faqs/;Can IP address-based access controls be used with all WorkSpaces clients?;Yes. This feature can be used with the macOS, iPad, Windows desktop, and Android tablet clients, as well as web access. This feature also supports zero clients using MFA. /workspaces/faqs/;Which Zero Client configurations are compatible with the IP Based Access Controls feature?;Zero Clients using MFA can be used with IP Based Access Controls, along with any compatible Zero Clients which do not use PCoIP Connection Manager to connect to WorkSpaces. Any connections through PCoIP Connection Manager will not be able to access WorkSpaces if IP Based Access Controls are enabled. /workspaces/faqs/;Are there any scenarios where a non-whitelisted IP address could access a WorkSpace?;Yes. If web access is enabled, and the IP address changes from a whitelisted IP to a non-whitelisted IP address after the user’s credentials are validated and before the WorkSpace session begins to launch, the non-whitelisted IP address would be allowed. The initial connection would require a whitelisted IP address. /workspaces/faqs/;How are IP addresses whitelisted if users are accessing the WorkSpaces through Network Address Translation (NAT)?;You will need to allow your public IPs with this feature, so if you have a NAT, you will need to allow access from the IPs coming from it. In this case you will be allowing access any time a user accesses WorkSpaces through a NAT. 
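The IP access control groups described above can also be created through the API. Here is a minimal sketch using the AWS SDK for Python (boto3); the CIDR range, directory ID, and Region are hypothetical placeholders.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Create an IP access control group with one rule and attach it to a directory.
    # The CIDR range and directory ID are placeholders.
    group = workspaces.create_ip_group(
        GroupName="corporate-office",
        GroupDesc="Trusted office egress IPs",
        UserRules=[{"ipRule": "203.0.113.0/24", "ruleDesc": "Office NAT range"}],
    )
    workspaces.associate_ip_groups(
        DirectoryId="d-xxxxxxxxxx",
        GroupIds=[group["GroupId"]],
    )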
/workspaces/faqs/;How should IP addresses be whitelisted for VPNs?;If you want to allow access from VPNs, you will need to add the public IPs of the VPN. In this case, you will be allowing access any time a user accesses WorkSpaces through the VPN with public IPs whitelisted. /workspaces/faqs/;Can I customize the login workflow for my end users' login experience?;WorkSpaces supports the use of the URI (uniform resource identifier) WorkSpaces:// to open the WorkSpaces client and optionally enter the registration code, user name, and/or multi-factor authentication (MFA) code (if MFA is used by your organization). /workspaces/faqs/;How do I enable URI?;You can create your unique URI links by following the WorkSpaces URI formatting documented in Customize How Users Log in to their WorkSpaces in the Amazon WorkSpaces Administration Guide. By providing these links to users, you enable them to use the URI on any device that has the WorkSpaces client installed. URI links can contain human-readable sensitive information if you choose to include the registration code, user name, and/or MFA information, so take precautions with how and with whom you share URI information. /workspaces/faqs/;Does Amazon WorkSpaces support encryption?;Yes. Amazon WorkSpaces supports root volume and user volume encryption. Amazon WorkSpaces uses EBS volumes that can be encrypted on creation of a WorkSpace, providing encryption for data stored at rest, disk I/O to the volume, and snapshots created from the volume. Amazon WorkSpaces integrates with the AWS KMS service to allow you to specify the keys you want to use to encrypt the volumes. /workspaces/faqs/;Which Amazon WorkSpaces bundle types support encryption?;Encryption is supported on all Amazon WorkSpaces hardware and software bundle types. This includes the Windows 10 desktop experiences, and the Value, Standard, Performance, Power, PowerPro, Graphics, GraphicsPro, Graphics.g4dn, and GraphicsPro.g4dn bundles. It also includes all Plus application bundles. Additionally, any custom bundles also support encryption. /workspaces/faqs/;How can I encrypt a new Amazon WorkSpace?;When creating a new Amazon WorkSpace from the console or the Amazon WorkSpaces APIs, you will have the option to specify which volume(s) you want encrypted along with a key ARN from your KMS keys for encryption. Note that during the launch of a WorkSpace, you can specify whether you want encryption for the user volume, root volume, or both volumes, and the key provided will be used to encrypt the volumes specified. /workspaces/faqs/;Can Amazon WorkSpaces create a KMS key on my behalf?;Amazon WorkSpaces creates a default master key upon your first attempt to launch a WorkSpace through the AWS Management Console. You cannot manage the lifecycle of default master keys. To control the full lifecycle of a key, configure WorkSpaces to use a KMS custom customer master key (CMK). To create a KMS custom CMK, visit the KMS console or use KMS APIs to create your own keys. Note that you can use a default key generated by KMS for your WorkSpaces which will be made available to you on your first attempt to launch Amazon WorkSpaces with encryption through the AWS Management Console. /workspaces/faqs/;What are the prerequisites for using KMS keys to encrypt Amazon WorkSpaces?;In order to use KMS keys to encrypt Amazon WorkSpaces, the key must not be disabled, and should not have exceeded its limits (learn more about limits here). 
You also need to have the correct permissions and policies associated with the key to use it for encryption. To learn more about the correct permissions and policies needed on the keys, please refer to our documentation. /workspaces/faqs/;How will I be notified if my KMS key does not meet the pre-requisites outlined above?;When you launch a new WorkSpace with the key specified, the WorkSpaces service will verify if the key is valid and eligible to be used for encryption. If the key is not valid, the launch process will fail quickly and notify you of the error associated with the key. Please note that if you change the key settings while the WorkSpace is being created, there is a chance that provisioning will fail and you will be notified of this failure through the AWS Management Console or through the DescribeWorkSpaces API call. /workspaces/faqs/;How will I be able to tell which Amazon WorkSpaces are encrypted and which ones are not?;You will be able to see if a WorkSpace is encrypted or not from the AWS Management Console or using the Amazon WorkSpaces API. In addition to that, you will also be able to tell which volume(s) on the WorkSpace were encrypted, and the key ARN that was used to encrypt the WorkSpace. For example, the DescribeWorkSpaces API call will return information about which volumes (user and/or root) are encrypted and the key ARN that was used to encrypt the WorkSpace. /workspaces/faqs/;Can I enable encryption of volumes on a running Amazon WorkSpace?;Encryption of WorkSpaces is only supported during the creation and launch of a WorkSpace. /workspaces/faqs/;What happens to a running Amazon WorkSpace when I disable the key in the KMS console?;A running WorkSpace will not be impacted if you disable the KMS key that was used to encrypt the user volume of the WorkSpace. Users will be able to login and use the WorkSpace without interruption. However, restarts, rebuilds, and restores of WorkSpaces that were encrypted using a KMS key that has been disabled (or the permissions/policies on the key have been modified) will fail. If the key is re-enabled and/or the correct permissions/policies are restored, restarts, rebuilds, and restores of the WorkSpace will work again. /workspaces/faqs/;Is it possible to disable encryption for a running Amazon WorkSpace?;Amazon WorkSpaces does not support disabling encryption for a running WorkSpace. Once a WorkSpace is launched with encryption enabled, it will always remain encrypted. /workspaces/faqs/;Will snapshots of an encrypted user volume also be encrypted?;Yes. All snapshots of the user volume will be encrypted using the same key that was used to encrypt the user volume of the WorkSpace when it was created. The user volume once encrypted stays encrypted throughout its lifecycle. /workspaces/faqs/;Can I rebuild an Amazon WorkSpace that has been encrypted?;Yes. Rebuilds of a WorkSpace will work as long as the key that was used to encrypt the WorkSpace is still valid. The WorkSpace volume(s) stay encrypted using the original key after it has been rebuilt. /workspaces/faqs/;Can I restore an Amazon WorkSpace that has been encrypted?;Yes. A WorkSpace restore will work as long as the key that was used to encrypt the WorkSpace is still valid. The WorkSpace volume(s) stay encrypted using the original key after it has been restored. /workspaces/faqs/;Can I create a custom image from a WorkSpace that has been encrypted?;Creating a custom image from a WorkSpace that is encrypted is not supported. 
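For reference, encrypted WorkSpaces like those described above can be launched programmatically with the CreateWorkspaces API. A minimal sketch using the AWS SDK for Python (boto3) follows; the directory ID, user name, bundle ID, KMS key ARN, and Region are hypothetical placeholders.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Launch a WorkSpace with both root and user volumes encrypted using a KMS key.
    # All identifiers below are placeholders.
    workspaces.create_workspaces(
        Workspaces=[
            {
                "DirectoryId": "d-xxxxxxxxxx",
                "UserName": "alice",
                "BundleId": "wsb-xxxxxxxxx",
                "VolumeEncryptionKey": "arn:aws:kms:us-east-1:111122223333:key/xxxx",
                "RootVolumeEncryptionEnabled": True,
                "UserVolumeEncryptionEnabled": True,
            }
        ]
    )

    # Encryption status for a WorkSpace is later visible via DescribeWorkspaces.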
/workspaces/faqs/;Will the performance of my WorkSpace be impacted because the volume(s) are encrypted?;You can expect a minimal impact on latency and IOPS for encrypted volumes. /workspaces/faqs/;Will encryption impact the launch time of an Amazon WorkSpace?;The launch time of a WorkSpace that only requires user volume encryption is similar to that of an unencrypted WorkSpace. Launching a WorkSpace that requires root volume encryption will take several more minutes. /workspaces/faqs/;Will encryption be supported for BYOL WorkSpaces?;Yes. Amazon WorkSpaces will support encryption for BYOL WorkSpaces. /workspaces/faqs/;Will I be able to use the same KMS key to encrypt Amazon WorkSpaces in a different region?;No. Encrypted resources in one region cannot be used in a different region, because a KMS key belongs to the region in which it was created. /workspaces/faqs/;Is there a charge for encrypting volumes on Amazon WorkSpaces?;There is no additional charge for encrypting volumes on WorkSpaces; however, you will have to pay standard AWS KMS charges for KMS API requests and any custom CMKs that are used to encrypt WorkSpaces. Please see AWS KMS pricing here. Please note that the Amazon WorkSpaces service makes a maximum of five API calls to the KMS service upon launching, restarting, or rebuilding a single WorkSpace. /workspaces/faqs/;Can I rotate my KMS keys?;Yes. You can use KMS to rotate your custom CMKs. You can configure a custom CMK that you create to be automatically rotated by KMS on an annual basis. There is no impact to WorkSpaces encrypted before the CMK rotation; they will work as expected. /workspaces/faqs/;Where can I download the Amazon WorkSpaces client application?;You can download the Amazon WorkSpaces client application for free on the client download website. /workspaces/faqs/;Can I use any other client (e.g., an RDP client) with Amazon WorkSpaces?;No. You can use any of the free clients provided by AWS, which includes client applications for Windows, macOS, iPadOS, Android tablets, and Android-compatible Chrome OS devices, or Chrome or Firefox web browsers, to access your Amazon WorkSpaces. /workspaces/faqs/;Which operating systems are supported by the Amazon WorkSpaces client applications?;Please refer to the WorkSpaces Clients documentation. /workspaces/faqs/;Which mobile devices are supported by the Amazon WorkSpaces client application?;Amazon WorkSpaces clients are available for the following devices: /workspaces/faqs/;What is a PCoIP Zero Client?;A PC-over-IP (PCoIP) Zero Client is a single-purpose hardware device that can enable access to Amazon WorkSpaces. Zero Clients include hardware optimization specifically for the PCoIP protocol, and are designed to require very little administration. /workspaces/faqs/;Can I use PCoIP Zero Clients with Amazon WorkSpaces?;Yes, Amazon WorkSpaces is compatible with PCoIP Zero Client devices that have the Teradici Tera2 chipset. PCoIP Zero Clients will only work with PCoIP WorkSpaces; they will not work with WSP WorkSpaces. For a complete list of Zero Clients that are compatible with Amazon WorkSpaces, please reference Teradici's website. /workspaces/faqs/;Will my Amazon WorkSpace running in AutoStop mode preserve the state of applications and data when it stops?;Amazon WorkSpaces preserves the data and state of your applications when stopped. On reconnect, your Amazon WorkSpace will resume with all open documents and running programs intact. 
AutoStop Graphics.g4dn, GraphicsPro.g4dn, Graphics, and GraphicsPro WorkSpaces do not preserve the state of data and programs when they stop. For these AutoStop WorkSpaces, we recommend saving your work each time you finish using them. /workspaces/faqs/;How do I resume my Amazon WorkSpace after it stops?;When you log in to your Amazon WorkSpace from the Amazon WorkSpaces client application, the service will automatically restart your Amazon WorkSpace. When you first attempt to log in, the client application will notify you that your Amazon WorkSpace was previously stopped, and that your new session will start once your WorkSpace has resumed. /workspaces/faqs/;How long does it take for my Amazon WorkSpace to be available once I attempt to log in?;If your Amazon WorkSpace has not yet stopped, your connection is almost instantaneous. If your Amazon WorkSpace has already stopped, in most cases it will be available within two minutes. For BYOL AutoStop WorkSpaces, a large number of concurrent logins could result in significantly increased time for a WorkSpace to be available. If you expect many users to log into your BYOL AutoStop WorkSpaces at the same time, please consult your account manager for advice. /workspaces/faqs/;What kind of headsets can be used for audio conversations?;Most analog and USB headsets will work for audio conversations through WorkSpaces running Windows. For USB headsets, you should ensure they show up as a playback device locally on your client computer. /workspaces/faqs/;Can I use the built-in microphone and speakers for making audio calls?;Yes. For the best experience, we recommend using a headset for audio calls. However, you may experience an echo when using the built-in microphone and speakers with certain communication applications. /workspaces/faqs/;Does Audio-in work with mobile clients such as Android, iPadOS, and Android-compatible Chrome OS devices?;Audio-in is supported on the Windows, macOS, Android, and iPadOS clients. /workspaces/faqs/;How do I enable Audio-in for my WorkSpaces?;Audio-in is enabled for all new WorkSpaces. For WorkSpaces with Windows, enabling the WorkSpaces Audio-in capability requires local logon access inside your WorkSpace. If you have a Group Policy restricting user local logon in your WorkSpace, we will detect it and not apply the Audio-in update to the WorkSpace. You can remove the Group Policy, and the Audio-in capability will be enabled after the next reboot. /workspaces/faqs/;How do I optimize the audio quality for Amazon Connect?;Audio optimization with Amazon Connect is available at the WorkSpaces directory level. The feature enables customers to offload the CCP (Contact Control Panel) audio traffic from WorkSpaces streaming to local endpoint processing, which addresses audio quality issues related to suboptimal network conditions. /workspaces/faqs/;Does WorkSpaces support devices with high DPI screens?;Yes. The Amazon WorkSpaces desktop client application will automatically scale the in-session display to match the DPI settings of the local device. /workspaces/faqs/;Will my bandwidth usage be higher when I use four monitors, or when I use 4K Ultra HD resolution?;Yes. The bandwidth requirements for WorkSpaces depend on two factors: (a) the number of screens it has to stream to, and (b) the amount of pixel change taking place on each screen. /workspaces/faqs/;Will Amazon WorkSpaces remember my monitor settings between sessions?;The fullscreen mode setting will be preserved. 
If you quit a WorkSpaces session in fullscreen mode, your next session will start in fullscreen mode. However, display configurations will not be saved. Every time you initiate a WorkSpaces session, the client application extracts the EDID of your local display setup and sends it to the WorkSpaces host to deliver an optimal display experience. /workspaces/faqs/;What happens to my display settings when I connect to my WorkSpace from a different desktop?;When you connect from a different desktop computer, the display settings of that computer will take precedence to deliver an optimal display experience. /workspaces/faqs/;Will the iPad and Android applications support keyboard/mouse input?;The Android client supports both keyboard and mouse input. The iPad client supports keyboard and Bluetooth mouse input. While we expect most popular keyboard and mouse devices to work correctly, some devices may not be compatible. If you are interested in support for a particular device, please let us know via the Amazon WorkSpaces forum. /workspaces/faqs/;Can I access my Amazon WorkSpaces through a web browser?;Yes, you can use Amazon WorkSpaces web access to log in to your Amazon WorkSpace with Windows through the Chrome or Firefox web browsers with PCoIP WorkSpaces, and through any Chromium-based web browser with WSP WorkSpaces. You do not need to install any software, and you can connect from any network that can access the public Internet. To get started, your WorkSpaces admin needs to enable web access from the AWS Console in the WorkSpaces Directory Details – Access Control Options section. Once these steps are complete, to access your WorkSpace through a browser, simply visit the Amazon WorkSpaces web access page using a supported browser, enter your WorkSpaces registration code, and then log in to the WorkSpace with your username and password. /workspaces/faqs/;What is Amazon WorkSpaces web access?;Amazon WorkSpaces web access allows you to access your Amazon WorkSpace with Windows from the Chrome or Firefox web browsers with PCoIP WorkSpaces, and through any Chromium-based web browser with WSP WorkSpaces, running on a computer connected to any network that can access the public Internet. Web access does not prevent users from using the native Amazon WorkSpaces client applications to connect to their WorkSpaces. Users can choose between web access and the native client applications. Web access is available here. /workspaces/faqs/;From which web browsers and operating systems can I access Amazon WorkSpaces?;With PCoIP WorkSpaces, web access works with the latest Google Chrome and Firefox versions. With WSP WorkSpaces, web access works with any Chromium-based web browser, including Google Chrome and Microsoft Edge. Web access is supported from Windows, macOS, or Linux computers. Mobile devices are not currently supported. /workspaces/faqs/;Can I enable web access for non-English Amazon WorkSpaces?;Yes. Web access support is currently available on WorkSpaces with English (US), Japanese, Korean, and French (Canadian) versions of Windows desktops. /workspaces/faqs/;Do I need to install any additional software in order to access my Amazon WorkSpaces through a web browser?;No, you do not need to install any programs, add-ins, or plugins in order to access your Amazon WorkSpaces through a supported web browser. /workspaces/faqs/;How do I get started using web access to log in to my Amazon WorkSpaces?;First, your Amazon WorkSpace needs to be enabled for web access. 
This can be done through the AWS Management Console by your IT administrator. Once this is complete, you can log in using web access, available here. The first time you log in, you will be asked to enter the registration code that was provided in your welcome email. /workspaces/faqs/;How will I know if my Amazon WorkSpace has been enabled for web access?;If your Amazon WorkSpace has been set to block web access, you will receive an error message when you attempt to log in, informing you to contact your system administrator to enable web access. /workspaces/faqs/;Can I use Web Access to access my Amazon WorkSpaces on any network?;Yes. You can use web access on any network that can access the public Internet. If you can browse the web, then you can connect to your Amazon WorkSpace. /workspaces/faqs/;Which Amazon WorkSpaces bundles support web access?;You can use web access to connect to the Value, Standard, Performance, Power, and PowerPro Amazon WorkSpaces with Windows 10 or Windows Server 2016 operating systems. WorkSpaces powered by Windows Server 2019, and Windows 11 only support Web Access with WSP. GPU-enabled WorkSpaces and Amazon Linux WorkSpaces currently do not support web access. Ubuntu WorkSpaces support web access. /workspaces/faqs/;What local devices can I use when connecting to my Amazon WorkSpace through web access?;You will be able to use your mouse and keyboard as input devices. Local peripheral devices—including printers, USB drives, webcams, and microphones—will not be available. Though clipboard redirection will not work across your local operating system and your Amazon WorkSpace, copy and paste operations within your WorkSpace will work. /workspaces/faqs/;In which regions is web access available?;Amazon WorkSpaces web access is available in all regions where Amazon WorkSpaces is available, excluding Asia Pacific (Mumbai) and GovCloud (US-West) Regions. /workspaces/faqs/;Do I need to enter a registration code to use web access?;The first time you log in using web access, you will be asked to enter the registration code that was provided in your welcome email. At the moment, web access does not offer the ability to store multiple different registration codes. /workspaces/faqs/;When using a web browser to access my Amazon WorkSpace, how can I control my session?;You can use the connection bar along the top of your browser window to control your session. The connection bar allows you to disconnect, enter and exit full screen mode, and send a “Ctrl-Alt-Del” key sequence to the Amazon WorkSpace. It can be pinned in place, or set to hide automatically. /workspaces/faqs/;How do I disconnect from my Amazon WorkSpace when accessing it through a web browser?;You can disconnect using the “Disconnect” command in the connection bar, by closing the browser tab, or by quitting the browser program. Web access does not support reconnecting to your Amazon WorkSpace - you must log in again to reconnect. /workspaces/faqs/;Will Amazon WorkSpaces support additional client devices and virtual desktop operating systems?;We continually review our roadmap to see what features we can add to address our customers' requirements. If there is a client device or virtual desktop operating system that you'd like Amazon WorkSpaces to support, please email us with details of your request. /workspaces/faqs/;What is the end user experience when Multi-Factor Authentication (MFA) is enabled?;Users will be prompted for their Active Directory username and password, followed by their OTP. 
Once a user passes both Active Directory and RADIUS validation, they will be logged in to their Amazon WorkSpace. To learn more, visit our documentation. /workspaces/faqs/;How can I determine the best region to run my Amazon WorkSpaces?;The Amazon WorkSpaces Connection Health Check website compares your connection speed to each Amazon WorkSpaces region and recommends the fastest one. /workspaces/faqs/;Which languages are supported by Amazon WorkSpaces?;Amazon WorkSpaces bundles that provide the Windows 10 desktop experience currently support English (US), French (Canadian), Korean, and Japanese. You can also download and install language packs for Windows directly from Microsoft. For more information, visit this page. Amazon WorkSpaces client applications currently support English (US), German, Chinese (Simplified), Japanese, French (Canadian), Korean, and Portuguese. /workspaces/faqs/;Can I access my WorkSpaces using SmartCard instead of username/password?;Yes. WSP WorkSpaces can be accessed with a SmartCard instead of a username/password. You can access WorkSpaces using SmartCards if you use an AD Connector directory and enable SmartCard authentication on the directory through the API. Note: PCoIP WorkSpaces do not support SmartCard features. /workspaces/faqs/;What types of SmartCards are officially supported?;WorkSpaces officially supports CAC and PIV SmartCards. /workspaces/faqs/;Is SmartCard support available in all regions?;In-session SmartCard support for use inside WorkSpaces is available in all regions in which WSP is supported. Pre-session SmartCard authentication to WorkSpaces is only available for WSP WorkSpaces in the AWS GovCloud (US-West) Region. /workspaces/faqs/;Does the Amazon WorkSpaces service have maintenance windows?;Yes. Amazon WorkSpaces enables maintenance windows for both AlwaysOn and AutoStop WorkSpaces by default. /workspaces/faqs/;Can I opt out of maintenance windows for my WorkSpaces?;It is highly recommended to keep your WorkSpaces maintained regularly. If you want to run your own WorkSpaces maintenance schedule, it is possible to opt out of the service default maintenance windows for Windows WorkSpaces. /workspaces/faqs/;Will my Amazon WorkSpaces require software updates?;Your Amazon WorkSpaces provide users with Amazon Linux cloud desktops or a Windows 10 desktop experience powered by Windows Server 2016/2019. The underlying OS and any applications installed in the WorkSpace may need updates. /workspaces/faqs/;How will my Amazon WorkSpaces be patched with software updates?;By default, your Amazon WorkSpaces are configured to install software updates. Amazon Linux and Ubuntu WorkSpaces will be updated to install the latest security and software patches, and Amazon WorkSpaces with Windows have Windows Update turned on. You can customize these settings, or use an alternative patch management approach. Updates are installed at 2 a.m. each Sunday. /workspaces/faqs/;What action is needed to receive updates for the Amazon WorkSpaces service?;No action is needed on your part. Updates are delivered automatically to your Amazon WorkSpaces during the maintenance window. During the maintenance window, your WorkSpaces may not be available. /workspaces/faqs/;Can I turn off the software updates for the Amazon WorkSpaces service?;No. The Amazon WorkSpaces service requires these updates to be provided to ensure normal operation of your users' WorkSpaces. /workspaces/faqs/;I don't want to have Windows Update automatically update my Amazon WorkSpaces. 
How can I control updates and ensure they are tested in advance?;You have full control over the Windows Update configuration in your WorkSpaces, and can use Active Directory Group Policy to configure this to meet your exact requirements. If you would like advance notice of patches so you can plan appropriately, we recommend you refer to Microsoft Security Bulletin Advance Notification for more information. /workspaces/faqs/;How are updates for applications installed in my WorkSpaces provided?;Amazon WorkSpaces running Amazon Linux and Ubuntu are updated via pre-configured Amazon Linux yum or Ubuntu (APT or Snap) repositories hosted in each WorkSpaces region, and the updates are installed automatically. Patches and updates requiring a reboot are installed during our weekly maintenance window. /workspaces/faqs/;How do I manage my WorkSpaces?;The WorkSpaces Management console lets you provision, restart, rebuild, restore, and delete WorkSpaces. To manage the underlying OS for the WorkSpaces, you can use standard Microsoft Active Directory tools such as Group Policy, or your choice of Linux orchestration tools. If you have integrated WorkSpaces with an existing Active Directory domain, you can manage your WorkSpaces using the same tools and techniques you are using for your existing on-premises desktops. If you have not integrated with an existing Active Directory, you can set up a Directory Administration WorkSpace to perform management tasks. Please see the documentation for more information. /workspaces/faqs/;Can I use tags to categorize my Amazon WorkSpaces resources?;Yes, you can assign tags to existing Amazon WorkSpaces resources including WorkSpaces, directories registered with WorkSpaces, images, custom bundles, and IP Access Control Groups. You can also assign tags during the creation of new Amazon WorkSpaces and new IP Access Control Groups. You can assign up to 50 tags (key/value pairs) to each Amazon WorkSpaces resource using the AWS Management Console, the AWS Command Line Interface, or the Amazon WorkSpaces API. To learn more about assigning tags to your Amazon WorkSpaces resources, follow the steps listed on this web page: Tag WorkSpaces Resources. /workspaces/faqs/;Can I control whether my users can access Amazon WorkSpaces web access?;Yes. You can use the AWS Management Console to control whether Amazon WorkSpaces in your directory can be accessed using web access, by visiting the directory details page. Note: this setting can only be applied to all Amazon WorkSpaces in a directory, not at an individual Amazon WorkSpace level. /workspaces/faqs/;What is the difference between restarting and rebuilding a WorkSpace?;A restart is just the same as a regular operating system (OS) reboot. A rebuild will retain the user volume on the WorkSpace but will return the WorkSpace to its original state (any changes made to the system drive will not be retained). /workspaces/faqs/;What is the difference between WorkSpaces Rebuild and Restore?;A rebuild will retain the user volume on the WorkSpace but will return the WorkSpace to its original state (any changes made to the system drive will not be retained). A restore will retain both the root and user volumes on the WorkSpace but will return the WorkSpace to the last healthy state as detected by the service. /workspaces/faqs/;How do I remove an Amazon WorkSpace I no longer require?;To remove a WorkSpace you no longer require, you can "delete" the WorkSpace. 
This will remove the underlying instance supporting the WorkSpace, and the WorkSpace will no longer exist. Deleting a WorkSpace will also remove any data stored on the volumes attached to the WorkSpace, so please confirm you have saved any data you must keep prior to deleting a WorkSpace. /workspaces/faqs/;Can I provide more than one Amazon WorkSpace per user?;No. You can currently only provide one WorkSpace for each user. /workspaces/faqs/;How many Amazon WorkSpaces can I launch?;You can launch as many Amazon WorkSpaces as you need. Amazon WorkSpaces sets default limits, but you can request an increase in these limits here. To see the default limits for Amazon WorkSpaces, please visit our documentation. /workspaces/faqs/;What is the network bandwidth that I need to use my Amazon WorkSpace?;The bandwidth needed to use your WorkSpace depends on what you're doing on your WorkSpace. For general office productivity use, we recommend a download bandwidth of between 300 Kbps and 1 Mbps. For graphics-intensive work, we recommend a download bandwidth of 3 Mbps. /workspaces/faqs/;What is the maximum network latency recommended while accessing a WorkSpace?;For PCoIP, the maximum round-trip latency recommendation is 250 ms, but the best user experience will be achieved at less than 100 ms. When the RTT exceeds 375 ms, the WorkSpaces client connection is terminated. For WorkSpaces Streaming Protocol (WSP), the best user experience will be achieved with round-trip latency below 250 ms. If the RTT is between 250 ms and 400 ms, the user can access the WorkSpace, but performance is degraded. /workspaces/faqs/;Is there a recommended power plan or power settings for my WorkSpaces?;"Yes. For WorkSpaces running Windows, we recommend selecting the ""High Performance"" power plan in Windows. For WorkSpaces running Linux, you should select a power plan that optimizes for performance." /workspaces/faqs/;Does WorkSpaces need any Quality of Service configurations to be updated on my network?;If you wish to implement Quality of Service on your network for WorkSpaces traffic, you should prioritize the WorkSpaces interactive video stream, which comprises real-time traffic on UDP port 4172 for PCoIP and UDP port 4195 for WSP. If possible, this traffic should be prioritized just after VoIP to provide the best user experience. /workspaces/faqs/;Is MFA on Amazon WorkSpaces available in my region?;Support for MFA is available in all AWS Regions where Amazon WorkSpaces is offered. /workspaces/faqs/;What are the prerequisites for setting up a PCoIP Zero Client?;Zero Clients should be updated to firmware version 4.6.0 (or newer). The WorkSpace will need to be using the PCoIP protocol; the WSP protocol does not support PCoIP Zero Clients. You will need to run the PCoIP Connection Manager to enable the clients to successfully connect to Amazon WorkSpaces. Please consult the Amazon WorkSpaces documentation for a step-by-step guide on how to properly set up the PCoIP Connection Manager, and for help on how to find and install the necessary firmware required for your Zero Clients. /workspaces/faqs/;How do I get support with Amazon WorkSpaces?;You can get help from AWS Support, and you can also post in the Amazon WorkSpaces Forum. /workspaces/faqs/;How does billing work for Amazon WorkSpaces?;You can pay for your Amazon WorkSpaces either by the hour, or by the month. You only pay for the WorkSpaces you launch, and there are no upfront fees and no term commitments. 
The fees for using Amazon WorkSpaces include use of both the infrastructure (compute, storage, and bandwidth for streaming the desktop experience to the user) and the software applications listed in the bundle. /workspaces/faqs/;How much does an Amazon WorkSpace cost?;Please see our pricing page for the latest information. /workspaces/faqs/;Can I pay for my Amazon WorkSpaces by the hour?;Yes, you can pay for your Amazon WorkSpaces by the hour. Hourly pricing is available for all WorkSpaces bundles, and in all AWS regions where Amazon WorkSpaces is offered. /workspaces/faqs/;How does hourly pricing work for Amazon WorkSpaces?;Hourly pricing has two components: an hourly usage fee, and a low monthly fee for fixed infrastructure costs. Hourly usage fees are incurred only while your Amazon WorkSpaces are actively being used, or undergoing routine maintenance. When your Amazon WorkSpaces are not being used, they will automatically stop after a specified period of inactivity, and hourly metering is suspended. When your Amazon WorkSpaces resume, hourly charges begin to accrue again. /workspaces/faqs/;How do I get started with hourly billing for my Amazon WorkSpaces?;To launch an Amazon WorkSpace to be billed hourly, simply select a user, choose an Amazon WorkSpaces bundle (a configuration of compute resources and storage space), and specify the AutoStop running mode. When your Amazon WorkSpace is created, it will be billed hourly. /workspaces/faqs/;What is the difference between monthly pricing and hourly pricing for Amazon WorkSpaces?;With monthly billing, you pay a fixed monthly fee for unlimited usage and instant access to a running Amazon WorkSpace at all times. Hourly pricing allows you to pay for your Amazon WorkSpaces by the hour and save money on your AWS bill when your users only need part-time access to their Amazon WorkSpaces. When your Amazon WorkSpaces being billed hourly are not being used, they automatically stop after a specified period of inactivity, and hourly usage metering is suspended. /workspaces/faqs/;How do I select hourly billing or monthly billing for my Amazon WorkSpaces?;Amazon WorkSpaces operates in two running modes – AutoStop and AlwaysOn. The AlwaysOn running mode is used when paying a fixed monthly fee for unlimited usage of your Amazon WorkSpaces. This is best when your users need high availability and instant access to their desktops, especially when many users need to log into WorkSpaces around the same time. The AutoStop running mode allows you to pay for your Amazon WorkSpaces by the hour. This running mode is best when your users can wait for around 2 minutes to start streaming desktops that have sporadic use. Please consult your account manager for more information about login concurrency and running modes. You can easily choose between monthly and hourly billing by selecting the running mode when you launch Amazon WorkSpaces through the AWS Management Console, the Amazon WorkSpaces APIs, or the Amazon WorkSpaces Command Line Interface. You can also switch between running modes for your Amazon WorkSpaces at any time. /workspaces/faqs/;When do I incur charges for my Amazon WorkSpace when paying by the hour?;Hourly usage fees start accruing as soon as your Amazon WorkSpace is running. Your Amazon WorkSpace may resume in response to a login request from a user, or to perform routine maintenance. 
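The running-mode choice described above can also be changed programmatically. Below is a minimal sketch (assuming Python with boto3; the WorkSpace ID is hypothetical) of switching a WorkSpace from AutoStop (hourly billing) to AlwaysOn (monthly billing) with ModifyWorkspaceProperties; switching back later uses the same call with RunningMode set to AUTO_STOP:

```python
import boto3

workspaces = boto3.client("workspaces")

# Hypothetical WorkSpace ID. Changing RunningMode to ALWAYS_ON moves the
# WorkSpace to monthly billing; AUTO_STOP moves it back to hourly billing.
workspaces.modify_workspace_properties(
    WorkspaceId="ws-xxxxxxxxx",
    WorkspaceProperties={"RunningMode": "ALWAYS_ON"},
)
```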
/workspaces/faqs/;When do I stop incurring charges for my Amazon WorkSpaces when paying by the hour?;Hourly usage charges are suspended when your Amazon WorkSpaces stop. AutoStop automatically stops your WorkSpaces a specified period of time after users disconnect, or when scheduled maintenance is completed. The specified time period is configurable and is set to 60 minutes by default. Note that partial hours are billed as a full hour, and the monthly portion of hourly pricing does not suspend when your Amazon WorkSpaces stop. /workspaces/faqs/;Can I force hourly charges to suspend sooner?;You can manually stop Amazon WorkSpaces from the AWS Management Console, or by using the Amazon WorkSpaces APIs. To stop the monthly fee associated with your hourly Amazon WorkSpaces, you need to remove the Amazon WorkSpaces from your account (note: this also deletes all data stored in those Amazon WorkSpaces). /workspaces/faqs/;Can I switch between hourly and monthly billing?;Yes, you can switch from hourly to monthly billing for your Amazon WorkSpaces at any time by switching the running mode to AlwaysOn in the AWS Management Console, or through the Amazon WorkSpaces APIs. When you switch, billing immediately changes from hourly to monthly, and you are charged a prorated amount at the monthly AlwaysOn rate for the remainder of the month, along with the base monthly fee and any AutoStop hourly usage fees that have already been billed for the month. Your Amazon WorkSpaces will continue to be charged monthly unless you switch the running mode back to AutoStop. /workspaces/faqs/;If I don't use my Amazon WorkSpace for the full month, are the fees prorated?;If you're paying for your Amazon WorkSpaces monthly, your Amazon WorkSpaces are charged for the full month's usage. If you're paying hourly (AutoStop running mode), you are charged for the hours during which your Amazon WorkSpaces are running or undergoing maintenance, plus a monthly fee for fixed infrastructure costs. In both cases, the monthly fee is prorated in the first month only. /workspaces/faqs/;Will I be charged the low monthly fee associated with hourly billing if I don't use my Amazon WorkSpaces in a given month?;Yes, you will be charged a small monthly fee for the Amazon WorkSpaces bundle you selected. If you've chosen an Amazon WorkSpaces Plus bundle, you will be charged for the software subscription as well. You can find the monthly fees for all Amazon WorkSpaces on the pricing page here. /workspaces/faqs/;How are the Plus software bundles charged when I pay hourly for my Amazon WorkSpaces?;Plus bundles are always charged monthly, even if you're paying for your Amazon WorkSpaces by the hour. If you selected a Plus bundle when you launched your WorkSpaces, you will incur the listed fee for the Plus software bundle even if you do not use those Amazon WorkSpaces in a particular month. /workspaces/faqs/;Will I be able to monitor how many hours my Amazon WorkSpaces have been running?;Yes, you will be able to monitor the total number of hours your Amazon WorkSpaces have been running in a given period of time through the Amazon CloudWatch "UserConnected" metric. /workspaces/faqs/;Does Amazon WorkSpaces pricing include bandwidth costs?;Amazon WorkSpaces pricing includes network traffic between the user's client and their WorkSpace. Web traffic from WorkSpaces (for example, accessing the public Internet, or downloading files) will be charged separately based on current AWS EC2 data transfer rates listed here. 
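As noted in the answer on suspending hourly charges sooner, an AutoStop WorkSpace can be stopped through the API instead of waiting for the inactivity timeout. A minimal sketch (assuming Python with boto3; the WorkSpace ID is hypothetical):

```python
import boto3

workspaces = boto3.client("workspaces")

# Stop an AutoStop WorkSpace immediately so hourly usage metering is
# suspended; the monthly infrastructure portion of the fee still applies.
response = workspaces.stop_workspaces(
    StopWorkspaceRequests=[{"WorkspaceId": "ws-xxxxxxxxx"}]
)
print(response["FailedRequests"])  # empty list if the stop request was accepted
```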
/workspaces/faqs/;How will I be charged for Amazon WorkSpaces that I launch that are based on a custom image?;There is no additional charge for Amazon WorkSpaces created from custom images. You will be charged the same as the underlying bundles on which the customized images are based. /workspaces/faqs/;Can I use custom images for Amazon WorkSpaces that are billed hourly?;Yes. You can launch Amazon WorkSpaces billed hourly from images that you create and upload. There is no additional charge for Amazon WorkSpaces launched from custom images. You will be charged the same as the underlying bundles on which the customized images are based. /workspaces/faqs/;Is there a charge to use Amazon WorkSpaces client applications?;The Amazon WorkSpaces client applications are provided at no additional cost, and you can install the clients on as many devices as you need to. You can access these here. /workspaces/faqs/;Is there an additional charge to access Amazon WorkSpaces using web access?;There is no additional charge to access Amazon WorkSpaces using web access. For Amazon WorkSpaces set to bill hourly, you will continue to be billed for the time you leave a browser tab open with an actively running Amazon WorkSpace. /workspaces/faqs/;Can I use tags to obtain usage and cost details for Amazon WorkSpaces on my AWS monthly billing report?;Yes. By setting tags to appear on your monthly Cost Allocation Report, your AWS monthly bill will also include those tags. You can then easily track costs according to your needs. To do this, first assign tags to your Amazon WorkSpaces by following the steps listed on this web page: Tagging WorkSpaces. Next, select the tag keys to include in your cost allocation report by following the steps listed on this web page: Setting Up Your Monthly Cost Allocation Report. /workspaces/faqs/;Are there any costs associated with tagging Amazon WorkSpaces?;There are no additional costs when using tags with your Amazon WorkSpaces. /workspaces/faqs/;What are the requirements for schools, universities, and public institutions to reduce their WorkSpaces licensing?;Schools, universities, and public institutions may qualify for reduced WorkSpaces licensing fees. Please reference the Microsoft Licensing Terms and Documents for qualification requirements. If you think you may qualify, please create a case with the AWS support center here. Select Regarding:, Service:, Category:, and enter the required information. We will review your information and work with you to reduce your fees and costs. /workspaces/faqs/;What do I need to provide to qualify as a school, university, or public institution?;You will need to provide AWS your institution's full legal name, principal office address, and public website URL. AWS will use this information to qualify you for the reduced user fees available to qualified educational institutions for your WorkSpaces. Please note: The use of Microsoft software is subject to Microsoft's terms. You are responsible for complying with Microsoft licensing. If you have questions about your licensing or rights to Microsoft software, please consult your legal team, Microsoft, or your Microsoft reseller. You agree that we may provide the information to Microsoft in order to apply educational pricing to your Amazon WorkSpaces usage. /workspaces/faqs/;Does qualification for Amazon WorkSpaces reduced user fees affect other AWS cloud services?;No, your user fees are specific to Amazon WorkSpaces, and do not affect any other AWS cloud services or licenses you have. 
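Following the cost-allocation tagging answer above, this is a minimal sketch (assuming Python with boto3; the resource ID and tag values are hypothetical) of assigning tags to a WorkSpace with CreateTags so that, once activated as cost allocation tags, they appear on the monthly billing report:

```python
import boto3

workspaces = boto3.client("workspaces")

# Tag a WorkSpace (hypothetical ID and values). Up to 50 key/value pairs
# can be assigned per WorkSpaces resource.
workspaces.create_tags(
    ResourceId="ws-xxxxxxxxx",
    Tags=[
        {"Key": "CostCenter", "Value": "1234"},
        {"Key": "Department", "Value": "Engineering"},
    ],
)
```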
/workspaces/faqs/;Is there a charge for streaming data between my WorkSpaces and End Users' devices?;The charges for the Service include the cost of streaming data between your WorkSpaces and End Users' devices unless you stream via VPN, in which case you will be charged VPN data transfer rates in addition to any applicable Internet data transfer charges. Other WorkSpace data transfer will be charged using Amazon EC2 data transfer pricing. /workspaces/faqs/;Am I eligible to take advantage of the Amazon WorkSpaces Free Tier offer?;The Amazon WorkSpaces Free Tier offer is available to new or existing AWS customers that have not previously used WorkSpaces. Customers must be new to Amazon WorkSpaces and have an account that is not under an AWS Partner account. /workspaces/faqs/;What Amazon WorkSpaces bundles are available as part of the Free Tier?;The Amazon WorkSpaces Free Tier allows you to provision two Standard bundle WorkSpaces with 80 GB Root and 50 GB User volumes. The Standard bundle WorkSpace offers a cloud desktop with 2 vCPUs, 4 GB of memory, 80 GB Root and 50 GB User volumes of SSD-based storage, and you can choose between Amazon Linux WorkSpaces and Amazon WorkSpaces with a Windows 10 desktop experience powered by Windows Server. As with all WorkSpaces, your WorkSpace comes with pre-installed applications, and access to Amazon WorkDocs with 50 GB of included storage. Limited-time promotions might be offered via the Free Tier; please refer to the WorkSpaces pricing page for the latest information. /workspaces/faqs/;What is included with the Amazon WorkSpaces Free Tier?;The WorkSpaces Free Tier includes two Standard bundle WorkSpaces with 80 GB Root and 50 GB User volumes, for 40 hours of combined use per month, for the first three billing cycles. As with all bundles, your WorkSpace comes with pre-installed applications, and access to Amazon WorkDocs with 50 GB of included storage. Limited-time promotions might be offered via the Free Tier; please refer to the WorkSpaces pricing page for the latest information. /workspaces/faqs/;Can I use any other Amazon WorkSpaces bundles as part of the Free Tier?;The Amazon WorkSpaces Free Tier includes the Standard bundle only. Limited-time promotions might be offered via the Free Tier; please refer to the WorkSpaces pricing page for the latest information. /workspaces/faqs/;What is the duration of the Amazon WorkSpaces Free Tier?;The Free Tier offer starts when you launch your first Amazon WorkSpace, and expires after three billing cycles. For example, if you launched your first WorkSpace on the 15th of the month, the Free Tier offer extends to the end of the month after next. Limited-time promotions might be offered via the Free Tier; please refer to the WorkSpaces pricing page for the latest information. /workspaces/faqs/;If I use less than 40 hours in my first month of Free Tier use, do the remaining hours roll over to the next month?;The Amazon WorkSpaces Free Tier allows you to use a combined total of 40 hours per month. Unused hours expire when the new calendar month starts. Limited-time promotions might be offered via the Free Tier; please refer to the WorkSpaces pricing page for the latest information. /workspaces/faqs/;What happens if I use my WorkSpaces for more than 40 hours in a calendar month during the Free Tier period?;In the event you exceed 40 hours of use in a month during the Free Tier period, you are billed at the current hourly rate for Amazon WorkSpaces. 
Limited-time promotions might be offered via the Free Tier; please refer to the WorkSpaces pricing page for the latest information. /workspaces/faqs/;What happens if I convert my Amazon WorkSpaces from AutoStop (hourly billing) to AlwaysOn (monthly billing) before my Free Tier period expires?;To qualify for the Free Tier, your Amazon WorkSpaces need to run in the AutoStop running mode. You can change the running mode of your WorkSpaces to AlwaysOn, but this action converts your WorkSpaces to monthly billing, and your Free Tier period will end. /workspaces/faqs/;Hourly billing for Amazon WorkSpaces includes a fee for hours used, and a monthly infrastructure cost. Is the monthly infrastructure cost waived during the Amazon WorkSpaces Free Tier?;The monthly infrastructure fee for Amazon WorkSpaces is waived for Free Tier use. /workspaces/faqs/;What happens when my Amazon WorkSpaces Free Tier period ends?;When your Free Tier period ends, your Amazon WorkSpaces will be billed at the current hourly rate. In addition, the monthly infrastructure fee will start to apply. For current rates, see Amazon WorkSpaces Pricing. /workspaces/faqs/;How can I track my Amazon WorkSpaces Free Tier usage?;To track your Amazon WorkSpaces usage, go to the My Account page in the AWS Management Console and see your current and past activity by service and Region. You can also download usage reports. For more information, see Understanding Your Usage with Billing Reports. /workspaces/faqs/;Can I use an HTTPS proxy to connect to my Amazon WorkSpaces?;Yes, you can configure a WorkSpaces client app to use an HTTPS proxy. Please see our documentation for more information. /workspaces/faqs/;Can I connect Amazon WorkSpaces to my VPC?;Yes. The first time you connect to the WorkSpaces Management Console, you can choose an easy 'getting started' link that will create a new VPC and two associated subnets for you, as well as an Internet Gateway and a directory to contain your users. If you choose to access the console directly, you can choose which of your VPCs your WorkSpaces will connect to. If you have a VPC with a VPN connection back to your on-premises network, then your WorkSpaces will be able to communicate with your on-premises network (you retain the usual control you have over network access within your VPC using all of the normal configuration options such as security groups, network ACLs, and routing tables). /workspaces/faqs/;Can I connect to my existing Active Directory with my Amazon WorkSpaces?;Yes. You can use AD Connector or AWS Microsoft AD to integrate with your existing on-premises Active Directory. /workspaces/faqs/;Will my Amazon WorkSpaces be able to connect to the Internet to browse websites and download applications?;Yes. You have full control over how your Amazon WorkSpaces connect to the Internet based on regular VPC configuration. Depending on your requirements, you can deploy a NAT instance for Internet access, assign an Elastic IP address (EIP) to the Elastic Network Interface (ENI) associated with the WorkSpace, or have your WorkSpaces access the Internet by utilizing the connection back to your on-premises network. /workspaces/faqs/;Can I use IPv6 addresses for my Amazon WorkSpaces bundles?;Yes. You can use IPv6 addresses for Value, Standard, Performance, Power, PowerPro, GraphicsPro, Graphics.g4dn, and GraphicsPro.g4dn bundles. At this time, IPv6 addresses are not supported in Graphics bundles. 
/workspaces/faqs/;Can my Amazon WorkSpaces connect to my applications that are running in Amazon EC2 such as a file server?;Yes. Your WorkSpaces can connect to applications such as a file server running in Amazon EC2 (both "Classic" and VPC networking environments). All you need to do is ensure appropriate route table entries, security groups, and network ACLs are configured so that the WorkSpaces can reach the EC2 resources you would like them to be able to connect to. /workspaces/faqs/;What are the pre-requisites for using my digital certificates on Amazon WorkSpaces?;To use your certificates to manage which client devices can access Amazon WorkSpaces, you need to distribute your client certificates to the devices you want to trust using your preferred solution, such as Microsoft System Center Configuration Manager (SCCM) or a mobile device management (MDM) software solution. Your root certificates are imported into the WorkSpaces management console. For more information, please see Restrict WorkSpaces Access to Trusted Devices. /workspaces/faqs/;What are the pre-requisites for enabling MFA on Amazon WorkSpaces?;To enable MFA on WorkSpaces, you will need to configure AD Connector and have one or more on-premises RADIUS servers. Your on-premises network must allow inbound traffic over the default RADIUS server port (1812) from the AD Connector server(s). Additionally, you must ensure that usernames match between Active Directory and your RADIUS server. To learn more, visit our documentation. /workspaces/faqs/;Do I need to set up a directory to use the Amazon WorkSpaces service?;Each user you provision a WorkSpace for needs to exist in a directory, but you do not have to provision a directory yourself. You can have the WorkSpaces service create and manage a directory for you, with users in that directory created when you provision a WorkSpace. Alternatively, you can integrate WorkSpaces with an existing, on-premises Active Directory so that users can continue to use their existing credentials, giving them seamless access to existing applications. /workspaces/faqs/;If I use a directory that the Amazon WorkSpaces service creates for me, can I configure or customize it?;Yes. Please see our documentation for more details. /workspaces/faqs/;Can I integrate Amazon WorkSpaces with my existing on-premises Active Directory?;Yes. You can use AD Connector or AWS Microsoft AD to integrate with your existing on-premises Active Directory. /workspaces/faqs/;How do I integrate Amazon WorkSpaces with my on-premises Microsoft Active Directory?;There are two ways you can integrate Amazon WorkSpaces with your on-premises Microsoft Active Directory (AD): you can set up an interforest trust relationship with your AWS Microsoft AD domain controller, or you can use AD Connector to proxy AD authentication requests. /workspaces/faqs/;There are two options for integrating Amazon WorkSpaces with my on-premises Microsoft Active Directory. Which one should I use?;You can integrate Amazon WorkSpaces with your on-premises Microsoft Active Directory (AD) either by setting up an interforest trust relationship with your AWS Microsoft AD domain controller, or by using AD Connector to proxy AD authentication requests. /workspaces/faqs/;Can I use the Amazon WorkSpaces APIs to create new WorkSpaces for users across domains when I have an interforest trust relationship established with AWS Microsoft AD?;Yes. 
When using the Amazon WorkSpaces API to launch WorkSpaces, you will need to specify the domain name as part of the username, in this format: "NETBIOS\username" or "corp.example.com\username". For more information, please visit this page. /workspaces/faqs/;Can I apply the same Group Policy object settings from my on-premises Microsoft Active Directory to Amazon WorkSpaces?;Yes. If you're using an interforest trust relationship between your on-premises Microsoft AD and your AWS Microsoft AD domain controller, you will need to ensure that your Group Policy object (GPO) settings are replicated across domains before they can be applied to Amazon WorkSpaces. If you are using AD Connector, your GPO settings will be applied to your WorkSpaces much like any other computer in your domain. /workspaces/faqs/;Can I apply Active Directory policies to my Amazon WorkSpaces using the directory that the WorkSpaces service creates for me?;Yes. Please see our documentation for more details. /workspaces/faqs/;What happens to my directory when I remove all of my Amazon WorkSpaces?;You may keep your AWS directory in the cloud and use it to domain-join EC2 instances or provide directory users access to the AWS Management Console. You may also delete your directory. /workspaces/faqs/;Which AWS Directory Services support the use of PCoIP Zero Clients?;PCoIP Zero Clients can be used with the AD Connector and Simple AD directory services from AWS. Currently, Zero Clients cannot be used with the AWS Directory Service for Microsoft Active Directory. /workspaces/faqs/;What does Amazon CloudWatch monitor for Amazon WorkSpaces?;Amazon WorkSpaces is integrated with both CloudWatch Metrics and CloudWatch Events. /workspaces/faqs/;Will I be able to monitor how many hours my Amazon WorkSpaces have been running?;Yes, you will be able to monitor the total number of hours your Amazon WorkSpaces have been running in a given period of time through the Amazon CloudWatch "UserConnected" metric. /workspaces/faqs/;In what regions can I use Amazon WorkSpaces with CloudWatch Metrics?;CloudWatch Metrics are available with Amazon WorkSpaces in all AWS regions where WorkSpaces is available. /workspaces/faqs/;What does CloudWatch Metrics cost?;There is no additional cost for using CloudWatch Metrics with WorkSpaces via the CloudWatch console. There may be additional charges for setting up CloudWatch Alarms and retrieving CloudWatch Metrics via APIs. Please see CloudWatch pricing for more information. /workspaces/faqs/;How do I get started with CloudWatch Metrics for my Amazon WorkSpaces?;CloudWatch Metrics are enabled by default for all your WorkSpaces. Visit the AWS Management Console to review the metrics and set up alarms. /workspaces/faqs/;What metrics are supported for the Amazon WorkSpaces client application and PCoIP Zero Clients?;Please see the documentation for more information on Amazon CloudWatch metrics with Amazon WorkSpaces. /workspaces/faqs/;What metrics are supported for Amazon WorkSpaces web access usage?;The following metrics are currently supported for reporting on Amazon WorkSpaces web access usage: • Available • Unhealthy • UserConnected • Maintenance /workspaces/faqs/;What CloudWatch Events are generated by Amazon WorkSpaces?;Successful WorkSpace logins. Amazon WorkSpaces sends access event information to CloudWatch Events when a user successfully logs in to a WorkSpace from any WorkSpaces client application. 
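Building on the CloudWatch monitoring answers above, the sketch below (assuming Python with boto3; the WorkSpace ID is hypothetical, and how you aggregate the metric may differ in your environment) queries the "UserConnected" metric in the AWS/WorkSpaces namespace to approximate connected hours for one WorkSpace over the past week:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Each UserConnected datapoint is 1 while a user is connected, so summing
# 5-minute datapoints approximates connected time in 5-minute increments.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/WorkSpaces",
    MetricName="UserConnected",
    Dimensions=[{"Name": "WorkspaceId", "Value": "ws-xxxxxxxxx"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
connected_periods = sum(point["Sum"] for point in stats["Datapoints"])
print("Approximate connected hours:", connected_periods * 5 / 60)
```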
/workspaces/faqs/;How can I utilize CloudWatch Events with WorkSpaces?;You can use CloudWatch Events to view, search, download, archive, analyze, and respond based on rules that you configure. You can either use the AWS Console under CloudWatch to view and interact with CloudWatch Events or use services such as Lambda, ElasticSearch, Splunk and other partner solutions using Kinesis Streams or Firehose to take actions based on your event data. For storage, CloudWatch Events recommends using Kinesis to push data to S3. For more information on how to use CloudWatch Events, see the Amazon CloudWatch Events User Guide. /workspaces/faqs/;What information is included in WorkSpaces Access Events?;Events are represented as JSON objects which include WAN IP address, WorkSpaces ID, Directory ID, Action Type (ex. Login), OS platform, Timestamp and a Success/Failure indicator for each successful login to WorkSpaces. See our documentation for more details here. /workspaces/faqs/;What does CloudWatch Events cost?;There is no additional cost for using CloudWatch Events with Amazon WorkSpaces. You will be charged for any other services you use that take action based on CloudWatch Events, such as Amazon ElasticSearch, and AWS Lambda. This also includes other CloudWatch services such as CloudWatch Metrics, CloudWatch Logs, and CloudWatch Alarms if your usage surpasses the CloudWatch Free Tier limits. All of these services are integrated with and can be triggered from CloudWatch Events. /workspaces/faqs/;Can I print from my Amazon WorkSpace?;Yes, Amazon WorkSpaces with Windows support local printers, network printers, and cloud printing services. Amazon WorkSpaces with Amazon Linux support network printers, and cloud printing services. /workspaces/faqs/;How do I enable printer auto-redirection for my Amazon WorkSpace?;By default, local printer auto-redirection is disabled. You can use the Group Policy settings to enable this feature. This will ensure that your local printer is set as the default every time you connect to your WorkSpace. /workspaces/faqs/;How do I print to my local printer?;If you have a local printer configured, it will show up in your WorkSpaces printer menu the next time you connect to your WorkSpace. If not, you will need to configure a local printer outside of your WorkSpace. Once this is done, select your local printer from the print menu, and select print. /workspaces/faqs/;Why can’t I see my local printer from the printing menu?;Most printers are already supported by Amazon WorkSpaces. If your printer is not recognized, you may need to install the appropriate device driver on your WorkSpace. /workspaces/faqs/;How do I print to a network printer?;Any printer which is on the same network as your Amazon WorkSpace and is supported by Windows Server 2016/2019 can be added as a network printer. Once a network printer is added, it can be selected for printing from within an application. /workspaces/faqs/;Can I use my Amazon WorkSpace with a cloud printing service?;You can use cloud printing services with your WorkSpace including, but not limited to, Cortado ThinPrint®. /workspaces/faqs/;Can I print from my tablet or Chromebook?;The Amazon WorkSpaces clients for tablets and Android-compatible Chrome OS devices support cloud printing services including, but not limited to, Cortado ThinPrint®. Local and network printing are not currently supported. 
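As an illustration of the CloudWatch Events integration described above, here is a minimal sketch (assuming Python with boto3; the rule name and Lambda function ARN are hypothetical, and the function would still need a resource-based policy allowing events.amazonaws.com to invoke it) that routes WorkSpaces login events to a target:

```python
import json

import boto3

events = boto3.client("events")

# Match WorkSpaces access (login) events emitted by the service.
events.put_rule(
    Name="workspaces-access-logins",
    EventPattern=json.dumps({
        "source": ["aws.workspaces"],
        "detail-type": ["WorkSpaces Access"],
    }),
)

# Send matching events to a hypothetical Lambda function for processing.
events.put_targets(
    Rule="workspaces-access-logins",
    Targets=[{
        "Id": "workspaces-login-handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:HandleWorkSpacesLogin",
    }],
)
```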
/workspaces/faqs/;What self-service management capabilities are available for Amazon WorkSpaces?;You can choose to let users accomplish typical management tasks for their own WorkSpace, including restart, rebuild, change compute type, and change disk size. You can also let users switch from monthly to hourly billing (and back). You can choose to enable specific self-service management capabilities that suit your needs directly in the WorkSpaces Admin Console. /workspaces/faqs/;How do I get started with self-service management capabilities for my WorkSpaces users?;Self-service management capabilities are enabled by default when you register a directory with WorkSpaces. You can choose to not enable them when you register a directory. /workspaces/faqs/;How do end users access self-service management capabilities?;Self-service management capabilities are available to users through the WorkSpaces client on Windows, Mac, Android, and Chrome OS devices supporting Android apps. /workspaces/faqs/;Do I need to log into WorkSpaces to use self-service management capabilities?;Yes, you must authenticate to use any self-service management capabilities. /workspaces/faqs/;Can I continue to use my WorkSpace while a self-service management action is being performed?;You can continue to use your WorkSpace while disk size or running mode is being changed. Restarting, rebuilding, restoring, and changing compute type require disconnecting from your WorkSpaces session. /workspaces/faqs/;How much does it cost to use self-service management capabilities?;Self-service management capabilities are available at no additional cost. You can enable self-service management for tasks such as changing the WorkSpace bundle type, or increasing the volume size. When end users perform these tasks, the billing rate for those WorkSpaces may change. /workspaces/faqs/;How do I get high availability with Amazon WorkSpaces?;To reduce downtime from maintenance and disruptive events, deploy WorkSpaces in multiple Regions, making sure that regional WorkSpaces maintenance schedules do not overlap. Use cross-Region redirection, so that you can direct users to WorkSpaces Regions not under maintenance. For more information on WorkSpaces cross-Region redirection, please refer to the Amazon WorkSpaces documentation. /workspaces/faqs/;How do I plan for disaster recovery for my WorkSpaces?;Use WorkSpaces Multi-Region Resilience with cross-Region redirection to deploy redundant virtual desktop infrastructure in a secondary WorkSpaces Region and design a cross-Region failover strategy in preparation for disruptive events. Leveraging Domain Name System (DNS) failover and health-check capabilities, WorkSpaces cross-Region redirection points your users to log into WorkSpaces in a disaster recovery Region when the primary WorkSpaces Region is not reachable. To learn more, please refer to the Amazon WorkSpaces documentation on WorkSpaces Multi-Region Resilience and cross-Region redirection. /workspaces/faqs/;How can I create standby WorkSpaces in a secondary WorkSpaces Region?;Amazon WorkSpaces Multi-Region Resilience leverages the existing cross-Region redirection capabilities and streamlines the process of redirecting users to a secondary Region when their primary WorkSpaces Region is unreachable due to disruptive events. It does this without requiring users to switch the registration code when logging in to their standby WorkSpaces. You can use fully qualified domain names (FQDNs) as Amazon WorkSpaces registration codes for your users. 
When an outage occurs in your primary Region, you can redirect users to the standby WorkSpaces in the secondary Region based on your Domain Name System (DNS) failover policies for the FQDN. /workspaces/faqs/;How do I define my WorkSpaces' primary Regions and backup Regions with cross-Region redirection?;You can define the Region priority by configuring routing policies for your FQDN on DNS. For more information, please refer to the Amazon WorkSpaces documentation. /workspaces/faqs/;Will my old registration codes still work after I enable cross-Region redirection?;Yes. Old registration codes will keep working. Users can register with either old registration codes or fully qualified domain names (FQDNs). Cross-Region redirection only works when end users register with FQDNs. /workspaces/faqs/;Can I use internal domain names for cross-Region redirection?;Yes. WorkSpaces cross-Region redirection works with both public domain names and domain names in private DNS zones. If your end users use private FQDNs from the public internet, the WorkSpaces clients will return errors reporting invalid registration codes. /workspaces/faqs/;What AWS Regions have the WorkSpaces cross-Region redirection support?;WorkSpaces cross-Region redirection works in all AWS Regions where Amazon WorkSpaces is available except the AWS GovCloud and China Regions. /workspaces/faqs/;What client types support WorkSpaces cross-Region redirection?;Windows, macOS, and Linux WorkSpaces clients support cross-Region redirection. /workspaces/faqs/;What is a remote display protocol and why is it important for WorkSpaces?;You must use a WorkSpaces client that supports WSP to connect to a WorkSpaces host running the latest WSP host agent. Use the chart below to identify which clients and host agents support WSP and the version requirements. /workspaces/faqs/;How do I find my WSP host agent version?;To check the client version, go to "About My Workspaces" after signing into the native client. /workspaces/faqs/;How do I find my client version?;You need to reboot your WorkSpaces instance in order to update the WSP host agent. Also, download and install the latest client. You must update both the host agent and the client to get the latest performance improvements and features. WSP will fall back to using an older version if either the client or the host agent is not updated. /workspaces/faqs/;If I already have a WSP WorkSpace, how do I update it?;We strive to offer our customers the flexibility to meet a wide variety of technical and business requirements. /workspaces/faqs/;Why are there 2 protocols available when I choose my WorkSpaces bundle?;Yes. When you provision a new WorkSpaces user in the directory, you can enable either WSP or PCoIP, as long as the WorkSpaces user is not already listed in that directory. /workspaces/faqs/;Can I include both PCoIP and WSP users in the same directory?;Yes. 
One streaming protocol is selected when a WorkSpace is provisioned for a given user. To switch to a different streaming protocol after a WorkSpace has been provisioned, you can use the WorkSpaces migrate API to update the WorkSpace's protocol. /workspaces/faqs/;Can I switch between the PCoIP and WSP protocols on WorkSpaces?;Yes, as long as separate directories are created for each user. A single user cannot run both PCoIP and WSP on WorkSpaces from the same directory. However, a single directory can include a mix of both PCoIP and WSP-based WorkSpaces users. /workspaces/faqs/;Can the same user run both a PCoIP and WSP on WorkSpaces?;If you encounter any issues or want to provide feedback about WSP, contact AWS Support. /workspaces/faqs/;Can I use Microsoft Office on Amazon WorkSpaces?;Amazon WorkSpaces offers license-included Microsoft Office, pre-installed on the WorkSpace, via the Microsoft Office Professional Plus bundle. /workspaces/faqs/;Can I purchase a perpetual Microsoft Office license from AWS for my WorkSpaces?;Microsoft licensing does not permit AWS to re-sell or offer perpetual licenses in a hosted environment like WorkSpaces. AWS leverages the Microsoft Services Provider License Agreement (SPLA), which allows AWS to license eligible Microsoft products such as Microsoft Office on a monthly basis. Under this offering, licensing of Microsoft Office is included in the monthly WorkSpaces billing statement. /workspaces/faqs/;What versions of Microsoft Office are available on Amazon WorkSpaces?;Up until October 14, 2025, Microsoft is providing extended support for Microsoft Office 2016/2019. Until the extended support expires, AWS plans to continue to offer these software packages, which are also qualified to receive security upgrades from Microsoft. /workspaces/faqs/;What will happen after the Extended End Date expires?;After the Microsoft Office 2016/2019 Extended End Date expires, the WorkSpaces public bundles with Office 2016/2019 will also reach end of life, and you won't be able to launch new WorkSpaces using public bundles. However, your existing custom bundles will continue to work as-is. You will also have an option to upgrade your existing custom bundle to the latest versions of Microsoft Office (for example, Microsoft Office 2021). /workspaces/faqs/;What will happen to my WorkSpaces running Microsoft Office 2016/2019 after the Extended End Date expires?;You can continue to use your WorkSpaces with Office 2016/2019, but there will be no support or security updates available for the Office packages. /workspaces/faqs/;How can I uninstall M365 from my Amazon WorkSpaces and use the Office Professional Plus bundle?;You can migrate your existing WorkSpaces to a bundle that has Office 2016 or 2019 pre-installed. For BYOL WorkSpaces, create a new BYOL bundle with an Office 2016/2019 subscription. During the BYOL image ingestion process, you have the option to subscribe to Microsoft Office Professional 2016 (32-bit) or 2019 (64-bit) through AWS. For non-BYOL WorkSpaces, you can migrate your existing WorkSpaces to the "Plus applications bundle for Windows Server 2016/2019 Powered WorkSpaces". After the migration, your end users will retain their user volumes, while the root volume will be recreated with Office 2016/2019 installed. /appstream2/faqs/;What is Amazon AppStream 2.0?;Amazon AppStream 2.0 is optimized for application streaming, SaaS conversion, and virtual desktop use cases. 
/appstream2/faqs/;What's the difference between the original Amazon AppStream and Amazon AppStream 2.0?;"Amazon AppStream 2.0 is the next-generation desktop application streaming service from AWS. Amazon AppStream was an SDK-based service that customers could use to set up their own streaming service with DIY engineering. AppStream 2.0 provides a fully managed streaming service with no DIY effort. AppStream 2.0 offers a greater range of instance types; streams desktop applications to HTML5-compatible web browsers with no plugins required; provides dual-monitor support on web browsers and 4-monitor, 4K monitor, and USB peripheral support through the AppStream 2.0 client for Windows. In addition, AppStream 2.0 simplifies application lifecycle management and lets your applications access services in your VPC." /appstream2/faqs/;Can I continue to use the original Amazon AppStream service?;No. You cannot use the original Amazon AppStream service. Amazon AppStream 2.0 offers a greater range of instance types, streams desktop applications with no rewrite, simplifies application lifecycle management, and allows your apps to access services in your VPC. /appstream2/faqs/;What are the benefits of streaming over rendering content locally?;Interactively streaming your application from the cloud provides several benefits: /appstream2/faqs/;Do some applications work better with Amazon AppStream 2.0 than others?;Many types of applications work well as streaming applications, including CAD, CAM, CAE, 3D modeling, simulation, games, video and photo-editing software, medical imaging, and life sciences applications. These applications benefit most from streaming because the application runs on the vast computational resources of AWS, yet your users can interact with the application using low-powered devices, with very little noticeable change in application performance. /appstream2/faqs/;Does Amazon AppStream 2.0 support microphones?;Yes. Amazon AppStream 2.0 supports most analog and USB microphones, including built-in microphones. /appstream2/faqs/;Does Amazon AppStream 2.0 support USB devices such as 3D mice?;Yes. Amazon AppStream 2.0 supports most USB devices such as 3D mice through the Windows Client. All USB devices are disabled by default. Administrators can enable USB devices for their users. /appstream2/faqs/;How do users enable audio input in an Amazon AppStream 2.0 streaming session?;Users enable audio input from the Amazon AppStream 2.0 toolbar by selecting the Settings icon and selecting Enable Microphone. /appstream2/faqs/;How can end users use their webcam from within an AppStream 2.0 session?; Users who are connected to a streaming session through the AppStream 2.0 client or a web browser can enable, disable, and select the webcam and microphone to use in their session from the AppStream 2.0 toolbar. /appstream2/faqs/;Can users use their webcam and microphone in an AppStream 2.0 streaming session?; The best AppStream 2.0 instance type depends on your video conferencing applications, performance requirements, and environment. We recommend that you test different instance types and evaluate how they perform in your environment with the video conferencing applications that you want to use. Doing so will help you choose the instance type that best suits your needs. For more information about available instance types, see Amazon AppStream 2.0 pricing. 
/appstream2/faqs/;What is the best AppStream 2.0 instance type to use for streaming video conferencing applications?; Google Chrome, Microsoft Edge, Firefox, and additional web browsers support audio input in Amazon AppStream 2.0 streaming sessions. Microsoft Internet Explorer 11 (IE11) does not support audio input, and the microphone option will not appear on the Amazon AppStream 2.0 toolbar in streaming sessions running in IE11. To use a local webcam within an AppStream 2.0 streaming session, connect from a Chromium-based web browser, including Google Chrome or Microsoft Edge. /appstream2/faqs/;What browsers support real-time audio-video (AV) in an Amazon AppStream 2.0 session?; A user needs to have applications set up by an administrator, a modern web browser that can support HTML5, a broadband internet connection with at least 2 Mbps capability, and outbound access to the internet via HTTPS (443). For web-based AppStream 2.0 streaming sessions, up to two monitors are supported. To use up to four monitors, 4K monitors, and USB peripherals such as 3D mice, users can download and use the AppStream 2.0 client for Windows. /appstream2/faqs/;What is the AppStream 2.0 Windows Client?;The AppStream 2.0 client for Windows is a native application that is designed for users who require additional functionality not available from web browsers during their AppStream 2.0 streaming sessions. The AppStream 2.0 client lets users use multiple monitors and USB peripherals such as 3D mice with their applications. The client also supports keyboard shortcuts, such as Alt + Tab, clipboard shortcuts, and function keys. The AppStream 2.0 client is supported on the following versions of Windows: Windows 7, Windows 8, Windows 8.1, and Windows 10. Both 32-bit and 64-bit versions of Windows are supported. /appstream2/faqs/;What are the system requirements for using the AppStream 2.0 Windows Client?;The minimum system requirements are 2 GB of RAM and 150 MB of disk space. /appstream2/faqs/;What monitor configurations are supported by the AppStream 2.0 Windows Client?;For browser-based streaming sessions, AppStream 2.0 supports the use of up to two monitors with a maximum display resolution of 2560x1440 pixels per monitor. The AppStream 2.0 client for Windows supports up to 4 monitors with a maximum display resolution of 2560x1440 pixels per monitor. For streaming sessions that are supported by the Graphics Design and Graphics Pro instance families, the AppStream 2.0 client also supports the use of up to 2 monitors with a maximum display resolution of 4096x2160 pixels per monitor. /appstream2/faqs/;How can I deploy the AppStream 2.0 Windows Client to my users?;Users can download and install the Windows Client. To use USB peripherals, users need local administrator rights to install the AppStream 2.0 USB driver. You can remotely install the Windows Client using remote deployment tools like Microsoft System Center Configuration Manager (SCCM). Learn more in our documentation. /appstream2/faqs/;Can users configure location and language settings for their applications?;Yes. Users can set the time zone, locale, and input method to be used in their streaming sessions to match their location and language preferences. /appstream2/faqs/;Can users copy and paste between their local device and their Amazon AppStream 2.0 streaming applications?;Yes.
Users who use the Windows Client or Google Chrome to access their streaming applications can copy and paste text between their local device and their streaming applications in the same way they copy and paste between applications on their local device, for example, using keyboard shortcuts. For other browsers, users can use the Amazon AppStream 2.0 web clipboard tool. /appstream2/faqs/;Can I provide my users a desktop experience?;Yes. AppStream 2.0 allows you to choose between an application or desktop stream view when you configure the fleet. The application view displays only the windows of the applications that are opened by users, while the desktop view displays the standard desktop experience that is provided by the operating system. /appstream2/faqs/;Can my Amazon AppStream 2.0 applications run offline?;No. Amazon AppStream 2.0 requires a sustained internet connection or network route to an AppStream 2.0 streaming VPC endpoint to access your applications. /appstream2/faqs/;What does Amazon AppStream 2.0 manage on my behalf?;Streaming resources: AppStream 2.0 launches and manages AWS resources to host your application, deploys your application on those resources, and scales your application to meet end-user demand. /appstream2/faqs/;Can I use tags to categorize AppStream 2.0 resources?;Yes. You can assign tags to manage and track the following Amazon AppStream 2.0 resources: applications, app blocks, image builders, images, fleets, and stacks. AWS enables you to assign metadata to your AWS resources in the form of tags. Tags let you categorize your AppStream 2.0 resources so you can easily identify their purpose and track costs accordingly. For example, you can use tags to identify all resources used by a particular department, project, application, vendor, or use case. Then, you can use AWS Cost Explorer to identify trends, pinpoint cost drivers, and detect anomalies in your account. /appstream2/faqs/;What resources can I create with AWS CloudFormation?;With CloudFormation, you can automate creating fleets, deploying stacks, adding and managing user pool users, launching image builders, and creating directory configurations alongside your other AWS resources. /appstream2/faqs/;Can I try sample applications?;Yes. Visit Try Sample Applications for a low-friction, setup-free trial experience of the Amazon AppStream 2.0 service. /appstream2/faqs/;What do I need to start using Try It Now?;You need an AWS account and a broadband Internet connection with at least 1 Mbps bandwidth to use Try It Now. You also need a browser capable of supporting HTML5. /appstream2/faqs/;Will I be charged for using Try It Now?;You won’t be charged any AWS fees for using Try It Now. However, you may incur other fees such as Internet or broadband charges to connect to the Try It Now experience. /appstream2/faqs/;What applications can I use with Try It Now?;Try It Now includes popular productivity, design, engineering, and software development applications running on Amazon AppStream 2.0 for you to try. To see the full list of available applications, go to the Try It Now catalog page after signing in with your AWS account. /appstream2/faqs/;How long can I stream applications via Try It Now?;You can stream the applications included in Try It Now for up to 30 minutes. At the end of 30 minutes, your streaming session is automatically terminated and any unsaved data will be deleted.
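As a programmatic alternative to the console or the CloudFormation support mentioned above, the following is a hedged boto3 sketch that creates a stack and a fleet, associates them, starts the fleet, and applies cost-allocation tags; the resource names, image name, and instance type are hypothetical.

import boto3

appstream = boto3.client("appstream")

# Create an Always-On fleet from a hypothetical custom image.
appstream.create_fleet(
    Name="example-fleet",
    ImageName="example-image",              # hypothetical image name
    InstanceType="stream.standard.medium",
    FleetType="ALWAYS_ON",
    ComputeCapacity={"DesiredInstances": 2},
    Tags={"project": "design-apps"},        # cost-allocation tag
)

# Create a stack and associate the fleet with it.
appstream.create_stack(Name="example-stack", Tags={"project": "design-apps"})
appstream.associate_fleet(FleetName="example-fleet", StackName="example-stack")

# Start the fleet so streaming instances are provisioned.
appstream.start_fleet(Name="example-fleet")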
/appstream2/faqs/;Can I save files within Try It Now?;You can save files to your Amazon AppStream 2.0 session storage and download them to your client device before your streaming session ends. Your files are not saved when you disconnect from your Try It Now session, or when your session ends, and any unsaved data will be deleted. /appstream2/faqs/;Can I submit an application to be included in Try It Now?;Yes. You can submit a request to include your application in Try It Now. After your request is received, AWS usually reviews the request and responds within 10 business days. /appstream2/faqs/;How do I get started with Amazon AppStream 2.0?;You can begin using Amazon AppStream 2.0 by visiting the AWS Management Console, or by using the AWS SDK. Visit Stream Desktop Applications for a 10 step tutorial. /appstream2/faqs/;What resources do I need to set up to stream my applications using Amazon AppStream 2.0?;You need to create an Amazon AppStream 2.0 stack in your AWS account to start streaming applications to your users. A stack includes a fleet of Amazon AppStream 2.0 instances that executes and streams applications to end users. When you use Elastic fleets, each instance is launched using an AppStream 2.0-managed image, while Always-On and On-Demand fleets use an image that you create containing your applications. You can select the instance type and size for your fleet depending on what CPU, memory, and graphics your user needs. To learn more about Amazon AppStream 2.0 resources, please visit this page. /appstream2/faqs/;How do I import my applications to Amazon AppStream 2.0?;If your applications require Active Directory, a custom driver, or require a reboot to install, you will need to create an AppStream 2.0 image using an image builder via the AWS Management Console, then use an Always-On or On-Demand fleet to stream the applications to your users. Image Builder allows you to install and test your applications just as you would with any Microsoft Windows or Linux desktop, and then create an image. You can complete all the install, test, and creation steps for the image without leaving the console. /appstream2/faqs/;How do I create an Amazon AppStream 2.0 image to import my applications?;You can create an Amazon AppStream 2.0 image using Image Builder via the AWS Management Console. Image Builder allows you to install and test your applications just as you would with any Windows or Linux desktop, and then create an image. You can complete all the install, test, and creation steps for the image without leaving the console. /appstream2/faqs/;What instance types are available to use with my Amazon AppStream 2.0 fleet?;Amazon AppStream 2.0 provides a menu of instance types for configuring a fleet or an image builder. You can select the instance type that best matches your applications and end-user requirements. You can choose from General Purpose, Compute Optimized, Memory Optimized, Graphics Design, Graphics Pro and Graphics G4 instance families. /appstream2/faqs/;Can I change an instance type after creating a fleet?;Yes. You can change your instance type after you have created a fleet. To change the instance type, you will need to stop the fleet, edit the instance type, and then start the fleet again. For more information, see Set up AppStream 2.0 Stacks and Fleets. /appstream2/faqs/;Can I connect Amazon AppStream 2.0 instances to my VPC?;Yes. You can choose the VPCs to which your Amazon AppStream 2.0 instances (fleet and image builders) connect. 
When you create your fleet, or launch Image Builder, you can specify one or more subnets in your VPC. If you have a VPC with a VPN connection to your on-premises network, then Amazon AppStream 2.0 instances in your fleet can communicate with your on-premises network. You retain the usual control you have over network access within your VPC, using all the normal configuration options such as security groups, network access control lists, and routing tables. For more information about creating a VPC and working with subnets, see Working with VPCs and Subnets. /appstream2/faqs/;Can I use custom branding with Amazon AppStream 2.0?;Yes. You can customize your users' Amazon AppStream 2.0 experience with your logo, color, text, and help links in the application catalog page. To replace AppStream 2.0's default branding and help links, log in to the AppStream 2.0 console, navigate to Stacks, and select your application stack. Then, click Branding, choose Custom, select your options, and click Save. Your custom branding will apply to every new application catalog launched using SAML 2.0 single sign-on (SSO) or the CreateStreamingURL API. You can revert to the default AppStream 2.0 branding and help links at any time. To learn more, visit Add Your Custom Branding to Amazon AppStream 2.0. /appstream2/faqs/;Can I define default application settings for my users?;Yes, you can set default application settings for your users. This includes application connection profiles, browser settings, and installed plugins. /appstream2/faqs/;Can users save their application settings?;Yes. You can enable persistent application and Windows settings for your users on AppStream 2.0. Your users' plugins, toolbar settings, browser favorites, application connection profiles, and other settings will be saved and applied each time they start a streaming session. Your users' settings are stored in an S3 bucket you control in your AWS account. /appstream2/faqs/;Am I charged for persistent user application settings?;There is no additional AppStream 2.0 charge to use this feature. However, persistent user application settings are stored in an Amazon S3 bucket in your account, and you will be billed for the S3 storage used for your users’ settings data. See Amazon S3 pricing or Enable Application Settings Persistence for Your AppStream 2.0 Users for more information. /appstream2/faqs/;Is there a limit to the file size of my users' persistent application settings?; Yes. Your users' application settings persist across stacks. /appstream2/faqs/;Will my users' application settings persist across stacks?; Your users' application settings are encrypted in transit to the S3 bucket in your account using Amazon S3's SSL endpoints. Your users' application settings are encrypted at rest using S3-managed encryption keys. /appstream2/faqs/;How can I create images with my own applications?;You can use Amazon AppStream 2.0 Image Builder to create images with your own applications. To learn more, please visit the tutorial found on this page. /appstream2/faqs/;With which operating system do my apps need to be compatible?;Amazon AppStream 2.0 streams applications that can run on the following 64-bit Windows OS versions: Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. You can add support for 32-bit Windows applications by using the WoW64 extensions. If your application has other dependencies, such as the .NET framework, include those dependencies in your application installer.
Amazon AppStream 2.0 also streams applications that can run on Amazon Linux 2 operating system. /appstream2/faqs/;Can I install anti-virus software on my Amazon AppStream 2.0 image to secure my applications?;You can install any tools, including anti-virus programs on your AppStream 2.0 image. However, you need to ensure that these applications do not block access to the AppStream 2.0 service. We recommend testing your applications before publishing them to your users. You can learn more by reading Windows Update and Antivirus Software on AppStream 2.0 and Data Protection in AppStream 2.0 in the Amazon AppStream 2.0 Administration Guide. /appstream2/faqs/;Can I customize the Windows operating system using group policies?;Any changes that are made to the image using Image Builder through local group policies will be reflected in your AppStream 2.0 images. Any customizations made with domain based group policies can only be applied to domain joined fleets. /appstream2/faqs/;How do I keep my Amazon AppStream 2.0 images updated?;AppStream 2.0 regularly releases base images that include operating system updates and AppStream 2.0 agent updates. The AppStream 2.0 agent software runs on your streaming instances and enables your users to stream applications. When you create a new image, the *Always use latest agent version* option is selected by default. When this option is selected, any new image builder or fleet instance that is launched from your image will always use the latest AppStream 2.0 agent version. If you deselect this option, your image will use the agent version you selected when you launched the image builder. Alternatively, you can use managed AppStream 2.0 image updates with your images to install the latest operating system updates, driver updates, and AppStream 2.0 agent software and create new images. You are responsible for installing and maintaining the updates for the operating system, your applications, and their dependencies. For more information, see Keep Your AppStream 2.0 Image Up-to-Date. /appstream2/faqs/;How do I update my applications in an existing image?;To update applications on the image, or to add new applications, launch Image Builder using an existing image, update your applications and create a new image. Existing streaming instances will be replaced with instances launched from the new image within 16 hours (Always-On instances) and 7 Days (Stopped instances for On-Demand fleets) or immediately after users have disconnected from them, whichever is earlier. You can immediately replace all the instances in the fleet with instances launched from the latest image by stopping the fleet, changing the image used, and starting it again. /appstream2/faqs/;Can I connect my Amazon AppStream 2.0 applications to my existing resources, such as a licensing server?;Yes. Amazon AppStream 2.0 allows you to launch streaming instances (fleets and image builders) in your VPC, which means you can control access to your existing resources from your AppStream 2.0 applications. For more information, see Network Settings for Fleet and Image Builder Instances. /appstream2/faqs/;Can I copy my Amazon AppStream 2.0 images?;Yes. You can copy your Amazon AppStream 2.0 application images across AWS Regions. To copy an image, launch the AppStream 2.0 console and select the region that contains your existing image. In the navigation pane, choose Images, select your existing image, click Actions, select Copy, and pick your target AWS Region. 
You can also use the CopyImage API to programmatically copy images. Visit Tag and Copy an Image for more information. /appstream2/faqs/;Can I share application images with other AWS Accounts?;Yes. You can share your AppStream 2.0 application images with other AWS accounts within the same AWS Region. You control the shared image and can remove it from another AWS account at any time. To learn more, visit Administer Your Amazon AppStream 2.0 Image /appstream2/faqs/;What permissions can I give other AWS accounts when I share my application image(s) with them?;You maintain full privileges to the application image. You can share the image with other AWS accounts, granting them permission to either create image builders, use for fleets, or both. These permissions can later be revoked. However, if you granted the destination AWS account permission to create image builders, you will not be able to revoke access to the image builders or images they create from your image. /appstream2/faqs/;If I share an application image with another AWS account, can I delete it or remove permissions?;Yes. You control the image. In order to delete the image, you will first have to stop sharing the image from all AWS accounts you shared it with. The AWS accounts you shared the image with will no longer see the image in their Image Registry, and will be unable to select it for new or existing fleets. Existing streaming instances in the fleets will continue to stream applications, but the fleet will terminate existing unused instances. If you originally granted permissions for creating image builders, they will be unable to create new image builders from it, but existing ones will continue to work. Images in the destination account created from image builders from the shared image will continue to work. /appstream2/faqs/;Does Amazon AppStream 2.0 offer GPU-accelerated instances?;Yes. Amazon AppStream 2.0 offers Graphics Design, Graphics Pro and Graphics G4 instance families. /appstream2/faqs/;What are fleets?;Fleets are an AppStream 2.0 resource that represent the configuration details for the streaming instances your users will use to launch their applications and desktops. The fleet consists of configuration details such as instance type and size, networking, and user session timeouts. /appstream2/faqs/;What types of fleets are available with Amazon AppStream 2.0?;Amazon AppStream 2.0 offers three fleet types: Always-On, On-Demand, and Elastic. These fleet types allow you to choose how applications and desktops are delivered, the speed of session start, and cost to stream. /appstream2/faqs/;What are the differences between the fleet types?;Always-On and On-Demand fleet streaming instances are launched using the custom AppStream 2.0 image that you create that contains your applications and configurations. You can specify how many instances to launch manually, or dynamically using Fleet Auto Scaling policies. Streaming instances must be provisioned before a user can stream. /appstream2/faqs/;Can I switch my Amazon AppStream 2.0 Always-On fleet to On-Demand or vice versa?;You can only specify the fleet type when you create a new fleet, and you cannot change the fleet type once the fleet has been created. /appstream2/faqs/;What are the benefits to Always-On and On-Demand fleets for Amazon AppStream 2.0?;Always-On and On-Demand fleets are best for when your applications require Microsoft Active Directory domain support, or can only be delivered using an AppStream 2.0 image. 
Always-On fleet streaming instances provide instant access to applications and you pay the running instance rate even when no users are streaming. On-Demand fleet streaming instances launch the application after an up to 2-minute wait, and you pay the running instance rate only when users are streaming. On-Demand fleet streaming instances that are provisioned but not yet used are charged at a lower stopped instance fee. You manage the capacity of Always-On and On-Demand fleet streaming instances using auto scaling rules. /appstream2/faqs/;What applications can I use with an Elastic fleet?;Elastic fleets can use applications that are designed to be self-contained, portable, and able to run from a different volume. This is similar to installing an application to a USB hard disk drive, and running it from any PC you use. /appstream2/faqs/;How do I import my applications for Elastic fleets?;Elastic fleets use applications that are saved within virtual hard disk (VHD) files and saved to an S3 bucket within your AWS account. The VHD is downloaded to the streaming instance and mounted when your user chooses which application to launch. To learn more about importing applications for Elastic fleets, read Create and Manage App Blocks and Applications for Elastic Fleets in the Amazon AppStream 2.0 Administration Guide. /appstream2/faqs/;What are AppStream 2.0 AppBlocks and AppStream 2.0 Applications?;AppBlocks are an AppStream 2.0 resource that has the details for the virtual hard drive with your application’s files, and the setup script for how to mount it to the streaming instance. Applications are an AppStream 2.0 resource that has the details for how to launch applications from an AppBlock. You must associate your Applications to AppBlocks before you can associate them to the Elastic fleet. /appstream2/faqs/;How are App Blocks mounted to an Elastic fleet streaming instance?;When you create the App Block, you must specify a setup script. The setup script specifies how to mount the App Block to the streaming instance, and allows you to complete any customization or configuration needed before the application launches. To learn more about creating the setup script, read Create the Setup Script for the VHD in the Amazon AppStream 2.0 Administration Guide. /appstream2/faqs/;What client operating systems and browsers are supported?;Amazon AppStream 2.0 can stream your applications to HTML5-compatible browsers, including the latest versions of Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, and Microsoft Edge, on desktop devices, including Windows, Mac, Chromebooks, and Linux PCs. The AppStream 2.0 client for Windows lets your users use 4 monitors, 4K monitors, and USB peripherals such as 3D mice with your applications on AppStream 2.0. The AppStream 2.0 client for Windows is supported on the following versions of Windows: Windows 7, Windows 8, Windows 8.1, and Windows 10. Both 32-bit and 64-bit versions of Windows are supported. /appstream2/faqs/;What Windows server operating system is supported?;Amazon AppStream 2.0 streams applications that can run on the following 64-bit OS versions: Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. You can add support for 32-bit applications by using the WoW64 extensions. If your application has other dependencies, such as the .NET framework, include those dependencies in your application installer. /appstream2/faqs/;Which Linux distribution is supported?;Amazon AppStream 2.0 supports the Amazon Linux 2 operating system.
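To illustrate the App Block and Application resources for Elastic fleets described above, here is a hedged boto3 sketch; the bucket names, file paths, and VHD/setup-script layout are hypothetical placeholders, not a definitive layout.

import boto3

appstream = boto3.client("appstream")

# App Block: points at a VHD in S3 plus a setup script that mounts it.
app_block = appstream.create_app_block(
    Name="example-app-block",
    SourceS3Location={"S3Bucket": "example-bucket", "S3Key": "apps/example.vhdx"},
    SetupScriptDetails={
        "ScriptS3Location": {"S3Bucket": "example-bucket", "S3Key": "apps/mount.ps1"},
        "ExecutablePath": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
        "ExecutableParameters": "-File .\\mount.ps1",   # hypothetical script invocation
        "TimeoutInSeconds": 60,
    },
)

# Application: describes how to launch the app from the mounted App Block.
appstream.create_application(
    Name="example-app",
    IconS3Location={"S3Bucket": "example-bucket", "S3Key": "apps/example.png"},
    LaunchPath="C:\\ExampleApp\\example.exe",           # path after the VHD is mounted
    Platforms=["WINDOWS_SERVER_2019"],
    InstanceFamilies=["GENERAL_PURPOSE"],
    AppBlockArn=app_block["AppBlock"]["Arn"],
)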
/appstream2/faqs/;Which AWS regions does Amazon AppStream 2.0 support?;Please refer to the AWS Regional Products and Services page for details of Amazon AppStream 2.0 service availability by region /appstream2/faqs/;What instance types are available to use with my Amazon AppStream 2.0 fleet?;Amazon AppStream 2.0 provides a menu of instance types for configuring a fleet. You can select the instance type that best matches your applications and end-user requirements. You can choose from General Purpose, Compute Optimized, Memory Optimized, Graphics Design, Graphics Desktop, or Graphics Pro instance families. /appstream2/faqs/;How does Amazon AppStream 2.0 scale?;Amazon AppStream 2.0 Always-On and On-Demand fleets use Fleet Auto Scaling to launch Amazon AppStream 2.0 instances running your application and to adjust the number of streaming instances to match the demand for end-user sessions. Each end-user session runs on a separate instance, and all of the applications that are streamed within a session run on the same instance. An instance is used to stream applications for only one user, and is replaced with a new instance at the end of the session. For more information, read Fleet Auto Scaling for Amazon AppStream 2.0 in the Amazon AppStream 2.0 Administration Guide. /appstream2/faqs/;What scaling policy does Amazon AppStream 2.0 support?;You can set a fixed fleet size to keep a constant number of AppStream 2.0 streaming instances, or use dynamic scaling policies that adjust capacity based on a schedule, usage, or both. Using dynamic scaling policies allows you to manage your cost while ensuring there is sufficient capacity for your users to stream. For more information, read Fleet Auto Scaling for Amazon AppStream 2.0 in the Amazon AppStream 2.0 Administration Guide. /appstream2/faqs/;What is an Amazon AppStream 2.0 Fleet Auto Scaling policy?;A Fleet Auto Scaling policy is a dynamic scaling policy that allows you to scale the size of your fleet to match the supply of available instances to user demand. You can define scaling policies that adjust the size of your fleet automatically based on a variety of utilization metrics, and optimize the number of running instances to match user demand. For more information, read Fleet Auto Scaling for Amazon AppStream 2.0 in the Amazon AppStream 2.0 Administration Guide. /appstream2/faqs/;How can I create auto scaling policies for my Amazon AppStream 2.0 fleet?;You can create automatic scaling policies from the Fleets tab in the AppStream 2.0 console, or by using the AWS SDK. /appstream2/faqs/;Which Amazon AppStream 2.0 CloudWatch metrics can I use to build Fleet Auto Scaling polices?;You can use the following metrics to build your Fleet Auto Scaling policies: /appstream2/faqs/;Can my Amazon AppStream 2.0 fleet have more than one associated Fleet Auto Scaling policy?;Yes. You can have up to 50 Fleet Auto Scaling policies associated with a single fleet. Each policy allows you to set a single criteria and action for resizing your fleet. /appstream2/faqs/;What is the minimum size I can set for my Amazon AppStream 2.0 fleet when using Fleet Auto Scaling policies?;You can set your Fleet Auto Scaling policies to scale in to zero instances. Scaling policies associated with your fleet decrease fleet capacity until it reaches your defined minimum, or the default setting of one if you haven’t set a minimum. For more information, please see Fleet Auto Scaling for Amazon AppStream 2.0. 
/appstream2/faqs/;What is the maximum size I can set for my Amazon AppStream 2.0 fleet when using Fleet Auto Scaling policies?;Fleet Auto Scaling policies increase fleet capacity until it reaches your defined maximum size or until service limits apply. For more information, please see Fleet Auto Scaling for Amazon AppStream 2.0. For service limit information, please see Amazon AppStream 2.0 Service Limits. /appstream2/faqs/;Are there additional costs for using Fleet Auto Scaling policies with Amazon AppStream 2.0 fleets?;There are no charges for using Fleet Auto Scaling policies. However, each CloudWatch alarm that you create and use to trigger scaling policies for your AppStream 2.0 fleets may incur additional CloudWatch charges. For more information, see Amazon CloudWatch Pricing. /appstream2/faqs/;Does Amazon AppStream 2.0 offer persistent storage so that I can save and access files between sessions?;Yes. Amazon AppStream 2.0 offers multiple options for persistent file storage to allow users to store and retrieve files between their application streaming sessions. You can use a home folder backed by Amazon S3, Google Drive for G Suite, or Microsoft OneDrive for Business. Each of these are accessed from the my files tab within an active AppStream 2.0 streaming session, and content can be saved or opened directly from the File menu in most apps. /appstream2/faqs/;How do users access persistent storage from their Amazon AppStream 2.0 sessions?;Users can access a home folder during their application streaming session. Any file they save to their home folder will be available for use in the future. They can also connect their G Suite account to access Google Drive and connect their Microsoft OneDrive for Business account to access OneDrive within AppStream 2.0. New files added or changes made to existing files within a streaming session are automatically synced between AppStream 2.0 and their persistent storage options. /appstream2/faqs/;Can I enable multiple persistent storage options for an Amazon AppStream 2.0 stack?;Yes. You can enable Home Folders, Google Drive for G Suite, and Microsoft OneDrive for Business. To optimize your internet bandwidth, create a VPC endpoint for Amazon S3 and authorize AppStream 2.0 to access your VPC endpoint. This routes Home Folders data through your VPC and Google Drive or OneDrive data through the public Internet. /appstream2/faqs/;How do I enable Google Drive for G Suite for Amazon AppStream 2.0?;When creating an Amazon AppStream 2.0 stack, select the option to enable Google Drive for the stack, provide your G Suite domain names, and create the stack. To learn more, visit Enable and Administer Google Drive for Your AppStream 2.0 Users. /appstream2/faqs/;Can a user remove their Google Drive for G Suite account?;Yes. Users can remove permissions that AppStream 2.0 has to their Google account from their Google account permissions page. /appstream2/faqs/;Can I control which Google Drive for G Suite accounts integrate with AppStream 2.0?;Yes. Only user accounts with your G Suite organization's domain name can use their Google Drive account. Users cannot link any other accounts. To learn more, visit Enable and Administer Google Drive for Your Users. /appstream2/faqs/;What kind of data can users store in Google Drive during a streaming session?;Any file type that is supported by Google Drive can be stored during the streaming session. For more details on the file types supported by Google Drive, refer to Google Drive FAQs. 
/appstream2/faqs/;Can users transfer files from their device to Google Drive during their streaming session?;Yes. Users can transfer files to and from their device and Google Drive using the My Files feature in the streaming session toolbar. Visit Enable Persistent Storage for Your AppStream 2.0 Users to learn more. /appstream2/faqs/;How do I enable Microsoft OneDrive for Business for Amazon AppStream 2.0?;When creating an Amazon AppStream 2.0 stack, select the option to enable OneDrive for Business for the stack, provide your OneDrive for Business domain names, and create the stack. To learn more, visit Enable and Administer OneDrive for Your AppStream 2.0 Users. /appstream2/faqs/;Can I control which Microsoft OneDrive for Business accounts integrate with AppStream 2.0?;Yes. Only user accounts with your OneDrive for Business domain names can use their accounts. Users cannot link any other accounts. To learn more, visit Enable and Administer OneDrive for Your AppStream 2.0 Users. /appstream2/faqs/;Can a user remove Microsoft OneDrive for Business?;Yes. Users can remove permissions that AppStream 2.0 has to their OneDrive for Business online account. /appstream2/faqs/;What kind of data can users store in Microsoft OneDrive for Business during a streaming session?;Any file type that is supported by OneDrive for Business can be stored during the streaming session. For more details on the file types supported by OneDrive for Business, refer to the OneDrive for Business documentation. /appstream2/faqs/;Can users transfer files from their device to Microsoft OneDrive for Business during their streaming session?;Yes. Users can transfer files to and from their device and OneDrive for Business using the My Files feature in the streaming session toolbar. To learn more, visit Enable and Administer OneDrive for Your AppStream 2.0 Users. /appstream2/faqs/;Which settings remain persistent between sessions?;You can enable persistent application and Windows settings for your users on AppStream 2.0. Your users' plugins, toolbar settings, browser favorites, application connection profiles, and other settings will be saved and applied each time they start a streaming session. Your users' settings are stored in an S3 bucket you control in your AWS account. /appstream2/faqs/;How do I monitor usage of my Amazon AppStream 2.0 fleet resources?;There are two ways you can monitor your Amazon AppStream 2.0 fleet. First, the AppStream 2.0 console provides a lightweight, real-time view of the state of your AppStream 2.0 fleet, and offers up to two weeks of historical usage data. Metrics are displayed automatically, and don’t require any setup. /appstream2/faqs/;What information can I get from the Amazon AppStream 2.0 usage metrics?;You can see the size of your Amazon AppStream 2.0 fleet, the number of running instances, the number of instances available to accept new connections, and the utilization of your fleet. You can track these metrics over time so that you can optimize your fleet settings to suit your needs. /appstream2/faqs/;Can I create custom Amazon CloudWatch metrics for Amazon AppStream 2.0?;Yes, you can create custom metrics for Amazon AppStream 2.0. For more information, see Publish Custom Metrics. /appstream2/faqs/;How frequently are Amazon AppStream 2.0 metrics published to Amazon CloudWatch?;Amazon AppStream 2.0 sends metrics to Amazon CloudWatch every minute. The metrics are stored in CloudWatch using the standard retention policy. For more information, see Amazon CloudWatch FAQs.
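A hedged boto3 sketch of a CloudWatch alarm on the AppStream 2.0 fleet metrics mentioned above; the fleet name, threshold, and SNS topic ARN are hypothetical placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when fewer than 2 instances are free to accept new sessions
# for 10 consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="example-fleet-low-available-capacity",
    Namespace="AWS/AppStream",
    MetricName="AvailableCapacity",
    Dimensions=[{"Name": "Fleet", "Value": "example-fleet"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=10,
    Threshold=2,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:example-topic"],  # hypothetical
)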
/appstream2/faqs/;How do I create CloudWatch alarms for Amazon AppStream 2.0?;You can create Amazon CloudWatch alarms for Amazon AppStream 2.0 using the CloudWatch console or the CloudWatch APIs. /appstream2/faqs/;Are there additional costs for using CloudWatch metrics with Amazon AppStream 2.0?;There is no additional charge for viewing CloudWatch metrics for AppStream 2.0. You may incur additional charges for setting up CloudWatch alarms and retrieving metrics via the CloudWatch APIs. For more information, see Amazon CloudWatch Pricing. /appstream2/faqs/;Does Amazon AppStream 2.0 offer a set of public APIs?;Yes, Amazon AppStream 2.0 includes APIs that you can use to easily integrate and extend the service. The APIs enable you to create, update, and delete Amazon AppStream 2.0 resources, and provide detailed information about resource states. You can create URLs for administrators to connect to their image builders to install applications, and create URLs for users to access their AppStream 2.0 applications. See our API reference for more information. /appstream2/faqs/;What streaming protocol does Amazon AppStream 2.0 use?;Amazon AppStream 2.0 uses NICE DCV to stream your applications to your users. NICE DCV is a proprietary protocol used to stream high-quality application video over varying network conditions. It streams video and audio encoded using standard H.264 over HTTPS. The protocol also captures user input and sends it over HTTPS back to the applications being streamed from the cloud. Network conditions are constantly measured during this process and information is sent back to the encoder on the server. The server dynamically responds by altering the video and audio encoding in real time to produce a high-quality stream for a wide variety of applications and network conditions. /appstream2/faqs/;What is the maximum network latency recommended while accessing Amazon AppStream 2.0?;While the remoting protocol has a maximum round-trip latency recommendation of 250 ms, the best user experience is achieved at less than 100 ms. If you are located more than 2000 miles from the AWS Regions where Amazon AppStream 2.0 is currently available, you can still use the service, but your experience may be less responsive. /appstream2/faqs/;How do I restrict network access from fleets and image builders launched in my VPC?;Security groups enable you to specify network traffic that is allowed between your streaming instances and resources in your VPC. You can restrict network access by assigning an image builder or fleet to the security groups in your VPC. For more information, refer to Security Group for Your VPC. /appstream2/faqs/;Can I use existing VPC security groups to secure AppStream 2.0 fleets and image builders?;Yes. You can assign an image builder or fleet to existing security groups in your VPC. /appstream2/faqs/;How many security groups can I apply to a fleet or image builder?;You can assign an image builder or fleet to up to five security groups. /appstream2/faqs/;Can I change the security groups to which my fleets are assigned after they have been created?;Yes. You can change the security groups to which your fleets are assigned, so long as they are in the stopped status. /appstream2/faqs/;Can I change the security groups to which my image builders are assigned after they have been created?;No. You cannot change the security groups to which your image builders are assigned after they have been created.
To assign an image builder to different security groups, you will need to create a new image builder. /appstream2/faqs/;How is the data stored in my users' home folders secured?;Files and folders in your users' home folders are encrypted in transit using Amazon S3's SSL endpoints. Files and folders are encrypted at rest using Amazon S3-managed encryption keys. /appstream2/faqs/;How is the data from my streamed application encrypted to the client?;The streamed video and user inputs are sent over HTTPS and are SSL-encrypted between the Amazon AppStream 2.0 instance executing your applications and your end users. /appstream2/faqs/;Can I control data transfer between AppStream 2.0 and my users' devices?;Yes. You can choose whether to allow users to transfer data between their streaming applications and their local device through copy or paste, file upload or download, or print actions. To learn more, visit Create Fleets and Stacks. /appstream2/faqs/;How do I authenticate users with Amazon AppStream 2.0 applications?;There are three options to authenticate users with Amazon AppStream 2.0: you can use built-in user management, you can build a custom identity solution, or you can set up federated access using SAML 2.0. /appstream2/faqs/;Can I use Amazon AppStream 2.0 with my existing user directory, including Microsoft Active Directory?;Yes. Amazon AppStream 2.0 supports identity federation using SAML 2.0, which allows you to use your existing user directory to manage end-user access to your AppStream 2.0 apps. For details on setting up SAML integration, read Single Sign-on Access (SAML 2.0) in the Amazon AppStream 2.0 Administration Guide. /appstream2/faqs/;What type of identity federation does Amazon AppStream 2.0 support?;Amazon AppStream 2.0 supports federation using SAML 2.0 (Identity Provider initiated). This type of federated access allows a user to sign in by first authenticating with an identity federation provider, after which they can access their AppStream 2.0 apps. /appstream2/faqs/;What are the requirements for setting up identity federation with Amazon AppStream 2.0?;To configure identity federation with Amazon AppStream 2.0, you need a SAML 2.0 Identity Provider that links to an existing LDAP-compatible directory, such as Microsoft Active Directory. Microsoft Active Directory Federation Services (ADFS), Ping Identity, Okta, and Shibboleth are all examples of SAML 2.0 Identity Providers that will work with AppStream 2.0. /appstream2/faqs/;Can I control which users access my Amazon AppStream 2.0 stacks?;Yes. When using built-in user management, you can control which users have access to your Amazon AppStream 2.0 stacks in the User Pool tab of the AppStream 2.0 management console. To learn more about user management within AppStream 2.0, see Using the AppStream 2.0 User Pool. /appstream2/faqs/;Can I enable multi-factor authentication for my users?;Yes. You can enable Multi-Factor Authentication when using federation with SAML 2.0 or when using your own entitlement service. /appstream2/faqs/;Can I dynamically entitle users to apps?;Yes, if your users are federating to AppStream 2.0 from a SAML 2.0 Identity Provider, you can control access to specific apps within your AppStream 2.0 stacks based on SAML 2.0 attribute assertions. Additionally, you can use dynamic app framework APIs to build a dynamic app provider that specifies what apps users can launch at run-time. The apps provided can be virtualized apps that are delivered from a Windows file share or other storage technology.
To learn more about these options, see Manage Application Entitlements. /appstream2/faqs/;Can users choose which Amazon AppStream 2.0 stack they want to access during sign-in?;Yes. You can setup every AppStream 2.0 stack as an entity or a package in your federation service. This allows your users to select which stack they want to access while signing in from your application portal. Additionally, your SAML 2.0 federated user identities can access the AppStream 2.0 stacks they are entitled to from a single SAML 2.0 service provider application based on SAML 2.0 attribute assertions. /appstream2/faqs/;Who can access the management console for my Amazon AppStream 2.0 application?;You can use AWS Identity and Access Management (IAM) to add users to your AWS account and grant them access to view and manage your Amazon AppStream 2.0 application. For more information, see “What is IAM?” in the IAM User Guide. /appstream2/faqs/;Can I join Amazon AppStream 2.0 image builders to Microsoft Active Directory domains?;Yes, Amazon AppStream 2.0 Windows OS-based streaming instances can be joined to your Microsoft Active Directory domains. This allows you to apply your existing Active Directory policies to your streaming instances, and provides your users with single sign on access to Intranet sites, file shares, and network printers from within their applications. Your users are authenticated using a SAML 2.0 provider of your choice, and can access applications that require a connection to your Active Directory domain. You can join image builders, Always-On fleet streaming instances, and On-Demand fleet streaming instances that use the Windows OS to Active Directory domains. Linux OS-based AppStream 2.0 image builders, Always-On fleet streaming instances, and On-Demand fleet streaming instances cannot be joined to Active Directory domains. /appstream2/faqs/;What Microsoft Active Directory versions are supported?;Microsoft Active Directory Domain Functional Level Windows Server 2008 R2 and newer are supported by Amazon AppStream 2.0. /appstream2/faqs/;Which AWS Directory Services directory options are supported by Amazon AppStream 2.0?;Amazon AppStream 2.0 supports AWS Directory Services Microsoft AD. Other options such as AD Connector and Simple AD are not supported. To learn more about AWS Microsoft AD see What Is AWS Directory Service. /appstream2/faqs/;How do I join my Amazon AppStream 2.0 instances to my Microsoft Active Directory domain?;To get started you will need a Microsoft Active Directory domain that is accessible from an Amazon VPC, the credentials of a user with authority to join the domain, and the domain Organizational Unit (OU) you want to join to your fleet. For more information, see Using Active Directory Domains with AppStream 2.0. /appstream2/faqs/;Can I use my existing Organization Units (OU) structure with Amazon AppStream 2.0?;Yes, you can use your existing Organizational Unit (OU) structure with Amazon AppStream 2.0. To learn more, see Using Active Directory Domains with AppStream 2.0. /appstream2/faqs/;What gets joined to my Microsoft Active Directory domain by Amazon AppStream 2.0?;Amazon AppStream 2.0 will automatically create a unique computer object for every image builder and Always-On or On-Demand fleet instance you configure to be joined to your Microsoft Active Directory domain. 
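To illustrate the Active Directory domain-join configuration discussed above, here is a hedged boto3 sketch; the directory name, organizational unit, and service account are hypothetical placeholders.

import boto3

appstream = boto3.client("appstream")

# Directory Config: tells AppStream 2.0 how to join streaming instances
# and image builders to a Microsoft Active Directory domain.
appstream.create_directory_config(
    DirectoryName="corp.example.com",
    OrganizationalUnitDistinguishedNames=[
        "OU=AppStreamInstances,OU=Workstations,DC=corp,DC=example,DC=com"
    ],
    ServiceAccountCredentials={
        "AccountName": "corp.example.com\\svc-appstream",      # hypothetical service account
        "AccountPassword": "resolve-from-a-secrets-store",      # placeholder, not a real secret
    },
)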
/appstream2/faqs/;How can I identify Amazon AppStream 2.0 computer objects in my Microsoft Active Directory domain?;Amazon AppStream 2.0 computer objects are created only in the Microsoft Active Directory Organizational Unit (OU) you specify. The description field indicates that the object is an AppStream 2.0 instance, and to which fleet the object belongs. To learn more, see Using Active Directory Domains with AppStream 2.0. /appstream2/faqs/;How are computer objects that are created by Amazon AppStream 2.0 deleted from my Microsoft Active Directory domain?;Computer objects created by Amazon AppStream 2.0 that are no longer used will remain in your Active Directory (AD) if the AppStream 2.0 fleet or image builder is deleted, you update a fleet or image builder to a new OU, or you select a different AD. To remove unused objects, you will have to delete them manually from your AD domain. To learn more, see Using Active Directory Domains with AppStream 2.0. /appstream2/faqs/;How do I provide users with access to Amazon AppStream 2.0 streaming instances that are joined to a Microsoft Active Directory domain?;To enable user access, you will need to set up federated access using a SAML 2.0 provider of your choice. This allows you to use your existing user directory to control access to streaming applications available via Amazon AppStream 2.0. For details on setting up SAML 2.0 integration, see the steps outlined at Setting Up SAML. /appstream2/faqs/;Can I connect my users that are managed through User Pools to my Active Directory domain?;No. At this time we do not support User Pool users connecting to domain-joined resources. To learn more about User Pools, see Using the AppStream 2.0 User Pool. /appstream2/faqs/;How do my users sign in to streaming instances that are joined to an Active Directory domain?;When your users access a streaming instance through a web browser, they sign in to their Microsoft Active Directory domain by entering their domain password. When your users access a streaming instance by using the AppStream 2.0 client for Windows, they can either enter their Active Directory domain password or use a smart card that is trusted by the Active Directory domain. /appstream2/faqs/;How much does Amazon AppStream 2.0 cost?;You are charged for the streaming resources in your Amazon AppStream 2.0 environment, and monthly user fees per unique authorized user accessing applications via a Windows operating system-based Amazon AppStream 2.0 streaming instance. You pay for these on-demand, and never have to make any long-term commitments. /appstream2/faqs/;Can I bring my own licenses and waive the user fees?;Yes. If you have Microsoft License Mobility, you may be eligible to bring your own Microsoft RDS CAL licenses and use them with Windows-based Amazon AppStream 2.0. For users covered with your own licenses, you won’t incur the monthly user fees. For more information about using your existing Microsoft RDS SAL licenses with Amazon AppStream 2.0, please visit this page, or consult with your Microsoft representative. /appstream2/faqs/;What are the requirements for schools, universities, and public institutions to reduce their user fee?;Schools, universities, and public institutions may qualify for reduced user fees. Please reference the Microsoft Licensing Terms and Documents for qualification requirements. If you think you may qualify, please contact us. We will review your information and work with you to reduce your Microsoft RDS SAL fee.
There is no user fee incurred when using image builder instances. /appstream2/faqs/;What do I need to provide to qualify as a school, university, or public institution?;You will need to provide AWS your institution's full legal name, principal office address, and public website URL. AWS will use this information to qualify you for AppStream 2.0's reduced user fees for qualified educational institutions. Please note: The use of Microsoft software is subject to Microsoft’s terms. You are responsible for complying with Microsoft licensing. If you have questions about your licensing or rights to Microsoft software, please consult your legal team, Microsoft, or your Microsoft reseller. You agree that we may provide the information to Microsoft in order to apply educational pricing to your Amazon AppStream 2.0 usage. /appstream2/faqs/;Does qualification for Amazon AppStream 2.0's reduced RDS SAL user fees affect other AWS cloud services?;No, your user fees are specific to Amazon AppStream 2.0, and do not affect any other AWS cloud services or licenses you have. /appstream2/faqs/;Can I use tags to obtain usage and cost details for Amazon AppStream 2.0 on my AWS monthly billing report?;Yes. When you set tags to appear on your monthly Cost Allocation Report, your AWS monthly bill will also include those tags. You can then easily track costs according to your needs. To do this, first assign tags to your Amazon AppStream 2.0 resources by following the steps in Tagging Your AppStream 2.0 Resources. Next, select the tag keys to include in your cost allocation report by following the steps in Setting Up Your Monthly Cost Allocation Report. /appstream2/faqs/;Are there any costs associated with tagging Amazon AppStream 2.0 resources?;There are no additional costs when using tags with Amazon AppStream 2.0. /appstream2/faqs/;Is Amazon AppStream 2.0 HIPAA eligible?;Yes. If you have an executed Business Associate Addendum (BAA) with AWS, you can use Amazon AppStream 2.0 with the AWS accounts associated with your BAA to stream desktop applications with data containing protected health information (PHI). If you don’t have an executed BAA with AWS, contact us and we will put you in touch with a representative from our AWS sales team. For more information, see HIPAA Compliance. /appstream2/faqs/;Is AppStream 2.0 PCI Compliant?;Yes. Amazon AppStream 2.0 is PCI compliant and conforms to the Payment Card Industry Data Security Standard (PCI DSS). PCI DSS is a proprietary information security standard administered by the PCI Security Standards Council, which was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc. PCI DSS applies to all entities that store, process or transmit cardholder data (CHD) and/or sensitive authentication data (SAD) including merchants, processors, acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. For more information, see PCI DSS Compliance. /appstream2/faqs/;Is Amazon AppStream 2.0 included in the System and Organizational Controls (SOC) reports?;Yes. Amazon AppStream 2.0 is included in the AWS System and Organizational Controls (SOC) reports. AWS System and Organization Controls Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. 
The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations and compliance. You can learn more about the AWS Compliance programs by visiting AWS Compliance Programs or by visiting the Services in Scope by Compliance Program. /iot-core/faqs/;What is AWS IoT Core?;AWS IoT Core is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT Core, your applications can keep track of and communicate with all your devices, all the time, even when they aren’t connected. /iot-core/faqs/;What does AWS IoT Core offer?;Connectivity between devices and the AWS cloud. First, with AWS IoT Core you can communicate with connected devices securely, with low latency and with low overhead. The communication can scale to as many devices as you want. AWS IoT Core supports standard communication protocols (HTTP, MQTT, WebSockets, and LoRaWAN are currently supported). Communication is secured using TLS. /iot-core/faqs/;How does AWS IoT Core work?;Connected devices, such as sensors, actuators, embedded devices, smart appliances, and wearable devices, connect to AWS IoT Core over HTTPS, WebSockets, secure MQTT, or LoRaWAN. Included in AWS IoT Core is a Device Gateway that allows secure, low-latency, low-overhead, bi-directional communication between connected devices and your cloud and mobile applications. /iot-core/faqs/;What is 2lemetry and how does it relate to AWS IoT?;2lemetry was acquired by AWS in 2015, and its capabilities provided foundational elements such as the MQTT Message Broker and the Rules Engine for AWS IoT Core. /iot-core/faqs/;In which regions is AWS IoT Core available?;See the AWS Region Table for the current list of regions for AWS IoT Core. /iot-core/faqs/;How do I get started with using AWS IoT Core?;Use the AWS IoT Console or refer to the Quickstart section of our developer guide to test drive AWS IoT Core in minutes. /iot-core/faqs/;Which languages does the AWS IoT Console support?;The AWS IoT Console supports English, French, Japanese, Korean, Simplified Chinese, German, Portuguese, Spanish, Italian, and Traditional Chinese. /iot-core/faqs/;How can I switch the console's language?;Click on the language at the bottom left corner of the console to pick the language. The language selection will persist throughout the consoles of different AWS services. /iot-core/faqs/;What are the ways for accessing AWS IoT Core?;You can use the AWS Management Console, the AWS SDKs, the AWS CLI, and the AWS IoT Core APIs. Connected devices can use the AWS IoT Device SDKs to simplify the communication with AWS IoT Core. /iot-core/faqs/;What communication and authentication protocols does AWS IoT Core support?;For control plane operations, AWS IoT Core supports HTTPS. For data plane operations, AWS IoT Core supports HTTPS, WebSockets, and secure MQTT – a protocol often used in IoT scenarios. /iot-core/faqs/;Can devices that are NOT directly connected to the Internet access AWS IoT Core?;Yes, via a physical hub. Devices connected to a private IP network and devices using non-IP radio protocols such as ZigBee or Bluetooth LE can access AWS IoT Core as long as they have a physical hub as an intermediary between them and AWS IoT Core for communication and security.
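As a small illustration of the messaging described above, here is a hedged sketch that publishes an MQTT message through the AWS IoT Core data plane using boto3; the topic name and payload are hypothetical, and devices themselves would normally connect through the AWS IoT Device SDKs over MQTT with TLS rather than through boto3.

import json
import boto3

# Data-plane client for AWS IoT Core (HTTPS publish to the message broker).
iot_data = boto3.client("iot-data")

iot_data.publish(
    topic="sensors/device-001/telemetry",   # hypothetical topic
    qos=1,
    payload=json.dumps({"temperature": 22.5, "humidity": 41}),
)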
/iot-core/faqs/;How should applications access AWS IoT Core?;Applications connecting to AWS IoT Core largely fall into two categories: 1. companion apps and 2. server applications. Companion apps are mobile or client-side browser applications that interact with connected devices via the cloud. A mobile app that lets a consumer remotely unlock a smart lock in the consumer’s house is an example of a companion app. Server applications are designed to monitor and control a large number of connected devices at once. An example of a server application would be a fleet management website that plots thousands of trucks on a map in real time. /iot-core/faqs/;Can I get a history of AWS IoT Core API calls made on my account for security analysis and operational troubleshooting purposes?;Yes, to receive a history of AWS IoT Core API calls made on your account, you simply turn on CloudTrail in the AWS Management Console. /iot-core/faqs/;How do I send feedback?;To send feedback, click on the “Feedback” link in the footer bar of the console. /iot-core/faqs/;What is the Device Gateway?;The Device Gateway forms the backbone of communication between connected devices and cloud capabilities such as the Rules Engine, the Device Shadow, and other AWS and 3rd-party services. /iot-core/faqs/;What is MQTT?;MQTT is a lightweight pub/sub protocol, designed to minimize network bandwidth and device resource requirements. MQTT also supports secure communication using TLS. MQTT is often used in IoT use cases. MQTT v3.1.1 is an OASIS standard, and the Device Gateway supports most of the MQTT specification. /iot-core/faqs/;What is the Rules Engine?;The Rules Engine enables continuous processing of inbound data from devices connected to AWS IoT Core. You can configure rules in the Rules Engine in an intuitive, SQL-like syntax to automatically filter and transform inbound data. You can further configure rules to route data from AWS IoT Core to several other AWS services as well as your own or 3rd-party services. /iot-core/faqs/;How are the rules defined and triggered?;An AWS IoT Core rule consists of two main parts: a SQL statement that selects and filters inbound messages from a topic, and one or more actions that define what to do with the matching data (for example, writing it to Amazon S3 or invoking an AWS Lambda function). /iot-core/faqs/;Where can I learn more about rules?;You can learn more about rules in the AWS IoT Core Rules documentation. /iot-core/faqs/;What is the Registry and what should I use it for?;IoT scenarios can range from a small number of mission-critical devices to large fleets of devices. The Registry allows you to organize and track those devices. You can maintain a logical handle in the Registry for every device you are connecting to AWS IoT Core. Each device in the Registry can be uniquely identified and can have metadata such as model numbers, support contacts, and certificates associated with it. You can search for connected devices in the Registry based on the metadata. /iot-core/faqs/;What is a Thing Type?;Thing Types allow you to effectively manage your catalogue of devices by defining common characteristics for devices that belong to the same device category. In addition, a Thing associated with a Thing Type can now have up to 50 attributes, including 3 searchable attributes. /iot-core/faqs/;What is Simplified Permission Management?;This feature allows you to easily manage permission policies for a large number of devices by using variables that reference Registry or X.509 certificate properties.
The integration of Registry and Certificate properties with device policies offers the benefits listed below: /iot-core/faqs/;What is the Device Shadow?;The Device Shadow enables cloud and mobile applications to easily interact with the connected devices registered in AWS IoT Core. The Device Shadow in AWS IoT Core contains properties of a connected device. You can define any set of properties applicable to your use case. For example, for a smart light bulb, you might define ‘on-or-off’, ‘color’, and ‘brightness’ as the properties. The connected device is expected to report the actual values of those properties, which are stored in the Device Shadow. Applications get and update the properties simply by using a RESTful API provided by AWS IoT Core. AWS IoT Core and the Device SDKs take care of synchronizing property values between the connected device and its Device Shadow in AWS IoT Core. /iot-core/faqs/;Do I have to use the Registry and the Device Shadow?;You can have applications communicate directly to the connected devices using the Device Gateway and/or the Rules Engine in AWS IoT Core. However, we recommend using the Registry and the Device Shadow since they offer richer and more structured development and management experience that lets you focus on the unique value you want to create for your customers rather than having to focus on the underlying communication and synchronization between the connected devices and the cloud. /iot-core/faqs/;What is the lifecycle of a device and its Device Shadow in AWS IoT Core?;You register a device (such as a light bulb) in the Registry. You program connected device to publish a set of its property values or ‘state (“I am ON and my color is RED”) to the AWS IoT Core service. The last reported state is stored in the Device Shadow in AWS IoT Core. An application (such as a mobile app controlling the light bulb) uses a RESTful API to query AWS IoT Core for the last reported state of the light bulb, without the complexity of communicating directly with the light bulb. When a user wants to change the state (such as turning the light bulb from ON to OFF), the application uses a RESTful API to request an update, i.e. sets a ‘desired’ state for the device in AWS IoT Core. AWS IoT Core takes care of synchronizing the desired state to the device. The application gets notified when the connected device updates its state to the desired state. /iot-core/faqs/;Where can I learn more about the Registry and the Device Shadow?;For more information on the Registry, see the Registry documentation. For more information on the Device Shadow, see the Device Shadow documentation. /iot-core/faqs/;Can I configure fine-grained authorization in AWS IoT Core?;Yes. Similar to other AWS services, in AWS IoT Core you have fine-grained control over the set of API actions each identity is authorized to invoke. In addition, you have fine-grained control over the pub/sub topics that an identity can publish or subscribe to, as well as over the devices and the Device Shadow in the Registry that an identity can access. /iot-core/faqs/;Where can I learn more about Security and Access Control in AWS IoT Core?;For more information, see AWS IoT Core Security and Identity. /iot-core/faqs/;What is Just-in-time registration of certificates?;"Just-in-time registration (JITR) of device certificates expands on the ""Use Your Own Certificate"" feature launched in April 2016 by simplifying the process of enrolling devices with AWS IoT Core. 
Prior to support for JITR, the device enrollment process required two steps: first, registering the Certificate Authority (CA) certificate to AWS IoT Core, then individually registering the device certificates that were signed by the CA. Now, with JITR you can complete the second step by auto-registering device certificates when devices connect to AWS IoT Core for the first time. This saves time spent on registering device certificates and allows devices to remain off-line during the manufacturing process. To further automate IoT device provisioning, you can create an AWS IoT Core rule with a Lambda action that activates the certificates and attaches policies. For more information, visit the Internet of Things Blog on AWS or Developer Documentation." /iot-core/faqs/;What is the AWS IoT Device SDK?;The AWS IoT Device SDKs simplify and accelerate the development of code running on connected devices (micro-controllers, sensors, actuators, smart appliances, wearable devices, etc.). First, devices can optimize the memory, power, and network bandwidth consumption by using the Device SDKs. At the same time, Device SDKs enable highly secure, low-latency, and low-overhead communication with built-in TLS, WebSockets, and MQTT support. The Device SDKs also accelerate IoT application development by supporting higher level abstractions such as synchronizing the state of a device with its Device Shadow in AWS IoT Core. /iot-core/faqs/;Which programming languages and hardware platforms does the AWS IoT Device SDK support?;AWS currently offers the AWS IoT Device SDKs for C and Node.js languages, as well as for the Arduino Yún platform. /iot-core/faqs/;Should I use AWS IoT Device SDK or the AWS SDKs?;The AWS IoT Device SDK complements the AWS SDKs. IoT projects often involve code running on micro-controllers and other resource-constrained devices. However, IoT projects often include application running in the cloud and on mobile devices that interact with the micro-controllers/resource-constrained devices. AWS IoT Device SDKs are designed to be used on the micro-controllers/resource-constrained devices, while the AWS SDKs are designed for cloud and mobile applications. /iot-core/faqs/;Is AWS IoT Core available in AWS Free Tier?;Yes. Please visit our pricing page for more information. /iot-core/faqs/;How much does AWS IoT Core cost?;Please visit our pricing page for information. /iot-core/faqs/;What is the AWS IoT Core SLA?;The AWS IoT Core SLA stipulates that you may be eligible for a credit towards a portion of your monthly service fees if AWS IoT Core fails to achieve a Monthly Uptime Percentage of at least 99.9% for AWS IoT Core. /iot-core/faqs/;Why should I use the AVS Integration for AWS IoT?;Until now, producing an Alexa Built-in device required on-device memory and compute to be at least 50MB RAM and ARM Cortex 'A' class microprocessors, increasing the engineering bill of materials (eBOM) and MSRP. Additionally, retrieving, buffering, decoding, and mixing audio on devices can be complex and time consuming. The high production cost and complexity makes it difficult for device makers to quickly go to market with differentiated, voice-enabled experiences on resource-constrained IoT devices. /iot-core/faqs/;How do I use the Alexa Voice Service (AVS) Integration?;Learn how to create low-cost Alexa Built-in devices with the AVS Integration for AWS IoT Core Getting Started Guide. 
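As an illustration of the Lambda action mentioned in the just-in-time registration (JITR) answer above, the following hedged sketch assumes an AWS IoT rule on the $aws/events/certificates/registered/# topic invokes the function with an event that includes the newly registered certificate's ID; the function then activates the certificate and attaches a pre-existing policy. The policy name DevicePolicy and the event shape are assumptions for illustration only.

import boto3

iot = boto3.client("iot")

def lambda_handler(event, context):
    # The registration event published by AWS IoT is assumed to carry the
    # ID of the certificate that was just auto-registered.
    certificate_id = event["certificateId"]

    # Look up the certificate ARN so a policy can be attached to it.
    description = iot.describe_certificate(certificateId=certificate_id)
    certificate_arn = description["certificateDescription"]["certificateArn"]

    # Activate the certificate so the device can connect on its next attempt.
    iot.update_certificate(certificateId=certificate_id, newStatus="ACTIVE")

    # Attach a pre-created AWS IoT policy (placeholder name) granting the
    # device its publish/subscribe permissions.
    iot.attach_policy(policyName="DevicePolicy", target=certificate_arn)

    return {"activatedCertificateId": certificate_id}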
/iot-core/faqs/;How is AVS for IoT different from traditional AVS?;AVS Integration supports Device arbitration, Dialog, Multi-turn dialog, Timers, Alarms, Reminders, Flash Briefing, Routines, Alexa Announce, eBooks, and Skills. It does not support high-quality music playback, Whole Home Audio, Alexa Calling, Spotify, Bluetooth, and rich multi-modal displays. /iot-core/faqs/;What types of devices can I build with AVS?;The AVS Integration for AWS IoT is a great solution for device makers producing low-cost, resource-constrained devices (including light switches, light bulbs, home hubs, home appliances and more) that want to allow their customers to talk to these products directly with the wake word “Alexa,” and receive voice responses and content instantly. These devices will have Built-in microphones and speakers that are capable of playing back dialog, alerts, and the news but are not adequate to support high-quality music playback. Device makers who want full music playback with richer Alexa Music Playback capabilities such as high fidelity (>128kbps) music streaming, Spotify, synchronized music streaming over multiple speakers, should continue to build these devices using the existing Alexa Built-in solutions. /iot-core/faqs/;Do I get the Alexa Built-in badge using the AVS Integration for AWS IoT Core?;Similar to other Alexa Built in products, products built with the AVS Integration will need to pass the Alexa Voice Service product certification process comprising of Amazon-managed testing of security, acoustic performance, user experience, and functional testing to earn the Amazon Certified Alexa Built-in badge. /iot-core/faqs/;What AWS regions will AVS Integration for IoT Core be available in at launch?;The AVS Integration for AWS IoT Core is available in all AWS regions where AWS IoT Core is available other than China (Beijing and Ningxia), Asia Pacific (Hong Kong) and Middle East (Bahrain). See the AWS Region Table for the current list of regions for AWS IoT Core. /iot-core/faqs/;Where can I find the Basic Station source code, if required for AWS IoT Core for LoRaWAN?;Basic Station software is maintained and distributed by Semtech via their Github repository. /iot-core/faqs/;Which private LoRaWAN network components are owned and managed by AWS IoT Core vs the customer?;Devices: You own and connect your choice of LoRaWAN devices to AWS IoT Core. You can buy any LoRa device or sensor compliant with LoRa 1.0.3 or 1.1 specification (without any need to develop or update software). /iot-core/faqs/;What is Amazon Sidewalk?; AWS IoT Core for Amazon Sidewalk is a fully integrated feature that enables IoT developers to easily provision, onboard, and monitor Amazon Sidewalk devices through AWS IoT Core. The deeper integration of Amazon Sidewalk with AWS IoT Core provides developers with a simplified path to connect Sidewalk-enabled devices to the cloud and access over 200+ AWS services. /iot-core/faqs/;Who should use Device Advisor?;Developers at device manufacturers should use Device Advisor to test their devices against pre-built test scenarios to verify reliable and secure connectivity to AWS IoT Core. Device Advisor provides a test endpoint in the AWS cloud, which device manufacturers can immediately use to test their devices, saving time and cost of development and testing. The test setup also provides detailed logs for each test, enabling faster troubleshooting of device software issues. 
Device Advisor also provides test coverage for complex test scenarios, enabling customers to discover and fix issues during their device software development. This results in more reliable performance and lower maintenance costs for device fleets after deployment. /iot-core/faqs/;How do I use the Device Advisor?;Any device that has been built to connect to AWS IoT Core can take advantage of Device Advisor. Developers at device manufacturers can access Device Advisor from the AWS IoT Core console or by using the AWS SDK. Once developers are ready to test their devices, they can register the devices with AWS IoT Core and configure the device software with the Device Advisor end point. They can then choose and execute the pre-built tests with a few simple clicks in the IoT Core Console and instantly get the test results along with detailed logs. /iot-core/faqs/;What tests are provided by Device Advisor?;See the test cases section in the Device Advisor for details on the pre-built tests supported. /iot-core/faqs/;Is there a cost to use Device Advisor?;Device Advisor is free to use. However, developers will be responsible for any costs associated with AWS usage as part of the testing (e.g. AWS IoT Core, Amazon CloudWatch usage). The AWS resource usage as part of testing will be visible to developers in their AWS account and charges for these will apply to the developers’ AWS bill. /freertos/faqs/;What is FreeRTOS?;FreeRTOS is an open source, real time operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. Distributed freely under the MIT open source license, FreeRTOS includes a kernel and a growing set of software libraries suitable for use across industry sectors and applications. To support a growing number of use cases, AWS provides software libraries that offer enhanced functionality including connectivity, security, and over-the-air updates. For example, you can use FreeRTOS to securely connect your small, low-powered devices to AWS cloud services like AWS IoT Core or to more powerful edge devices running AWS IoT Greengrass. /freertos/faqs/;What is the relationship between Amazon FreeRTOS and FreeRTOS?;Since 2017, Amazon FreeRTOS has been an extension of the FreeRTOS project, so we have unified the two names to reduce customer confusion. The FreeRTOS project now includes the additional connectivity libraries, security libraries, and IoT reference integrations. /freertos/faqs/;Which AWS region is FreeRTOS available in?;You can download FreeRTOS code from GitHub irrespective of your geographic location and AWS region availability. For the availability of FreeRTOS over-the-air (OTA) update cloud services, see the AWS Region Table. /freertos/faqs/;What are some use cases for FreeRTOS?;FreeRTOS can be used in embedded systems spanning industrial, commercial, and consumer applications. For example, smart meters, oil pump sensors, appliances, commercial security systems, fitness trackers, and sensor networks can all benefit from FreeRTOS. Smart meters are used in homes to monitor electricity usage in real time. Fitness trackers send health data via the user’s mobile device to the cloud for real time monitoring or analytics. Utilities benefit from this data by enabling more efficient load balancing and power output from their generating stations. Oil pump sensors are used on oil rigs to monitor the output on wells that might be buried deep underwater. 
An oil rig might deploy FreeRTOS on those sensors and use an AWS IoT Greengrass Core to locally process data from pumps and valves in real time. The AWS IoT Greengrass Core could then send batches of preprocessed pump sensor data to the cloud for analytics and data warehousing. To learn more about AWS IoT Greengrass, click here. /freertos/faqs/;How can a microcontroller developer get access to FreeRTOS?;FreeRTOS developers can download the FreeRTOS microcontroller device software from GitHub or FreeRTOS.org. /freertos/faqs/;Who can benefit from FreeRTOS?;Semiconductor vendors manufacture microcontrollers and modules like connectivity sensors, security peripherals, and Ethernet controllers. These microcontrollers and modules are used by OEMs to build IoT devices. OEMs include industrial companies, commercial enterprises, and consumer brands. Microcontroller developers can use FreeRTOS to easily design and develop a connected device and IoT applications. Enterprises can use IoT connected devices that are powered by FreeRTOS to gain business and operational efficiency. /freertos/faqs/;What are the major components of FreeRTOS software?;FreeRTOS includes the FreeRTOS kernel, a real time operating system kernel for microcontrollers, and libraries that support connectivity, security, and over-the-air updates. See the list of FreeRTOS libraries at freertos.org. /freertos/faqs/;What minimum hardware specifications are required?;If you run all FreeRTOS libraries, including TLS, on the application microcontroller, you may need a microcontroller with >25 MHz processing speed and >64 KB RAM. If the communication and crypto stack (except for MQTT) is offloaded onto the networking processor, your microcontroller will only need 10 MHz processing speed and 16 KB RAM. However, these values are just approximations, as factors such as MCU architecture, compiler, and compiler optimization level may impact processing speed and RAM requirements. FreeRTOS needs 128 KB of program memory per executable image stored on the microcontroller. For over-the-air (OTA) update functionality, two executable images must be stored in program memory at the same time. /freertos/faqs/;What architectures does FreeRTOS support?;FreeRTOS provides IoT Reference Integrations for a wide range of microcontrollers from our partners in the AWS Partner Device Catalog. FreeRTOS includes the FreeRTOS kernel, which supports 40+ architectures, including the latest from RISC-V and ARMv8-M. /freertos/faqs/;How can I get started on FreeRTOS?;You can use the getting started guide for systematic instructions on how to run FreeRTOS on a qualified board. /freertos/faqs/;How can I get technical support?;Use any of the following channels to get support: FreeRTOS Community Forums Premium Support AWS Support GitHub Issues /freertos/faqs/;What happened to the Amazon FreeRTOS group on AWS Forums?;To create a better forums experience for our customers, we have migrated all content and users from the AWS Forums Amazon FreeRTOS group to the Amazon Web Services category on the FreeRTOS Community Forums. Learn more here. /freertos/faqs/;Is there a user guide?;Yes. You can use the FreeRTOS user guide to get started with connecting FreeRTOS devices to AWS. /freertos/faqs/;Can I use FreeRTOS to connect to other cloud services?;Yes. FreeRTOS is an open-source software, so it can be modified to fit any specific needs of your application. /freertos/faqs/;Can I make changes to the FreeRTOS source code for my project?;Yes. 
FreeRTOS is an open-source software distributed under the MIT license, so it can be modified to fit any specific needs of your application or project without the permission of AWS. /freertos/faqs/;How much do I pay for using FreeRTOS?;FreeRTOS is free to download and use under an open source MIT license. /freertos/faqs/;How can I explore FreeRTOS without buying hardware?;You can explore FreeRTOS code and functionality on a Windows machine by downloading the libraries and samples ported to run on Windows. This is a set of files referred to as the FreeRTOS simulator for Windows (Windows Simulator). Get started here. /freertos/faqs/;Does FreeRTOS include hardware?;No. FreeRTOS is an open source, real time operating system for microcontrollers. You can run FreeRTOS on your chosen microcontroller by porting FreeRTOS code and validating the ported code with AWS IoT Device Tester. To make it easier for you, we have provided IoT reference integrations and qualified ports for common microcontrollers in the AWS Partner Device Catalog. /freertos/faqs/;How do I understand FreeRTOS versioning?;See GitHub repository architecture and versioning on freertos.org. /freertos/faqs/;What is the FreeRTOS kernel?;Developed over an 18-year period and in partnership with the world's leading chip companies, the FreeRTOS kernel is the market-leading, real time operating system kernel and the de-facto standard solution for microcontrollers and small microprocessors. /freertos/faqs/;Does AWS maintain the FreeRTOS kernel?;Yes. The latest update to v10 of the FreeRTOS kernel includes support for RISC-V and Armv8-M (Cortex-M33 and Cortex-M23). /freertos/faqs/;What is the difference between the MIT open source license and the (previously used) modified GPL open source license?;Both licenses allow the software to be used for free, even in commercial products, and neither license imposes any obligations when distributing binary (executable) copies. The MIT license provides simplified wording and allows for more permissive use of our source code. With the MIT license, you can still develop and sell commercial products using FreeRTOS (including the kernel) but you are no longer obliged to open source modifications to our source code, meaning you own all the changes you make. The only requirements under MIT is that the copyright notice and permission notice shall be included in all copies or substantial portions of the software (source files). /freertos/faqs/;Which libraries are covered under FreeRTOS Long Term Support (LTS)?;The FreeRTOS LTS release includes the kernel and libraries needed for AWS IoT connectivity, security, and over-the-air (OTA) updates. See the complete list of LTS libraries here. /freertos/faqs/;What is the support period for FreeRTOS LTS libraries?;The support period for FreeRTOS LTS libraries is two years. FreeRTOS LTS libraries will not have any feature development and will include security updates and bug fixes that AWS determines as critical for at least two years from its release. /freertos/faqs/;Where do I obtain the FreeRTOS LTS libraries?;You can get the FreeRTOS LTS libraries by cloning the FreeRTOS LTS GitHub repository, cloning individual LTS libraries, or by downloading the FreeRTOS LTS zip file from FreeRTOS.org. /freertos/faqs/;How do I integrate FreeRTOS LTS libraries into my project?;You can update individual libraries to LTS libraries by cloning them from their corresponding repositories. 
For example, you can update your project to the FreeRTOS LTS MQTT library by downloading code from the coreMQTT GitHub repository. /freertos/faqs/;How do I find information on and download the FreeRTOS LTS patches?;You can visit the ‘FreeRTOS LTS Patches’ section in the LTS Libraries page on FreeRTOS.org for the latest information, or subscribe to GitHub notifications for the FreeRTOS LTS repository. FreeRTOS LTS releases use a date-based versioning scheme (YYYYMM) followed by a patch sequential number (.XX). For example, FreeRTOS 202012.02 LTS means the second patch to the December-2020 FreeRTOS LTS release. You can get the latest patch from GitHub by using the associated download link. /freertos/faqs/;What is the software license for FreeRTOS LTS?;FreeRTOS LTS libraries are distributed free under the MIT open source license. /freertos/faqs/;Do I have to pay to use FreeRTOS LTS libraries?;No. FreeRTOS LTS libraries are free for all users under the MIT open source license. /freertos/faqs/;Who is releasing and supporting FreeRTOS LTS?;AWS will release and provide ongoing maintenance of the FreeRTOS LTS libraries for the benefit of the FreeRTOS community. The FreeRTOS community is encouraged to provide feedback and contribute code in the form of GitHub pull requests. /freertos/faqs/;What is the release cycle for FreeRTOS LTS?;We expect new FreeRTOS LTS releases to happen every 1.5 years. /freertos/faqs/;What is the SLA for security updates and critical bug fixes?;We aim to address security vulnerabilities and critical bugs on FreeRTOS LTS libraries within seven days from successfully implementing a mitigation to releasing an update. /freertos/faqs/;Can I get support for more than two years?;Yes, see the FreeRTOS Extended Maintenance Plan. /freertos/faqs/;I am already using a version of FreeRTOS. How can I start using FreeRTOS LTS?;Visit the FreeRTOS LTS GitHub repository and include the libraries you need for your application. FreeRTOS LTS kernel versions are backward compatible with FreeRTOS kernel versions V8.0.0 or higher. So if you are already using FreeRTOS kernel versions v8.0.0 or higher, you can migrate to the latest kernel version in the LTS release with minimal changes to your application code. If you are using an older version of LTS libraries, see the migration guide and corresponding validation tests to upgrade your project to FreeRTOS LTS. /freertos/faqs/;Can I contribute code to FreeRTOS?;Yes, you can contribute code to FreeRTOS via GitHub. Please refer to Contributions.md file in GitHub for guidelines. /freertos/faqs/;What is AWS IoT Device Tester for FreeRTOS?;AWS IoT Device Tester for FreeRTOS is a Windows/Linux/Mac test automation tool that lets semiconductor vendors self test and qualify FreeRTOS on their microcontroller boards. With AWS IoT Device Tester, semiconductor vendors can verify whether their microcontroller boards can run FreeRTOS and be authenticated by and interoperate with AWS IoT services. /freertos/faqs/;Where do I get AWS IoT Device Tester for FreeRTOS?;You can get AWS IoT Device Tester for FreeRTOS here. /freertos/faqs/;Is AWS IoT Device Tester for FreeRTOS required for qualification and listing in the AWS Partner Device Catalog?;Yes, you can learn more about how to get listed here. 
/freertos/faqs/;What does AWS IoT Device Tester for FreeRTOS test?;AWS IoT Device Tester for FreeRTOS tests that the combination of a FreeRTOS IoT reference integration with a microcontroller board’s porting layer interfaces and underlying device drivers are compatible and can interoperate with AWS IoT services. AWS IoT Device Tester confirms the porting layer interfaces (implemented by semiconductor vendors) for FreeRTOS libraries function correctly on top of the device drivers. Also, AWS IoT Device Tester runs end-to-end tests to confirm the microcontroller board can authenticate and interoperate with AWS IoT services. /freertos/faqs/;How do I get technical support for AWS IoT Device Tester for FreeRTOS?;Use any of the following channels to get support: Premium Support Customer Support GitHub Issues /freertos/faqs/;How can I get my microcontroller-based hardware platform listed in the AWS Partner Device Catalog?;The AWS Device Qualification Program defines the process to get your microcontroller listed on AWS Partner Device Catalog. The high-level overview is as follows: First, you must pass the AWS IoT Device Tester for AWS FreeRTOS tests. Next, log into the AWS Partner Network Portal and upload the AWS IoT Device Tester for FreeRTOS report. Provide reference to your source code for ported FreeRTOS interfaces to make it available to OEMs. Once the ported code and report are verified by AWS and other device related artifacts (such as device image, data sheet, etc.) have been submitted, the device is listed in the AWS Partner Device Catalog. /freertos/faqs/;In which regions is AWS IoT Device Tester for FreeRTOS available?;AWS IoT Device Tester for FreeRTOS is available in all the regions where FreeRTOS is supported. /freertos/faqs/;How much does AWS IoT Device Tester for FreeRTOS cost?;AWS IoT Device Tester for FreeRTOS is free to use. However, you will be responsible for any costs associated with AWS usage as part of qualification tests. On average, a single run of the AWS IoT Device Tester would cost less than a cent. Please refer to AWS IoT Core pricing for associated costs. /freertos/faqs/;What is the difference between AWS IoT Greengrass and FreeRTOS?;AWS IoT Greengrass is software that lets you run local compute, messaging, data caching, sync, and ML inference capabilities for connected devices in a secure way. With AWS IoT Greengrass, connected devices can run AWS Lambda functions, keep device data in sync, and communicate with other devices securely – even when not connected to the Internet. Using AWS Lambda, AWS IoT Greengrass ensures your IoT devices can respond quickly to local events, use Lambda functions running on AWS IoT Greengrass Core to interact with local resources, operate with intermittent connections, stay updated with over the air updates, and minimize the cost of transmitting IoT data to the cloud. FreeRTOS is an open source, real time operating system for microcontrollers that operates on the edge and does not generally support chipsets that could run AWS IoT Greengrass. These microcontroller devices are found on a variety of IoT endpoints such as fitness trackers, pacemakers, electricity meters, automotive transmissions, and sensor networks. FreeRTOS devices cannot run AWS IoT Greengrass Core but can trigger the execution of Lambda functions on an AWS IoT Greengrass Core device. The hardware requirements and operating systems are different on both devices. 
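To illustrate the split described above, here is a minimal sketch of a Lambda function deployed to an AWS IoT Greengrass (V1) Core: a FreeRTOS-based sensor publishes readings to a topic that a Greengrass subscription routes to this function, which processes them locally and republishes a result on the local message broker. The topic names and the 30-degree threshold are placeholders.

import json

import greengrasssdk

# "iot-data" gives access to the Greengrass Core's local message broker.
client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # "event" is the JSON payload a local device (for example, a FreeRTOS
    # sensor) published to the topic this function is subscribed to.
    reading = event.get("temperature")
    alert = reading is not None and reading > 30.0

    # Republish a processed result locally; another subscription could route
    # it to the cloud or to other local devices.
    client.publish(
        topic="local/alerts",
        payload=json.dumps({"temperature": reading, "alert": alert}),
    )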
/freertos/faqs/;Does FreeRTOS require the use of AWS IoT Greengrass?;"FreeRTOS does not require the use of AWS IoT Greengrass. FreeRTOS runs on IoT endpoints and is often responsible for the ""sensing"" and ""acting"" in an IoT topology. FreeRTOS devices can connect directly to the cloud or connect to AWS IoT Greengrass Core devices locally." /freertos/faqs/;How can I connect FreeRTOS devices to AWS IoT Greengrass Core devices?;The AWS IoT Greengrass discovery library is included in the FreeRTOS source code, enabling you to find and connect to an AWS IoT Greengrass Core device. For more information, refer to the FreeRTOS user guide. /freertos/faqs/;What is Bluetooth Low Energy support in FreeRTOS?;Bluetooth Low Energy support in FreeRTOS offers a standardized API layer for developers to write Bluetooth Low Energy applications that are portable across FreeRTOS qualified boards. It includes companion Android and iOS SDKs that enable a FreeRTOS device to consume AWS IoT services using an Android or iOS device as proxy. You can use standard Generic Access Profile (GAP) and Generic Attributes (GATT) profiles to write Bluetooth Low Energy applications and use custom profiles for MQTT over Bluetooth Low Energy and Wi-Fi provisioning via Bluetooth Low Energy. You can also use other AWS IoT services and features including AWS IoT Device Defender, Device Shadows, and over-the-air (OTA) updates. /freertos/faqs/;Why should I use FreeRTOS Bluetooth Low Energy?;If you are an embedded developer that needs to create a Bluetooth Low Energy application, connect your Bluetooth Low Energy devices to AWS IoT through an Android or iOS proxy, or use AWS IoT features such as AWS IoT Device Shadows, you will benefit from using Bluetooth Low Energy in FreeRTOS. The standardized Bluetooth Low Energy API for FreeRTOS allows you to code portable applications against FreeRTOS-qualified devices. If you decide to use a different microcontroller (e.g. for upgrading the product), you can use your existing Bluetooth Low Energy application code as a base for adding newer features. You can then concentrate on your application code and not worry about connectivity and security libraries underneath, which are not features that differentiate your product. /freertos/faqs/;Which boards are supported by Bluetooth Low Energy in FreeRTOS?;Visit our getting started page for more information on supported hardware. /freertos/faqs/;How do I find the libraries I need?;You can select the board and download the ported code via AWS Partner Device Catalog. FreeRTOS source code has demo examples, and the mobile SDKs have sample applications to help you quickly get started. /freertos/faqs/;Does Bluetooth Low Energy support in FreeRTOS work only with AWS?;No. The FreeRTOS libraries for Bluetooth Low Energy are open source and under the MIT license so developers can modify according to their specific need. /freertos/faqs/;What Bluetooth Low Energy versions are supported?;FreeRTOS supports Bluetooth Low Energy versions 4.2 and above. Bluetooth Low Energy version 4.2 raises the security bar by adding support for Bluetooth Low Energy Secure Connections, an enhanced security feature introduced in Bluetooth Low Energy version 4.2 to authenticate a peer device and create an encrypted channel. /freertos/faqs/;Is Amazon providing the Bluetooth Low Energy stack?;No. FreeRTOS is providing a standardized Bluetooth Low Energy API library that interfaces with a third-party (e.g., MCU vendor) Bluetooth Low Energy stack. 
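The Greengrass discovery library mentioned above wraps the Greengrass Discovery API, which a device calls with its AWS IoT credentials to learn which Core to connect to. As a rough illustration of that request from Python (rather than from a FreeRTOS device), the sketch below assumes the regional discovery endpoint format, thing name, and credential file paths shown; treat them all as placeholders.

import requests

THING_NAME = "example-freertos-device"
# Assumed endpoint format for the Greengrass (V1) Discovery API in us-east-1.
DISCOVERY_URL = (
    "https://greengrass-ats.iot.us-east-1.amazonaws.com:8443"
    f"/greengrass/discover/thing/{THING_NAME}"
)

# The device authenticates with its own AWS IoT certificate and private key.
response = requests.get(
    DISCOVERY_URL,
    cert=("device.pem.crt", "private.pem.key"),
    verify="AmazonRootCA1.pem",
    timeout=10,
)
response.raise_for_status()

# The response describes the Greengrass group, each Core's connectivity
# information (host and port), and the group CA used to authenticate the Core.
print(response.json())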
/freertos/faqs/;What GATT services does Bluetooth Low Energy support in FreeRTOS enable?;Bluetooth Low Energy support in FreeRTOS enables developers to add any number of standard and custom GATT services, depending on the capabilities of the target hardware. FreeRTOS contains two custom profiles: 1) MQTT over Bluetooth Low Energy, to enable Bluetooth Low Energy devices to use AWS IoT services, and 2) Wi-Fi provisioning over Bluetooth Low Energy, to provision Wi-Fi credentials in an IoT device using Bluetooth Low Energy. /freertos/faqs/;Can the Bluetooth Low Energy proxy take a local action?;Currently, there is no mechanism to intercept messages flowing between a Bluetooth Low Energy device and AWS IoT. The Bluetooth Low Energy proxy only acts as a pass-through device. However, you can use methods and classes that are provided within the proxy libraries as a starting point and modify these libraries to intercept the messages and take local action. /freertos/faqs/;What are the benefits of using MQTT over Bluetooth Low Energy?;MQTT over Bluetooth Low Energy enables Bluetooth Low Energy devices to connect to AWS IoT via a proxy device, as well as enables you to use other AWS services and features including AWS IoT Device Defender, AWS IoT Device Shadows, and FreeRTOS over-the-air (OTA) updates. /freertos/faqs/;Can I use multiple connectivity options from the same device?;Yes. You can use MQTT over Wi-Fi and MQTT over Bluetooth Low Energy simultaneously as long as your device has the memory required to do so. /freertos/faqs/;How can I authenticate my proxy device with AWS IoT?;AWS IoT uses the Amazon Cognito service to authenticate mobile devices with cloud services. However, you can also use X.509 certificates that are supported by the FreeRTOS mobile SDKs to authenticate your proxy device with AWS IoT. /freertos/faqs/;What is the FreeRTOS cellular interface library, and what else is included?;We have a preview FreeRTOS cellular library that makes it easier to develop secure LTE-M (or CAT-M1) IoT solutions. New reference integrations and demonstration projects are available from our partners Quectel, Sierra Wireless, and u-blox. /freertos/faqs/;Which cellular technologies are supported?;The FreeRTOS cellular library supports LTE-M cellular modems. LTE-M is a type of low power wide area network (LPWAN) radio technology standard developed by 3GPP to enable a wide range of cellular devices and services. /freertos/faqs/;Which cellular modems are supported in this preview?;Currently, the FreeRTOS cellular library offers interoperability across the following LTE-M modems: Quectel BG96, Sierra Wireless HL7802, and u-blox SARA-R4. /freertos/faqs/;Where can I get the source code?;Source code for the FreeRTOS cellular library and IoT reference integrations is available on the FreeRTOS Labs repository on GitHub. /freertos/faqs/;How do I update my devices with new firmware?;You can use the over-the-air (OTA) update feature of FreeRTOS. Within the AWS IoT Device Management console, all you need to do is provide a firmware image, select the devices to update, select a code signing method, and create the FreeRTOS OTA update job. For more information on the OTA update feature and code signing, refer to the FreeRTOS user guide. /freertos/faqs/;What is code signing?;Code signing enables developers to confirm the integrity and origin of firmware images scheduled for over-the-air (OTA) deployment to FreeRTOS devices.
The process confirms the integrity of firmware images using a cryptographic hash that validates that the code has not been altered or corrupted since it was signed. The process also uses public-key cryptography to sign these images with proof of origin that can be validated on the device. Using the integrated FreeRTOS OTA update device job within the AWS IoT Device Management console, developers can upload a new firmware image, sign that image, and deliver it to a group of devices in the field. Those devices will validate the signature upon download and only install trusted code. Customers can use IAM to provide fine-grained access controls to signing tools, so only designated developers can sign and schedule new firmware updates. /freertos/faqs/;Do I have to use code signing?;No, you can also use your own signing service and upload a signed image directly into Amazon S3. You will need to modify the FreeRTOS over-the-air (OTA) agent to accept the signature format that you choose to use. /freertos/faqs/;What hardware supports over-the-air (OTA)?;You can find qualified hardware that support FreeRTOS OTA in the AWS Partner Device Catalog. /freertos/faqs/;How does FreeRTOS secure data within the device (at rest)?;FreeRTOS uses a standard application interface, called PKCS #11, for encryption, digital signatures, and cryptographic object management. Cryptographic objects are kept either in dedicated storage or in the flash memory of the main microcontroller if dedicated storage is not available. If your device requires data encryption at rest, we recommend that you use dedicated cryptographic hardware to protect your encryption keys. Use the PKCS #11 API to access keys and encrypt and decrypt application data. /freertos/faqs/;How can I stay informed of the latest security patches?;Security updates are provided via the FreeRTOS console, the FreeRTOS Security Updates page, and on GitHub. /freertos/faqs/;Where can I report a security concern?;To report a security issue, please visit Vulnerability Reporting for AWS. /freertos/faqs/;What is FreeRTOS Extended Maintenance Plan?;FreeRTOS Extended Maintenance Plan (EMP) provides you with security patches and critical bug fixes on your chosen FreeRTOS Long Term Support (LTS) version for up to 10 years beyond the expiry of the initial LTS period. With FreeRTOS EMP, your FreeRTOS-based long-lived devices can rely on a version that has feature stability and receives security updates during the term of your subscription. You receive timely notification of upcoming patches on FreeRTOS libraries, so you can plan the deployment of security patches on your Internet of Things (IoT) devices. Before the end of the current LTS period, you will be able to subscribe to Extended Maintenance Plan using your AWS account, and renew the subscription annually to cover the product lifecycle or until you’re ready to transition to a new FreeRTOS release. FreeRTOS EMP applies to libraries that are part of FreeRTOS LTS. /freertos/faqs/;Why should I use FreeRTOS EMP?;FreeRTOS EMP helps you maintain your FreeRTOS-based devices during the term of your subscription. It allows you to save operating system upgrade costs and reduce the risks of not being able to update devices in time. It provides security patches and critical bug fixes on feature-stable FreeRTOS LTS versions, so you don’t need to incur development, testing, and quality assurance costs to migrate to the latest FreeRTOS release. 
Updating devices involves project planning, release readiness testing, and over-the-air (OTA) update scheduling to deploy critical fixes. FreeRTOS EMP reduces the risk of delayed deployment by providing timely notification of upcoming patches and support with integration issues. /freertos/faqs/;What are the main features of FreeRTOS EMP?;The main features are: Feature stability: FreeRTOS libraries that maintain the same set of features for years, so you save upgrade costs by using a stable FreeRTOS codebase for your product lifecycle. API stability: FreeRTOS libraries with stable APIs for years, which likewise saves upgrade costs through a stable codebase. Critical fixes: security patches and critical bug* fixes on your chosen FreeRTOS libraries, which help keep your IoT devices secure for the product lifecycle. Notification of patches: timely notification of upcoming patches, so you can proactively plan their deployment. Flexible subscription plan: extend maintenance by a year or longer; continue to renew your annual subscription to keep the same version for the entire device lifecycle, or renew for a shorter period to buy time before upgrading to the latest FreeRTOS version. * A critical bug is a defect, determined by AWS, that impacts the functionality of the affected library and has no reasonable workaround. AWS will provide technical support to FreeRTOS EMP customers via AWS Support. AWS Support is not included in FreeRTOS EMP subscriptions. You can track issues (for example, issues related to AWS accounts, billing, or bugs) or get access to technical experts (on issues such as patch integration) based on your AWS Support plan. /freertos/faqs/;What is the subscription cost?;FreeRTOS EMP has a flexible subscription option that can be extended annually for up to 10 years. You can extend your subscriptions for a duration that aligns with your device lifecycle or application requirements. See the pricing page for more details. /freertos/faqs/;How can I get started?;You can create, configure, and manage your FreeRTOS Extended Maintenance Plan (EMP) subscriptions through the FreeRTOS EMP Console. The console will guide you through the process of selecting the LTS version, choosing the appropriate license type, configuring notifications, and downloading code. To get started, see the FreeRTOS EMP Getting Started Guide. /freertos/faqs/;Do I have to commit to 10 years of FreeRTOS EMP?;No. FreeRTOS EMP has a flexible annual subscription plan. You can continue to renew your subscriptions annually for a duration (up to 10 years) that aligns with your device lifecycle or application requirements. /freertos/faqs/;Which FreeRTOS LTS versions does FreeRTOS EMP cover?;FreeRTOS EMP will be available for the current and all previous FreeRTOS LTS releases. Subscriptions can be renewed annually for up to 10 years from the end of the chosen LTS version’s support period. For example, a subscription for FreeRTOS 202012.01 LTS, whose LTS period ends March 2023, may be renewed annually until March 2033. /freertos/faqs/;What license applies to the FreeRTOS EMP libraries?;FreeRTOS EMP consists of an initial base code (LTS version) and subsequent patches for security vulnerabilities and critical bug fixes. FreeRTOS base code continues to be licensed under the MIT open source license.
Any code, fixes, or patches (collectively, “Patches”) that you receive, obtain or access in connection with FreeRTOS EMP that have not been incorporated into the publicly available FreeRTOS libraries are provided to you under the AWS Intellectual Property License except that in addition to the rights granted under the AWS Intellectual Property License, AWS also grants you a limited, non-exclusive, non-sublicensable, non- transferrable license to (a) modify and create derivative works of the Patches and (b) to distribute the Patches in object code form only. See AWS Service Terms for more details. /freertos/faqs/;Can I get FreeRTOS EMP beyond the 10 years?;Contact AWS Sales if you are interested in longer terms. /freertos/faqs/;Do I need to buy separate subscriptions for different products or product lines?;It depends. Each FreeRTOS LTS version will have its own subscription. If you buy a multiple-product subscription, you pay for only one subscription when you use the same FreeRTOS LTS version for multiple end products. If you buy a single-product subscription, you can use your subscription for only one end product (see next FAQ for the definition of a product). /freertos/faqs/;Where can I get technical support?;AWS will provide technical support for FreeRTOS EMP customers via separate subscriptions to AWS Support. AWS Support is not included in FreeRTOS EMP subscriptions. You can track issues or speak to technical experts based on your AWS Support plan. See details on AWS Support plans here. /freertos/faqs/;Where can I get support for billing questions?;You can get support for your billing questions via AWS Support. /freertos/faqs/;Can I subscribe to FreeRTOS EMP if I’m not using other AWS services?;Yes. You can use the FreeRTOS EMP libraries to suit your specific application need. However, you must have or sign up for an AWS account to subscribe to FreeRTOS EMP. /freertos/faqs/;What is included in the FreeRTOS EMP patches?;FreeRTOS EMP patches include security updates and bug fixes that AWS determines to be critical for libraries in your FreeRTOS EMP project. /freertos/faqs/;Will AWS provide fixes for critical bugs resulting from my modifications on the LTS libraries?;No. AWS will provide fixes and support for the baseline LTS library source code only. /freertos/faqs/;When does an AWS Support escalation occur?;An escalation takes place when AWS Support transfers a technical support case to the FreeRTOS engineering team for a resolution. FreeRTOS EMP customers need to be AWS Support subscribers to be eligible for these escalations. /freertos/faqs/;What happens if I exceed the AWS Support escalations to the FreeRTOS engineering team?;You can escalate four (for single product subscriptions) and six (for multiple product subscriptions) AWS Support cases per year at no additional charge. After that, you may incur charges (evaluated case by case) in addition to the AWS Support fees. In case of charges, AWS will request your confirmation to proceed and charge at a rate of $3,100 per Software Development Engineer per week (without proration). /freertos/faqs/;Can I cancel my FreeRTOS EMP subscriptions?;Yes, FreeRTOS EMP subscriptions (once available) can be canceled anytime during the subscription period. /freertos/faqs/;Can I sign up for FreeRTOS EMP, receive the libraries and patches, and then cancel the subscription? If so, will I be charged a prorated amount?;You are obligated to pay for a minimum of one year of subscription each time you register to receive the service. 
We reserve the right to refuse to provide FreeRTOS EMP support to any customer that frequently registers for and terminates the service. /freertos/faqs/;How long will AWS provide Extended Maintenance for an LTS version?;You can continue to renew your Extended Maintenance subscriptions annually for up to 10 years. You may terminate your subscription at any time. AWS may terminate Extended Maintenance for any version of LTS as permitted under the agreement governing your use of AWS services, including upon at least 12 months' notice for any reason. Upon any termination of Extended Maintenance for an LTS version, your subscription to Extended Maintenance for such LTS version will also terminate. /greengrass/faqs/;What is AWS IoT Greengrass?;AWS IoT Greengrass is an Internet of Things (IoT) open source edge runtime and cloud service that helps you build, deploy, and manage device software. Customers use AWS IoT Greengrass for their IoT applications on millions of devices in homes, factories, vehicles, and businesses. You can program your devices to act locally on the data they generate, execute predictions based on machine learning models, filter and aggregate device data, and only transmit necessary information to the cloud. AWS IoT Greengrass lets you quickly and easily build intelligent device software. AWS IoT Greengrass enables local processing, messaging, data management, ML inference, and offers prebuilt components to accelerate application development. AWS IoT Greengrass also provides a secure way to seamlessly connect your edge devices to any AWS service as well as to third-party services. Once software development is complete, AWS IoT Greengrass enables you to remotely manage and operate software on your devices in the field without needing a firmware update. AWS IoT Greengrass helps keep your devices updated and makes them smarter over time. /greengrass/faqs/;How do I get started using AWS IoT Greengrass?;Click here to see the AWS IoT Greengrass getting started guide. You can review the list of qualified devices in the AWS IoT Partner Device Catalog. /greengrass/faqs/;Which AWS Regions is the AWS IoT Greengrass service available in?;Please refer to the AWS Regions Table for the most up-to-date information regarding Region availability of AWS IoT Greengrass. /greengrass/faqs/;What are the major components of AWS IoT Greengrass? What does each component do?;AWS IoT Greengrass consists of a cloud service and software distributions for IoT devices: AWS IoT Greengrass Core, the AWS IoT Device SDK, and the AWS IoT Greengrass SDK. Once the software is installed on your device, you can add or remove features and components, and manage your IoT device applications using AWS IoT Greengrass. The FAQ entries that follow describe the major components. AWS IoT Greengrass also works together with FreeRTOS. For more information about AWS IoT Greengrass and FreeRTOS, see the FAQ section: Connecting FreeRTOS and other Devices to AWS IoT Greengrass. /greengrass/faqs/;What are AWS IoT Greengrass Core devices? What minimum hardware specifications are required?;"The AWS IoT Greengrass Core software runs on an IoT device, hub, or gateway to automatically sync and interact with the cloud. AWS IoT Greengrass Core is designed to run on devices with a general-purpose processor that are powerful enough to run a general-purpose operating system, such as Linux.
AWS IoT Greengrass requires at least 1 GHz of compute (either Arm or x86), 96MB* of RAM (v2.0 edge runtime or higher), plus additional resources to accommodate the desired OS, message throughput, and AWS Lambda execution depending on the use case. AWS IoT Greengrass Core can run on devices that range from a Raspberry Pi to a server-level appliance. *Based on an AWS study that used the following JDK: openjdk version ""1.8.0_275"", OpenJDK Runtime Environment (build 1.8.0_275-8u275-b01-0ubuntu1~18.04-b01), and OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode). Memory usage may be higher using different inputs." /greengrass/faqs/;What AWS Lambda development languages are supported by AWS IoT Greengrass?;AWS IoT Greengrass supports Lambda functions authored in the following languages: Python 2.7, 3.7, and 3.8; Node.js v8.10 and v12.x; Java 8 or higher; C; C++; and any language that supports importing C libraries. /greengrass/faqs/;Which Lambdas can be deployed to AWS IoT Greengrass?;Any Lambda function that uses the Python 2.7, 3.7, or 3.8, Node.js v8.10 or v12.x, or Java 8 Lambda runtime can be deployed to AWS IoT Greengrass Core. Lambda functions deployed to AWS IoT Greengrass must be packaged together with the AWS IoT Greengrass Core SDK. In addition, you can choose to also add the AWS SDK to your Lambda’s package in order to easily interact with AWS services such as Amazon DynamoDB. Please note: Some cloud services that your Lambda relies upon (e.g. DynamoDB) will not be available to your Lambda functions when AWS IoT Greengrass Core is in offline mode, and API calls to those services will fail in offline mode. In addition, your Lambda functions need to use the appropriate namespace for each AWS IoT Greengrass Core SDK and AWS SDK, if you include both in the same package. /greengrass/faqs/;Can I use AWS IoT Greengrass with a Docker container?;Yes, you can run Docker containers on an AWS IoT Greengrass device or run AWS IoT Greengrass in a Docker container environment. You can deploy, run, and manage Docker containers with AWS IoT Greengrass. You can use any third-party tool to build Docker/Open Container Initiative (OCI) images, and your Docker images can be stored in Docker container registries, such as Amazon Elastic Container Registry (Amazon ECR), Docker Hub, or private Docker Trusted Registries (DTRs). You can run AWS IoT Greengrass in a Docker container by configuring your AWS IoT Greengrass group to run with no Lambda containerization. To get started, you can access an AWS IoT Greengrass Dockerfile here, and you can find documentation about how to pull the AWS IoT Greengrass Docker image from Amazon ECR here. You can also deploy AWS IoT Greengrass as a snap, a containerized software package that can run on a variety of Linux distributions. To get started, you can access the AWS IoT Greengrass snap here and get started here. /greengrass/faqs/;Can I run AWS IoT Greengrass on Mac OS or Windows?;Yes, by running AWS IoT Greengrass with no Greengrass Lambda containerization at the group level in a Docker container, you’ll be able to run AWS IoT Greengrass on Mac OS or Windows. You can learn more about this capability in our documentation. /greengrass/faqs/;What is the AWS IoT Greengrass SLA?;The AWS IoT Greengrass SLA for cloud management stipulates that you may be eligible for a credit towards a portion of your monthly service fees if AWS IoT Greengrass fails to achieve a Monthly Uptime Percentage of at least 99.9% for the AWS IoT Greengrass cloud service.
For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see the AWS IoT Greengrass SLA details page. /greengrass/faqs/;What components of AWS IoT Greengrass are open source?;Beginning with AWS IoT Greengrass 2.0, the edge runtime and several components are now open source, and published in GitHub. For more details, see the list of open source components. /greengrass/faqs/;Can I make changes to the AWS IoT Greengrass edge runtime source code for my project?;Yes. The AWS IoT Greengrass open source edge runtime is distributed under the Apache 2.0 license, so it can be modified to fit any specific needs of your application or project without the permission of AWS. /greengrass/faqs/;What is an AWS IoT Greengrass local resource?;“Local resource” refers to buses and peripherals that are physically present on the AWS IoT Greengrass host, or a file system volume on the AWS IoT Greengrass host OS. For example, to communicate with devices connected via Modbus/CANbus, an AWS IoT Greengrass Lambda function would need to access the serial port on the device. A local resource is defined at AWS IoT Greengrass group scope, and all Lambdas in the AWS IoT Greengrass group can use the defined local resources. /greengrass/faqs/;When would I access a local resource?;AWS IoT Greengrass local resource allows your Lambda functions to securely interact with hardware such as sensors and actuators. For example, your Lambda function can read video streams from the camera on the device or send command and control to GPIO. /greengrass/faqs/;What is a hardware root of trust, and why might I want one?;Hardware roots of trust provide tamper-protected trusted execution environments where a true random number generator can produce the private keys used for encryption functions. These hardware “secure elements” are resistant to malware tampering and are physically tied to a given IoT device, establishing a strong root of trust upon which software can be deployed safely. /greengrass/faqs/;How do I introduce hardware root of trust security to my AWS IoT Greengrass architecture?;First, you must run your AWS IoT Greengrass Core software on an edge device with a secure element. Following the hardware vendor’s directions, generate a private key on that secure element. Next, follow our documentation to update the config.json file settings to use the secure element private key. /greengrass/faqs/;Which partners offer hardware with a secure element?;For a current list of integrated hardware, visit the AWS Partner Device Catalog. /greengrass/faqs/;How are secure elements qualified to work with the Hardware Security Integration feature?;Secure element vendors have configured their secure elements to use a set of PKCS#11 standard APIs to integrate with AWS IoT Greengrass. Vendors use a set of testing tools to qualify that their hardware is configured correctly. /greengrass/faqs/;How can I use a ML model compiled with Amazon SageMaker Neo?;On AWS IoT Greengrass devices, you can perform ML inference on locally-generated data using models optimized with Amazon SageMaker Neo. To prepare your device for inference, you can follow the instructions on installing the Neo DLR runtime on your device. For more information, see Installing DLR. You can compile a model in Amazon SageMaker Neo for your target hardware platform and store it in an Amazon Simple Storage Service (Amazon S3) bucket. 
Then you can configure AWS IoT Greengrass to use the Amazon S3 bucket to deploy the Neo optimized model for local inference on the device. /greengrass/faqs/;How can I use an ML model not trained in Amazon SageMaker?;You can bring your ML model trained elsewhere by placing it in .tar.gz or .zip format in Amazon S3. You then provide AWS IoT Greengrass with the Amazon S3 URI, and AWS IoT Greengrass will deploy the model to your target devices. /greengrass/faqs/;Which AWS Regions are AWS IoT Greengrass ML Inference available in?;AWS IoT Greengrass ML Inference is currently available in all Regions AWS IoT Greengrass is available in. Please refer to the AWS Regions Table for the most up-to-date information regarding Region availability of AWS IoT Greengrass. You can use AWS IoT Greengrass ML Inference regardless of your geographic location, as long as you have access to one of these AWS Regions. /greengrass/faqs/;What are AWS IoT Greengrass components?;AWS IoT Greengrass components are building blocks that enable easy creation of complex workflows such as machine learning inference, local processing, messaging, and data management. AWS IoT Greengrass also offers prebuilt components, such as Stream Manager, which supports data export to local and cloud targets. These components help accelerate application development so you don't have to worry about understanding device protocols, managing credentials, or interacting with external APIs, and you can interact with AWS services and third-party applications without writing code. In addition, you can also build your own components on top of AWS IoT Greengrass. All components are designed to enable ease of use, as they can be configured and managed through the AWS Greengrass console. These components enable you to reuse common business logic from one AWS IoT Greengrass device to another, as you can easily discover, import, configure, and deploy components at the edge. /greengrass/faqs/;How can I add an AWS IoT Greengrass component to my device configuration, or to my device?;AWS IoT Greengrass components can be added via the “components” section for each group in the AWS IoT Greengrass console. Once added, you can configure the AWS IoT Greengrass component parameters and deploy the group to add them to your AWS IoT Greengrass Core device. /greengrass/faqs/;Who can use AWS IoT Greengrass components?;Any AWS IoT Greengrass customer can use AWS IoT Greengrass components from within the AWS IoT Greengrass Console, which is accessible through the AWS Management Console. /greengrass/faqs/;What AWS IoT Greengrass components are available?;You can find available AWS IoT Greengrass components in our documentation. /greengrass/faqs/;How can I use AWS IoT Greengrass to implement alternative protocols?;Since Lambda functions running on AWS IoT Greengrass Cores have access to network resources, you can use Lambda to implement support for any protocol that is implemented on top of TCP/IP. In addition, you can also take advantage of AWS IoT Greengrass Local Resource Access to implement support for protocols that need access to hardware adapters/drivers. AWS IoT Greengrass also provides Modbus-RTU, Modbus-TCP, and EtherNet/IP Protocol Adapter connectors that can help you connect to edge devices. For more information, refer to the connector documentation here.
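The answer above notes that a Lambda function running on a Greengrass Core has network access and can therefore implement a custom protocol on top of TCP/IP. Below is a minimal sketch of that pattern for a Greengrass v1 group, assuming a long-lived (pinned) Python Lambda packaged with the AWS IoT Greengrass Core SDK; the port number, topic name, and payload shape are hypothetical placeholders, not values from the source.

```python
# Hedged sketch: a pinned Greengrass v1 Lambda that accepts a simple
# line-oriented TCP protocol on a local port and republishes each payload
# to a local MQTT topic. Port and topic are illustrative only.
import json
import socket
import threading

import greengrasssdk

iot_client = greengrasssdk.client("iot-data")

def serve():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 9000))   # hypothetical port for the custom protocol
    listener.listen(5)
    while True:
        conn, _addr = listener.accept()
        with conn:
            data = conn.recv(4096)
            if data:
                # Republish the raw payload so other local Lambdas and devices can subscribe.
                iot_client.publish(
                    topic="custom/protocol/ingest",   # hypothetical local topic
                    payload=json.dumps({"raw": data.decode("utf-8", "replace")}),
                )

# Start the listener when the Lambda container starts; a pinned Lambda keeps running.
threading.Thread(target=serve, daemon=True).start()

def function_handler(event, context):
    # No-op handler: all work happens in the background thread above.
    return
```

The same structure applies to protocols that need hardware access through Local Resource Access; only the I/O source changes.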
/greengrass/faqs/;How can I ingest industrial device data into AWS IoT Greengrass?;You can use the IoT SiteWise connector to ingest device data from OPC UA servers, the Modbus-TCP connector to ingest device data from Modbus-TCP servers, and the EtherNet/IP connector to ingest device data from EtherNet/IP servers. Data export to AWS IoT SiteWise is enabled by default, and you can use custom streams to export data to AWS IoT Analytics, Amazon Kinesis, and Amazon S3. You can also use custom streams to send data to Lambda functions to conduct local processing before you export the data. Alternatively, you can create a custom implementation that uses locally deployed Lambda functions to ingest and process device data and then deliver the data to local or cloud targets. /greengrass/faqs/;What are AWS IoT Greengrass Over the Air (OTA) Updates?;From time to time, AWS will publish updated versions of the AWS IoT Greengrass Core software to provide the following benefits: new or improved features, bug fixes, and security improvements. With AWS IoT Greengrass Over the Air (OTA) Updates, customers can get all these benefits without having to manually download and reinstall the AWS IoT Greengrass Core software. /greengrass/faqs/;Do I have to use AWS IoT Greengrass OTA Updates?;No. You can always choose to download and install updates manually or follow a different software deployment process. /greengrass/faqs/;How will I be notified that new versions of AWS IoT Greengrass Core are available?;When new versions of AWS IoT Greengrass Core become available, we will announce it on the AWS IoT Greengrass software developer forum. You can find a link to that forum here. /greengrass/faqs/;What is AWS IoT Device Tester for AWS IoT Greengrass?;AWS IoT Device Tester for AWS IoT Greengrass is a test automation tool that lets you self-test and qualify AWS IoT Greengrass on your Linux-based devices. AWS IoT Device Tester provides a collection of automated tests that enable you to verify whether devices can run AWS IoT Greengrass and be authenticated by and interoperate with AWS IoT services. /greengrass/faqs/;Where do I get AWS IoT Device Tester for AWS IoT Greengrass?;You can get AWS IoT Device Tester for AWS IoT Greengrass here. /greengrass/faqs/;What does AWS IoT Device Tester for AWS IoT Greengrass test?;AWS IoT Device Tester for AWS IoT Greengrass verifies that the combination of a device’s CPU architecture, Linux kernel configuration, and drivers work with AWS IoT Greengrass by testing the following: that required software packages have been installed; that the Linux kernel contains the kernel configuration required by AWS IoT Greengrass (e.g. kernel configured for cgroups); over-the-air updates; that the device can connect with AWS IoT services and is able to run AWS Lambda functions; local resource access functionality; and device shadow functionality. /greengrass/faqs/;How do I get technical support for AWS IoT Device Tester for AWS IoT Greengrass?;Use any of the following channels to get support: the AWS Forum for AWS IoT Greengrass, Premium Support, or Customer Support. /greengrass/faqs/;How do I get my device listed in the AWS Partner Device Catalog?;If you are an AWS partner, the AWS Device Qualification Program defines the process to get your device listed in the catalog. A high-level overview of the process is as follows: pass the AWS IoT Device Tester for AWS IoT Greengrass test, log into the AWS Partner Network Portal, and upload the AWS IoT Device Tester report.
Once the report is verified by AWS, and other device-related artifacts such as a picture and data sheet have been submitted, the device is listed in the AWS Partner Device Catalog. /greengrass/faqs/;In which Regions is AWS IoT Device Tester for AWS IoT Greengrass available?;AWS IoT Device Tester for AWS IoT Greengrass is available in all Regions where AWS IoT Greengrass is available. /greengrass/faqs/;How much does AWS IoT Device Tester for AWS IoT Greengrass cost?;AWS IoT Device Tester for AWS IoT Greengrass is free to use. However, you will be responsible for any costs associated with AWS usage as part of testing. A single run of AWS IoT Device Tester that tests on a single AWS IoT Greengrass device will cost less than 20 cents. /greengrass/faqs/;Which CPU architectures and operating systems is AWS IoT Greengrass compatible with?;Operating systems and CPU architectures supported by AWS IoT Greengrass Core and tested for compatibility by AWS are listed here. Other Linux variants that have not been validated by the AWS IoT Greengrass team may also successfully run AWS IoT Greengrass. You can validate these variants for compatibility using the IoT Greengrass dependency checker on GitHub. Alternatively, you can run IoT Greengrass in “process mode”, which lowers the compatibility threshold but removes support for Linux containers. /greengrass/faqs/;What devices are compatible with AWS IoT Greengrass Core, and how can I get started quickly?;You can run AWS IoT Greengrass Core on a device that meets the minimum hardware and software requirements. You can also self-test your devices to see if they will run optimally with AWS IoT Greengrass and other AWS services using AWS IoT Device Tester. You can also discover and evaluate devices that are compatible with AWS IoT Greengrass in the AWS Partner Device Catalog. /greengrass/faqs/;How can I validate that my device will run AWS IoT Greengrass Core?;To ensure your devices work with AWS IoT Greengrass Core, you can test them using the AWS IoT Device Tester for AWS IoT Greengrass. You can download the tool and read the documentation. /greengrass/faqs/;How can I connect devices locally to AWS IoT Greengrass Core?;You can connect devices locally to AWS IoT Greengrass Core using FreeRTOS or the AWS IoT Device SDK. AWS IoT Greengrass discovery is available in the AWS IoT Device SDK for C++, Node.js, Java, and Python 2.7, 3.7, and 3.8. For more information, refer to the AWS IoT Greengrass developer guide. You can use the AWS IoT Greengrass discovery library in your FreeRTOS source code to find and connect to an AWS IoT Greengrass Core device. /greengrass/faqs/;Does FreeRTOS work with AWS IoT Greengrass?;Yes. FreeRTOS devices can connect directly to the cloud or connect to AWS IoT Greengrass. FreeRTOS runs on IoT endpoints and is often responsible for the ‘sensing’ and ‘acting’ in an IoT topology. /greengrass/faqs/;What is the difference between AWS IoT Greengrass and FreeRTOS?;AWS IoT Greengrass is software that lets you run local compute, messaging, data caching, sync, and machine learning inference capabilities for connected devices in a secure way. With AWS IoT Greengrass, connected devices can run AWS Lambda functions, Docker containers, or both, keep device data in sync, and communicate with other devices securely—even when not connected to the Internet.
Using AWS Lambda, AWS IoT Greengrass ensures your IoT devices can respond quickly to local events, use Lambda functions running on AWS IoT Greengrass Core to interact with local resources, operate with intermittent connections, stay updated with over the air updates, and minimize the cost of transmitting IoT data to the cloud. FreeRTOS is an open source, real-time operating system for microcontrollers that operate at the edge and generally do not have the resources to run AWS IoT Greengrass. These microcontroller devices are found on a variety of IoT endpoints such as fitness trackers, pacemakers, electricity meters, automotive transmissions, and sensor networks. FreeRTOS devices cannot run AWS IoT Greengrass Core but can connect, send, and receive messages to and from an AWS IoT Greengrass Core device for local processing at the edge. The hardware requirements and operating systems differ between the two types of devices. /aws-cost-management/faqs/;Who should use the AWS Cost Management products?;We have yet to meet a customer who does not consider cost management a priority. AWS Cost Management tools are used by IT professionals, financial analysts, resource managers, and developers across all industries to access detailed information related to their AWS costs and usage, analyze their cost drivers and usage trends, and take action on their insights. /aws-cost-management/faqs/;How do I get started with the AWS Cost Management tools?;The quickest way to get started with the AWS Cost Management tools is to access the Billing Dashboard. From there, you can access a number of products that can help you to better understand, analyze, and control your AWS costs, including, but not limited to, AWS Cost Explorer, AWS Budgets, and the AWS Cost & Usage Report. /aws-cost-management/faqs/;What are the benefits of using AWS Cost Explorer?;AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and at a detailed level of analysis, empowering you to dive deeper using a number of filtering dimensions (e.g., AWS Service, Region, Member Account, etc.). AWS Cost Explorer also gives you access to a set of default reports to help you get started, while also allowing you to create custom reports from scratch. /aws-cost-management/faqs/;What kinds of default reports are available?;AWS Cost Explorer provides a set of default reports to help you get familiar with the available filtering dimensions and the types of analyses that can be done using AWS Cost Explorer. These reports include a breakdown of your top 5 cost-accruing AWS services, an analysis of your overall Amazon EC2 usage, an analysis of the total costs of your member accounts, and the Reserved Instance Utilization and Coverage reports. /aws-cost-management/faqs/;Can I create and save custom AWS Cost Explorer reports?;Yes. You can currently save up to 50 custom AWS Cost Explorer reports. /aws-cost-management/faqs/;What can I do with the AWS Cost Explorer API?;The AWS Cost Explorer API is the low-latency, ad-hoc query service that powers AWS Cost Explorer, and is accessible via a command-line interface and supported AWS SDKs. Using the AWS Cost Explorer API, you can build custom, interactive cost management applications without having to set up and maintain any additional infrastructure.
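To make the preceding Cost Explorer API answer concrete, here is a minimal sketch of querying it through an AWS SDK (boto3). The date range is a placeholder and the grouping dimension is just one of the available options; this is illustrative, not the only way to call the API.

```python
# Hedged sketch: querying the Cost Explorer API for one month of unblended
# cost broken down by service. The TimePeriod values are placeholders.
import boto3

ce = boto3.client("ce")  # Cost Explorer API client

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # placeholder range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```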
/aws-cost-management/faqs/;When should I use AWS Compute Optimizer and when should I use AWS Cost Explorer?;You should use AWS Cost Explorer if you want to identify under-utilized EC2 instances that may be downsized on an instance-by-instance basis within the same instance family, and you want to understand the potential impact on your AWS bill by taking into account your RIs and Savings Plans. Cost Explorer offers recommendations for all commercial regions (outside of China) and supports the A, T, M, C, R, X, Z, I, D, H instance families. /aws-cost-management/faqs/;How can I get started using the AWS Cost & Usage Report?;The AWS Cost & Usage Report is your one-stop shop for accessing the most detailed information available about your AWS costs and usage. The AWS Cost & Usage Report can be generated at an hourly, daily, or monthly granularity. You can enable the AWS Cost & Usage Report from the Cost & Usage Reports page in the Billing Console. Please note that in order to receive the AWS Cost & Usage Report, you will need to create and configure an Amazon S3 bucket. /aws-cost-management/faqs/;What else can I do with the AWS Cost & Usage Report?;When setting up an AWS Cost & Usage report, you can select the option to integrate with Amazon Athena. From there, you can use the AWS CloudFormation template that’s delivered along with the Athena-compatible Parquet files to automate an integration with Athena. This will ensure that your latest cost and usage information is always available in Amazon Athena – with no additional work required to prepare your data for analysis. /aws-cost-management/faqs/;How do you determine the granularity of CUR?;It depends on how much detail you need in your report. You can choose the granularity of your data by selecting hourly, daily, or monthly. Choosing “hourly” multiplies the number of line items collected by up to 24 compared with daily granularity. The hourly granularity gives a more in-depth look at usage, such as EC2, enabling you to view spikes and trends in your data. However, it will also increase the data volume in your CUR, and the data storage cost. /aws-cost-management/faqs/;What are Cost Allocation Tags in CUR?;These are tags that you define in your billing console which will be brought into your CUR file. For example, you can look up costs per application, per environment, or per team, all using tags. This gives you the ability to translate your data to the way your teams think about their applications and get the most out of it. While we recommend having the tags set up before you create the CUR to allow this data to be picked up by the report and add an extra level of detail to your data, you can add cost allocation tags at any point. /aws-cost-management/faqs/;How can I use AWS Cost Management tools to better understand the costs and usage associated with my Reserved Instances (RIs)?;There are three main ways to gain insight into the costs and usage associated with your RIs: the default RI reports in Cost Explorer, the reservation-related data in the Cost & Usage Report, and AWS Cost Explorer's RI purchase recommendations. /aws-cost-management/faqs/;What are some of the insights you can glean using the RI reports in Cost Explorer?;AWS Cost Explorer provides two reports out-of-the-box--the RI Utilization and RI Coverage reports--to help you understand how you are using your RIs. The RI Utilization report visualizes the degree to which you are using your existing resources and helps you identify opportunities to improve your RI cost efficiencies.
The RI Coverage report allows you to discover how much of your overall instance usage is covered by RIs, so that you can make informed decisions about when to purchase or modify an RI to ensure maximum coverage. /aws-cost-management/faqs/;What kind of RI-related information can you gain from the Cost & Usage Report?;The Cost & Usage Report gives you access to a wealth of RI-related information, including the ARN of the Reserved Instance that received the RI discount, the total reserved units in a reservation, and pricing information. This can help you trace your RI discounts, understand how well you are using your RIs, and analyze your savings compared to the On-Demand instance usage prices. /aws-cost-management/faqs/;What is AWS Budgets and how does it work?;Using AWS Budgets, you can set a budget that alerts you when you exceed (or are forecasted to exceed) your budgeted cost or usage amount. You can also set alerts based on your RI or Savings Plans Utilization and Coverage using AWS Budgets. /aws-cost-management/faqs/;What kinds of dimensions can be used to create a budget?;AWS Budgets gives you access to a number of filtering dimensions (e.g., AWS Service, Availability Zone, and Member Account), and allows you to create budgets that are tracked on a monthly, quarterly, or yearly cadence. Learn more about budget dimensions and filters here. /aws-cost-management/faqs/;How many budgets can I create?;You can create up to 20,000 budgets. If you would like to increase your limit, please reach out to Customer Support. Learn more about AWS Budgets Limits and Restrictions. /aws-cost-management/faqs/;How many alerts and subscribers can I add for each budget?;For each budget, you are allowed to create up to five alerts. Each alert can be sent to 10 email subscribers and/or be published to an SNS topic. /aws-cost-management/faqs/;Is there a cost associated with using AWS Budgets?;Budgets without actions are free. You can create 2 action-enabled budgets for free. Any additional active budgets accrue a cost that can be reviewed here. /aws-cost-management/faqs/;What is AWS Cost Anomaly Detection and how does it work?;Cost Anomaly Detection helps you detect and alert on any abnormal or sudden spend increases in your AWS account. It does this by using machine learning to understand your spend patterns and trigger alerts when spend appears abnormal. /aws-cost-management/faqs/;How can I customize monitors to evaluate for anomalies?;Cost Anomaly Detection allows you to segment your spend by different dimensions (AWS Services, Linked Accounts, Cost Allocation Tags, and Cost Categories). This segmentation allows Anomaly Detection to detect more granular anomalies and customize alerting preferences. /aws-cost-management/faqs/;How many monitors can I create?;Cost Anomaly Detection allows you to create up to 101 monitors. There is a limit of 1 AWS Service monitor (which evaluates all AWS services separately) and up to 100 monitors for a combination of Linked Accounts, Cost Allocation Tags, and Cost Categories monitors. /aws-cost-management/faqs/;How many subscribers can I add to each monitor?;For each monitor, you can have up to 10 email recipients or 1 SNS topic. /aws-cost-management/faqs/;Is there a cost associated with using AWS Cost Anomaly Detection?;This service is provided free of charge. However, depending on your delivery settings, you may incur a charge, e.g. for SNS.
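The monitor and subscriber limits described above map onto the Cost Explorer API's anomaly-detection operations. A minimal sketch follows, assuming the current boto3 Cost Explorer client; the monitor and subscription names, email address, and threshold amount are placeholders, and the exact parameter set should be checked against the SDK documentation for your SDK version.

```python
# Hedged sketch: an AWS Service anomaly monitor plus a daily email
# subscription. Names, address, and threshold are placeholders.
import boto3

ce = boto3.client("ce")

monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-monitor",   # placeholder name
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",      # evaluates each AWS service separately
    }
)

ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "daily-anomaly-digest",   # placeholder name
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Address": "finops@example.com", "Type": "EMAIL"}],
        "Threshold": 100.0,       # only alert on anomalies with at least $100 of impact
        "Frequency": "DAILY",
    }
)
```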
/aws-cost-management/faqs/;What is AWS Purchase Order Management and how does it work?;AWS Purchase Order Management gives you the ability to define and manage your purchase orders (POs) for AWS services in a way that meets your unique business needs. You can configure multiple POs and define the rules of how they map to your invoices through PO line item configurations. You can define separate POs for different time periods, invoices, and AWS seller entities. You can also track the status as well as the balance of your POs, and configure email contacts to receive PO balance tracking and expiration notifications. You have complete control to update your PO configuration at any time. /aws-cost-management/faqs/;What are purchase order line items?;Purchase order line items give you the flexibility to define various PO configurations according to your needs. By selecting different line item start and end periods, as well as line item types, you can define multiple configurations to match your POs to invoices, as well as track balances over different time periods and invoice types. To learn more, see Setting up purchase order configurations. /aws-cost-management/faqs/;How does purchase order balance tracking work?;Purchase order balance tracking is a feature that enables you to report and track the balance of your POs against your invoiced amounts. When adding/editing your PO details, you have the option to enable balance tracking and input amounts for your PO line items. Whenever an invoice is generated and matched with your PO, the balance of the corresponding line item as well as your PO will be reduced. If you have configured contacts on your PO, they will receive email notifications when the PO line item balance falls below a 75% threshold. /aws-cost-management/faqs/;What are purchase order notifications?;You can configure contacts on your POs to receive email notifications when your POs are running out of balance or nearing expiration. PO notifications enable you to proactively take action to ensure the validity of your POs, and achieve on-time and accurate payments. /aws-cost-management/faqs/;What is purchase order status management?;You can easily track the status of your POs on the Purchase Orders dashboard. When adding/updating your PO, you can input its effective and expiration periods. Your PO is Active during this time period and is used for matching with invoices. Once your PO is past its expiration date, its status is automatically updated to Expired and it is no longer used for your invoices. /aws-cost-management/faqs/;How many purchase orders can I add?;You can add up to 100 active purchase orders with up to 100 line items for each purchase order. /aws-cost-management/faqs/;How many email contacts can I configure to receive PO alerts?;You can add up to 10 email contacts for each purchase order. /aws-cost-management/faqs/;Is there a cost associated with using this feature?;No, the feature is provided free of charge. /aws-cost-management/faqs/;Can I update my invoicing and payment terms using this feature?;AWS Purchase Order Management is provided by AWS for your convenience. Any use of AWS Purchase Order Management does not modify the agreement between you and AWS governing your access or use of AWS services. For any questions related to payment terms, please reach out to Customer Support.
/aws-cost-management/faqs/;What is AWS Cost Categories and how does it work?;Using AWS Cost Categories, you can group your cost and usage information into meaningful categories based on your business needs. You can create custom categories and map your cost and usage information into these categories based on the rules you define using various dimensions, such as account, tag, service, charge type, and other cost categories. Once Cost Categories are set up, you can use them across various products in the AWS Billing and Cost Management console. This includes AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Reports (AWS CUR), and AWS Cost Anomaly Detection. A Cost Category comprises the following components: /aws-cost-management/faqs/;When should I create a Cost Category vs a Cost Category value?;Organizations have multiple perspectives on their business, such as projects, cost centers, applications, teams, etc. A Cost Category is a unique perspective of your business that contains multiple groups of category values. For instance, if your business is organized by teams, you can create a cost category named Team. Then, you can map your costs to the cost category values Team Alpha and Team Beta by selecting appropriate dimensions in the rule builder. Cost Category values are mutually exclusive, but rules are not, so you can write multiple rules that map your costs to a particular Cost Category value. For instance, one rule can map account ABC and account XYZ into Team Alpha using the Account dimension, and another rule can use the Tag dimension to map the tag key owner with the tag value Alpha-owners into Team Alpha. Both of these rules can be used to categorize your costs into Team Alpha. /aws-cost-management/faqs/;How does Cost Categories work?;"Cost Categories uses a rule-based engine to categorize your cost and usage information. When your bill is computed (which happens multiple times every day), your costs are categorized to the values within each of your Cost Categories based on the rules that you define. For example, consider you have three Cost Categories – Teams, Departments, and Cost Centers; your costs will be categorized based on your rules for each Cost Category into its values. Your Cost Category name will appear as a new column in your AWS Cost and Usage Report (CUR), and each billing line item will be assigned the appropriate Cost Category value." /aws-cost-management/faqs/;What is an Inherited Value rule?;Inherited Value provides you the flexibility of defining a rule that dynamically inherits the Cost Category value from the dimension value defined. For example, if you want to dynamically group costs based on the value of a specific tag key, you can first choose the inherited value rule type, then choose the Tag dimension and specify the tag key to use. For instance, you can use a tag key, Teams, to tag your resources with the values alpha, beta, and gamma. Then, with an inherited value rule, you could select Tag as the dimension and specify teams as the tag key. This will dynamically generate alpha, beta, and gamma as Cost Category values. /aws-cost-management/faqs/;What is a Default value?;Any costs that are not captured in your Cost Category rules remain uncategorized. These costs show up on your AWS Cost and Usage Report with an empty value, and on your Cost Explorer with a “No Cost Category” label. You can use a Default Value to assign a contextually meaningful name to uncategorized costs.
For example, for your Cost Category Teams, you can define the default value as other teams and reference it in all Cost Management products. /aws-cost-management/faqs/;What is the JSON Editor and why should I use it?;You have two ways to define your Cost Categories in your AWS Billing and Cost Management Console – the GUI-based Rule builder or the JSON editor. The Rule builder supports static GUI-based components and contains only the AND logical operator to add dimensions to your rules. Using the JSON editor, you can write more complex rules with nested conditions, and use additional logical operators such as NOT and OR besides AND. Refer to the API documentation for the JSON format to use with the JSON editor. /aws-cost-management/faqs/;How many Cost Categories can I create?;You can create up to 50 Cost Categories. With the Rule builder, you can create up to 100 rules per Cost Category, and with the JSON editor, you can create up to 500 rules per Cost Category. For more details on service limits, refer to the user guide. /aws-cost-management/faqs/;What are Split Charge Rules?;Every organization has a set of costs that are shared by multiple teams, business units, or financial owners, for instance, data transfer costs, enterprise support, or operational costs of a centralized infrastructure team. These costs are not directly attributable to a single owner, and hence cannot be categorized into a single cost category value. With Split Charge rules, you can equitably allocate these charges across your Cost Category values. /aws-cost-management/faqs/;Will Split Charge Rules introduce new line items in my Cost and Usage Report (CUR)?;Split charge rules as well as the resultant cost allocations are only presented within Cost Categories and are not surfaced in other Cost Management products such as AWS Cost Explorer, AWS Budgets, and AWS Cost Anomaly Detection. You can view your cost allocations before and after split charges are applied on the Cost Categories details page and download a CSV report of your cost allocations. /aws-cost-management/faqs/;Is there a cost associated with using AWS Cost Categories?;This service is provided free of charge. /premiumsupport/faqs/;What is Amazon Web Services Support (AWS Support)?;AWS Support gives customers help on technical issues and additional guidance to operate their infrastructures in the AWS cloud. Customers can choose a tier that meets their specific requirements, continuing the AWS tradition of providing the building blocks of success without bundling or long-term commitments. /premiumsupport/faqs/;How are the enhanced AWS Support tiers different from Basic Support?;AWS Basic Support offers all AWS customers access to our Resource Center, Service Health Dashboard, Product FAQs, Discussion Forums, and Support for Health Checks – at no additional charge. Customers who desire a deeper level of support can subscribe to AWS Support at the Developer, Business, Enterprise On-Ramp, or Enterprise level. /premiumsupport/faqs/;;Your AWS Support covers development and production issues for AWS products and services, along with other key stack components. /premiumsupport/faqs/;What level of architecture support is provided by Support?;The level of architecture support provided varies by support level. Higher service levels provide progressively more support for the customer use case and application specifics. /premiumsupport/faqs/;I only use one or two services. Can I purchase support for just the one(s) I'm using?;No.
Our Support offering covers the entire AWS service portfolio. As many of our customers are using multiple infrastructure web services together within the same application, we’ve designed AWS Support with this in mind. We’ve found that the majority of support issues, among users of multiple services, relate to how multiple services are being used together. Our goal is to support your application as seamlessly as possible. /premiumsupport/faqs/;How many support cases can I initiate with AWS Support?;As many as you need. Basic Support plan customers are restricted to customer support and service limit increase cases. /premiumsupport/faqs/;How many users can open technical support cases?;The Business, Enterprise On-Ramp, and Enterprise Support plans allow an unlimited number of users to open technical support cases (supported by AWS Identity and Access Management (IAM)). The Developer Support plan allows one user to open technical support cases. Customers with the Basic Support plan cannot open technical support cases. /premiumsupport/faqs/;;Our first-contact response times are based on your chosen severity level for each case. We will use all reasonable efforts to provide responses within these time frames. /premiumsupport/faqs/;How quickly will you fix my issue?;That depends on your issue. The problems that application or service developers encounter vary widely, making it difficult to predict issue resolution times. We can say, however, that we'll work closely with you to resolve your issue as quickly as possible. /premiumsupport/faqs/;;If you have a paid Support plan, you can open a web support case from Support Center. If you have Business, Enterprise On-Ramp, or Enterprise Support, you can request that AWS contact you at any convenient phone number or start a chat with one of our engineers through Support Center or the AWS Support App in Slack. /premiumsupport/faqs/;;"If customers encounter issues after following our step-by-step documentation, they can provide details such as screen prints and logs through a Support case. For high-severity issues, Business-level, Enterprise On-Ramp level, and Enterprise-level customers can chat with or call Support to receive help in real time. In some scenarios, Support provides detailed guidance through email. If necessary, Support will use our screen-sharing tool to remotely view the customer's screens to identify and troubleshoot problems. This tool is view-only—Support cannot act on behalf of customers within the screen-share session. Note, however, that the screen-share tool is not intended to assist with guiding customers through steps that are already documented. If a customer can’t use our screen sharing tool, AWS Support will try to use the screen share tool of the customer’s choice. For security considerations, some tools might not be supported. Developer-level customers can contact Cloud Support Engineers by email; however, screen share is not part of the support offered on their plan." /premiumsupport/faqs/;I'm not in the US. Can I sign up for AWS Support?;Yes, AWS Support is a global organization. Any AWS customer may sign up for and use AWS Support. /premiumsupport/faqs/;;AWS Support is available in English, Japanese, and Mandarin Chinese. /premiumsupport/faqs/;How do I access Japanese Support?;To access Japanese Support, subscribers should select Japanese as their language preference from the dropdown at the top right of any AWS web page. 
Once your language preference is set to Japanese, all Support inquiries will be sent to our Japanese Support team. /premiumsupport/faqs/;Who should use AWS Support?;We recommend all AWS customers use AWS Support to ensure a seamless experience leveraging AWS infrastructure services. We have created multiple tiers to fit your unique technical needs and budget. /premiumsupport/faqs/;How do I offer support for my end customers' AWS-related issues?;If an issue is related to your AWS account, we'll be happy to help you. For problems with a resource provisioned under their own accounts, your customers will need to contact us directly. Due to security and privacy concerns, we can only discuss specific details with the account holder of the resource in question. You may also inquire about becoming an AWS Partner, which offers different end-customer support options. For more information, see AWS Partner Network. /premiumsupport/faqs/;I use an application someone else built on Amazon Web Services. Can I use AWS Support?;If the application uses resources provisioned under your AWS account, you can use AWS Support. First, we'll help you to determine whether the issue lies with an AWS resource or with the third-party application. Depending on that outcome, we'll either work to resolve your issue or let you know to contact the application developer for continued troubleshooting. /premiumsupport/faqs/;How can I get started with AWS Support?;You can add AWS Support during the sign-up process for any AWS product. Or simply select an AWS Support Plan. /premiumsupport/faqs/;;AWS Support offers differing levels of service to align with your needs and budget, including our Developer, Business, Enterprise On-Ramp, and Enterprise Support plans. See our pricing table for more details. /premiumsupport/faqs/;Why does my AWS Support bill spike when I purchase EC2 and RDS Reserved Instances and ElastiCache Reserved Cache Nodes?;When you prepay for compute needs with Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances, Amazon Relational Database Service (Amazon RDS) Reserved Instances, Amazon Redshift Reserved Instances, or Amazon ElastiCache Reserved Cache Nodes and are enrolled in a paid AWS Support plan, the one-time (upfront) charges for the prepaid resources are included in the calculation of your AWS Support charges in the month you purchase the resources. In addition, any hourly usage charges for reserved resources are included in the calculation of your AWS Support charges each month. /premiumsupport/faqs/;How will I be charged and billed for my use of AWS Support?;Upon signup, you will be billed a minimum monthly service charge for the first calendar month (prorated). /premiumsupport/faqs/;How do I cancel my AWS Support subscription?;To cancel a paid Support plan, switch to the Basic support plan: /premiumsupport/faqs/;Can I sign up for AWS Support, receive assistance, and then cancel the subscription? If so, will I be charged a prorated amount?;You are obligated to pay for a minimum of one month of support each time you register to receive the service. While you may see a prorated refund when you cancel the service, your account will be charged again at the end of the month to account for the minimum subscription fee. We reserve the right to refuse to provide AWS Support to any customer that frequently registers for and terminates the service.
/premiumsupport/faqs/;What is Infrastructure Event Management (IEM)?;AWS Infrastructure Event Management is a short term engagement with AWS Support, available as part of the Enterprise-level Support product offering, available one-per-year for Enterprise On-Ramp Support product offering, and available for additional purchase for Business-level Support subscribers. AWS Infrastructure Event Management will partner with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use case examples for AWS Event Management include advertising launches, new product launches, and infrastructure migrations to AWS. /premiumsupport/faqs/;How does Chat support work?;Chat is just another way, in addition to phone or email, to gain access to Technical Support engineers. By choosing the chat support icon in the Support Center, a chat session will be initiated through your browser. This provides a real-time, one-on-one interaction with our support engineers and allows additional information and links to be shared for faster issue resolution. /premiumsupport/faqs/;What are the best practices for fault tolerance?;Customers frequently ask us if there is anything they should be doing to prepare for a major event that could affect a single Availability Zone. Our response to this question is that customers should follow general best practices related to managing highly available deployments (e.g., having a backup strategy, distributing resources across Availability Zones). The following links provide a good starting point: /premiumsupport/faqs/;How do I configure Identity and Access Management (IAM) for support?;For details on how you can configure your IAM users to allow/deny access to AWS Support resources, see Accessing AWS Support. /premiumsupport/faqs/;How long is case history retained?;Case history information is available for 12 months after creation. /premiumsupport/faqs/;Can I get a history of AWS Support API calls made on my account for security analysis and operational troubleshooting purposes?;Yes. To receive a history of AWS Support API calls made on your account, you can enable CloudTrail in the AWS Management Console. For more information, see Logging AWS Support API Calls with AWS CloudTrail. /premiumsupport/faqs/;Does my Amazon Enterprise Support Subscription include Support for Amazon EKS Anywhere?;Amazon Enterprise Support covers general guidance for EKS Anywhere customers. To get additional support including detailed troubleshooting and pass-through support for 3rd party components bundled with EKS Anywhere, you will need to purchase a separate Amazon EKS Anywhere Support Subscription, in addition to your existing Amazon Enterprise Support Subscription. Learn more about AWS Enterprise Support here. For further information on EKS Anywhere Support Subscription, visit the EKS Anywhere pricing page. /premiumsupport/faqs/;Why are my attachments no longer showing as attachments?;Amazon is using a new process to share attachments. You will still be able to access all of the attachments sent to you, however you will need to click the link in the attachment box to download them, rather than an attachment itself. You will have access to downloads for 30 days, and then the link will expire. /premiumsupport/faqs/;What if I need to download the attachment after 30 days?;The links are valid for 30 days. After 30 days, you may reach out to the original sender and request that they send you the attachment again. 
The original sender will need to generate a new link. /premiumsupport/faqs/;It hasn’t yet been 30 days since I received the email, however the attachment won’t download and I receive an error message. What is happening here?;Access can be revoked by the sender prior to the 30 days. Reach out to the sender for more information. /premiumsupport/faqs/;I’m unable to download when I click the link. What do I do?;Check your (or your company’s) settings around firewalls or other blocks on external websites. If you are unable to resolve the problem, reach out to the original sender. /premiumsupport/faqs/;What is Support for Health Checks?;Support for Health Checks monitors some of the status checks that are displayed in the Amazon EC2 console. When one of these checks does not pass, all customers have the option to open a high-severity Technical Support case. Support for Health Checks covers certain checks for Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS). /premiumsupport/faqs/;Which AWS services provide access to support through Support for Health Checks?;Support for Health Checks currently covers three health check scenarios: EC2 system status, EBS disabled I/O, and EBS stuck in attaching. /premiumsupport/faqs/;How can I get support if an EC2 instance fails the system status check?;If an EC2 system status check fails for more than 20 minutes, a button appears that allows any AWS customer to open a case. Most of the details about your case are auto-populated, such as instance name, region, and customer information, but you can add additional context with a free-form text description. /premiumsupport/faqs/;How can I get support if an EBS volume is stuck in attaching or has disabled I/O?;An EBS volume that has a health status of disabled I/O or is stuck in attaching displays a Troubleshoot Now button. You are presented with a number of self-remediation options that could potentially fix the problem without the need to contact support. If the EBS volume is still failing the health check after you have followed all applicable steps, choose Contact Support to open a case. /premiumsupport/faqs/;What is the response time for my Support for Health Checks support case?;A Support for Health Checks case opened through the console is a high-severity case. /premiumsupport/faqs/;How do I check the status of my case after it has been opened?;After you submit a case, the button changes from Contact Support to View Case. To view the case status, choose View Case. /premiumsupport/faqs/;Do I have to open a case for each instance that is unresponsive?;You can, but you don’t need to. You can include additional context and instance names in the text description submitted with your initial case. /premiumsupport/faqs/;Why must an EC2 instance fail the system status check for 20 minutes? Why not just allow customers to open a case immediately?;Most system status issues are resolved by automated processes in less than 20 minutes and do not require any action on the part of the customer. If the instance is still failing the check after 20 minutes, then opening a case brings the situation to the attention of our technical support team for assistance. /premiumsupport/faqs/;Can any of my Identity and Access Management (IAM) users open a case?;Any user can create and manage a Support for Health Checks case using their root account credentials.
IAM users associated with accounts that have a Business, Enterprise On-Ramp, or Enterprise Support plan can also create and manage a Support for Health Checks case. /premiumsupport/faqs/;What is AWS Trusted Advisor?;AWS Trusted Advisor is an application that draws upon best practices learned from AWS’s aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps. /premiumsupport/faqs/;How do I access Trusted Advisor?;Trusted Advisor is available in the AWS Management Console. You can access the Trusted Advisor console directly at https://console.aws.amazon.com/trustedadvisor/. /premiumsupport/faqs/;What does Trusted Advisor check?;Trusted Advisor includes an expanding list of checks in the categories of cost optimization, security, fault tolerance, performance, and service limits. For a complete list of checks and descriptions, explore AWS Trusted Advisor Best Practices. /premiumsupport/faqs/;;The Trusted Advisor notification feature helps you stay up-to-date with your AWS resource deployment. You will be notified by weekly email when you opt in for this service. A refresh of checks is required to ensure up-to-date summary of check status in email notification. Automated weekly refresh of checks is performed for accounts with AWS Business Support, AWS Enterprise On-Ramp, and AWS Enterprise Support. Accounts with AWS Developer Support and AWS Basic Support will need to login to the AWS Management Console to trigger check refresh. /premiumsupport/faqs/;"How does the ""Recent Changes"" feature work?";"Trusted Advisor tracks the recent changes to your resource status on the console dashboard. The most recent changes over the past 30 days appear at the top. The system will track seven updates per page, and you can go to different pages to view all recent changes by clicking the forward or the backward arrow displayed on the top-right corner of the ""Recent Changes"" area." /premiumsupport/faqs/;;If you don’t want to be notified about the status of a particular resource, you can choose to exclude (suppress) the reporting for that resource. You would normally do this after you have inspected the results of a check and decide not to make any changes to the AWS resource or setting that Trusted Advisor is flagging. /premiumsupport/faqs/;;Most items in a Trusted Advisor report have hyperlinks to the AWS Management Console, where you can take action on the Trusted Advisor recommendations. Action links are included for all services that support them. /premiumsupport/faqs/;;For the Trusted Advisor console, access is controlled by IAM policies that use the trustedadvisor namespace, and access options include viewing and refreshing individual checks or categories of checks. For more information, see Manage access for AWS Trusted Advisor. /premiumsupport/faqs/;How do I access AWS Trusted Advisor via API?;You can retrieve and refresh Trusted Advisor results programmatically. For more information, see AWS Support API Reference. /premiumsupport/faqs/;How often can I refresh my Trusted Advisor result?;"The minimum refresh interval varies based on the check. You can refresh individual checks or refresh all the checks at once by choosing ""Refresh All"" in the top-right corner of the summary dashboard. 
When you visit the Trusted Advisor dashboard, any checks that have not been refreshed in the last 24 hours are automatically refreshed; this can take a few minutes. The date and time of the last refresh is displayed to the right of the check title. In addition, for customers with Business, Enterprise On-Ramp, or Enterprise Support plans, the Trusted Advisor data is automatically refreshed weekly." /premiumsupport/faqs/;How do Trusted Advisor activities affect my AWS CloudTrail logs?;AWS CloudTrail logs Trusted Advisor activities from the API and console. For example, you can use the API to programmatically refresh a check or manually refresh a check in the Trusted Advisor console. CloudTrail records this activity and you can view details about the event in your logs. Automatic refreshes performed by Trusted Advisor also appear in CloudTrail logs. /premiumsupport/faqs/;;AWS Basic Support and AWS Developer Support customers get access to 6 security checks (S3 Bucket Permissions, Security Groups - Specific Ports Unrestricted, IAM Use, MFA on Root Account, EBS Public Snapshots, RDS Public Snapshots) and 50 service limit checks. AWS Business Support, AWS Enterprise On-Ramp, and AWS Enterprise Support customers get access to all 115 Trusted Advisor checks (14 cost optimization, 17 security, 24 fault tolerance, 10 performance, and 50 service limits) and recommendations. For a complete list of checks and descriptions, explore Trusted Advisor Best Practices. /premiumsupport/faqs/;Why are my CloudWatch event rules and metric alarms for the EC2 On-Demand Instances check not working?;If your account has been opted in to vCPU-based On-Demand Instance limits, you must adjust your metric alarms and event rules to account for the vCPU-based instance limits. To see if you are using vCPU-based On-Demand Instances, visit the Limits page on Amazon EC2 console. /premiumsupport/faqs/;What service limits do you check?;You can find the limits that Trusted Advisor checks in AWS Trusted Advisor Best Practices. For information about limits, see AWS Service Quotas. /premiumsupport/faqs/;"Why is it safe to ignore or suppress red flags from the ""Security Groups - Specific Ports Unrestricted"" and ""Security Groups - Unrestricted Access"" security checks for security groups created by AWS Directory Services?";AWS Directory Services is a managed service that automatically creates an AWS security group in your VPC with network rules for traffic in and out of AWS managed domain controllers. The default inbound rules allow traffic from any source (0.0.0.0/0) to ports required by Active Directory. These rules do not introduce security vulnerabilities, as traffic to the domain controllers is limited to traffic from your VPC, other peered VPCs, or networks connected using AWS Direct Connect, AWS Transit Gateway or Virtual Private Network. In addition, the ENIs the security group is attached to do not and cannot have Elastic IPs attached to them, limiting inbound traffic to local VPC and VPC routed traffic. Security groups created by AWS Directory Services can be recognized by the security group name (always in the format “directory-id_controllers” (e.g. d-1234567890_controllers) or the security group description (always in the format “AWS created security group for directory-id directory controllers”). /premiumsupport/faqs/;Does the recommendation consider volume discounts?;No, reservation recommendations are based on public pricing. /premiumsupport/faqs/;I just purchased a new Reserved Instance. 
Why isn’t it showing up in the recommendation?;Since these recommendations are based on previous on-demand usage, newly purchased reservations do not show up until the corresponding usage shows up in your billing data. Recommendations may be inaccurate if reservations have been purchased during the past 30 days. /premiumsupport/faqs/;What does each field in the check result mean?;Region - The AWS Region of the recommended reservation. Instance Type - The type of instance that AWS recommends. Platform - The platform of the recommended reservation. The platform is the specific combination of operating system, license model, and software on an instance. Recommended Number of RIs to Purchase - The number of RIs that AWS recommends that you purchase. Expected Average RI Utilization - The expected average utilization of your RIs. Estimated Savings with Recommendation (Monthly) - How much AWS estimates that this specific recommendation could save you in a month. Upfront Cost of RIs - How much purchasing this reservation costs you upfront. Estimated Cost of RIs (Monthly) - How much the RIs will cost on a monthly basis after purchase. Estimated On-Demand Cost Post Recommended RI Purchase (Monthly) - How much AWS estimates that you will spend per month on On-Demand Instances after purchasing the recommended RIs. Estimated Break Even (Months) - How long AWS estimates that it takes for this reservation to start saving you money, in months. Lookback Period (Days) - How many days of previous usage that AWS considers when making this recommendation. Term (Years) - The term of the reservation that you want recommendations for, in years. /premiumsupport/faqs/;What is AWS Trusted Advisor Priority?;AWS Trusted Advisor Priority helps you focus on the most important recommendations to optimize your cloud deployments, improve resilience, and address security gaps. Available to AWS Enterprise Support customers, Trusted Advisor Priority provides prioritized and context-driven recommendations that come from your AWS account team as well as machine-generated checks from AWS services. /premiumsupport/faqs/;How do I access AWS Trusted Advisor Priority?;Trusted Advisor Priority is available from the management or delegated administrator account on the Enterprise Support plan. If you have an Enterprise Support plan and are the management account owner for your organization, please contact your AWS account team to request access. Recommendations in Trusted Advisor Priority are aggregated across member accounts in your organization. /premiumsupport/faqs/;Where do AWS Trusted Advisor Priority recommendations come from?;Trusted Advisor Priority recommendations can come from one of two sources: /premiumsupport/faqs/;How is AWS Personal Health Dashboard different from the AWS Service Health Dashboard?;The Service Health Dashboard is a good way to view the overall status of each AWS service, but provides little in terms of how the health of those services is impacting your resources. AWS Personal Health Dashboard provides a personalized view of the health of the specific services that are powering your workloads and applications. What’s more, Personal Health Dashboard proactively notifies you when AWS experiences any events that may affect you, providing quick visibility and guidance to help you minimize the impact of events in progress, and plan for any scheduled changes, such as AWS hardware maintenance.
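The Personal Health Dashboard data described above is also exposed programmatically through the AWS Health API (available to Business, Enterprise On-Ramp, and Enterprise Support plans, as the API-access entry later in this section notes). A minimal sketch, assuming boto3 and the global Health endpoint in us-east-1; the status filter values shown are just one possible query.

```python
# Hedged sketch: list open and upcoming AWS Health events (the data behind
# the Personal Health Dashboard). Requires a Business, Enterprise On-Ramp,
# or Enterprise Support plan; the Health API is served from us-east-1.
import boto3

health = boto3.client("health", region_name="us-east-1")

events = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]}
)

for event in events["events"]:
    print(event["service"], event["eventTypeCode"], event["statusCode"], event["region"])
```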
/premiumsupport/faqs/;What actions should I take based on the status of AWS Personal Health Dashboard?;You will be able to view details about the event that is impacting your environment. AWS Personal Health Dashboard will continue to update the event regularly until it ends and will provide remediation guidance. /premiumsupport/faqs/;What language will the notification be in?;All notifications will be available only in English. We will add support for other languages over time. /premiumsupport/faqs/;What notification channels are available?;AWS Personal Health Dashboard supports API, email, and CloudWatch Events (SQS, SNS, Lambda, Kinesis). Personal Health Dashboard also supports showing alerts in the AWS Management Console navigation bar. /premiumsupport/faqs/;How do I sign up for notifications?;You can navigate to the CloudWatch Events console and write custom rules to filter events of interest. These rules can be wired to targets such as SNS, SQS, Lambda, or Kinesis that will be invoked when your rule pattern matches AWS Personal Health Dashboard events on the CloudWatch Events bus. /premiumsupport/faqs/;Can I customize AWS Personal Health Dashboard?;Yes. You can customize Personal Health Dashboard by setting up notification preferences for the various types of events. You can also create custom remediation actions that are triggered in response to events. Set this up by visiting the CloudWatch Events console. /premiumsupport/faqs/;Can AWS Personal Health Dashboard automate any actions I take today to recover from known events?;AWS Personal Health Dashboard will not take any actions on your behalf on your AWS environment. It will provide you the tooling required to wire up custom actions defined by you. The Personal Health Dashboard events will be published on the CloudWatch Events channel. You can write rules to capture these events and wire them to Lambda functions. AWS Personal Health Dashboard also provides best practices and ‘how-to’ guides that help you define your automated runbooks. /premiumsupport/faqs/;Can I create custom actions with Lambda?;Yes, you can define custom actions in Lambda, and use CloudWatch Events to trigger Lambda actions in response to events. /premiumsupport/faqs/;Can I run diagnostics in AWS Personal Health Dashboard?;No. At this time, running diagnostics directly inside AWS Personal Health Dashboard is not available. However, you could attach a diagnostics automation script that will be executed by Lambda when an event occurs, if wired appropriately. /premiumsupport/faqs/;Will customers have API access to events on AWS Personal Health Dashboard?;Yes. The AWS Personal Health Dashboard event repository will be accessible with the Health API to customers who are on Business, Enterprise On-Ramp, and Enterprise Support plans. Learn more about the AWS Health API. /premiumsupport/faqs/;How does AWS Personal Health Dashboard work with Amazon CloudWatch?;CloudWatch and AWS Personal Health Dashboard can coexist to provide additional value beyond what just one service can provide by itself. While you can create CloudWatch metrics and set alarms for the services available within the console, the Personal Health Dashboard provides notifications and information regarding issues that impact the underlying AWS infrastructure. /premiumsupport/faqs/;How do I get started with my Cloud Operations Review?;Enterprise On-Ramp customers can contact the pool of Technical Account Managers to initiate a Cloud Operations Review.
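The notification-rule workflow described above (a CloudWatch Events rule that matches Personal Health Dashboard events and invokes a target such as SNS) can also be set up with an SDK rather than the console. A minimal sketch, assuming boto3; the rule name and SNS topic ARN are placeholders, and the topic's access policy must allow the events service to publish to it.

```python
# Hedged sketch: a CloudWatch Events (EventBridge) rule matching AWS Health
# events, with an SNS topic as the target. Rule name and ARN are placeholders.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="phd-event-notifications",                       # placeholder rule name
    EventPattern=json.dumps({"source": ["aws.health"]}),  # match all AWS Health events
    State="ENABLED",
)

events.put_targets(
    Rule="phd-event-notifications",
    Targets=[{"Id": "sns-notify", "Arn": "arn:aws:sns:us-east-1:111122223333:phd-alerts"}],
)
```

A tighter event pattern (for example, filtering on detail-type or specific services) can be used in place of the broad source match, depending on which events you want to act on.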
/premiumsupport/faqs/;What third-party software is supported?;AWS Support Business, Enterprise On-Ramp, and Enterprise levels include limited support for common operating systems and common application stack components. AWS Support engineers can assist with the setup, configuration, and troubleshooting of the following third-party platforms and applications: /premiumsupport/faqs/;What if you can’t resolve my third-party software issue?;If we are not able to resolve your issue, we will collaborate with, or refer you to, the appropriate vendor support for that product. In some cases, you may need to have a support relationship with the vendor to receive support from them. /premiumsupport/faqs/;What are some of the most common reasons a customer might require third-party software support?;AWS Support can assist with installation, configuration, and troubleshooting of third-party software on the supported list. For more advanced topics, such as performance tuning, troubleshooting of custom code or scripts, and security questions, it may be necessary to contact the third-party software provider directly. While AWS Support will make every effort to assist, any assistance beyond installation, configuration, and basic troubleshooting of supported third-party software will be on a best-effort basis only. /premiumsupport/faqs/;How do I close my AWS account?;"Before closing your account, be sure to back up any applications and data that you need to retain. AWS may not be able to retrieve your account data after your account is closed. After completing your backup, visit your Account Settings page and choose ""Close Account"". This will close your AWS account and unsubscribe you from all AWS services. You will not be able to access AWS services or launch new resources when your account is closed." /premiumsupport/faqs/;I received an error message when I tried to close my AWS account. What do I need to do?;If you receive an error message when trying to close your account, you can contact your account representative or open an account and billing support case for assistance. /premiumsupport/faqs/;Will I be billed after I close my account?;Usage and billing stop accruing when your account is closed. You will be billed for any usage that has accrued up until the time you closed your account, and your final charges will be billed at the beginning of the following month. /premiumsupport/faqs/;What is cross-account support?;Cross-account support is when a customer opens a premium support case from one account (e.g. account 12345678910) and requests assistance for resources owned by another account (e.g. an instance in account 98765432109). /premiumsupport/faqs/;Why is cross-account support not performed?;Support engineers have no way to determine the access that someone (acting under a user or role in one account) has been granted to resources owned by another account. Due to security and privacy concerns, we can only discuss specific details with the account holder of the resource in question. /premiumsupport/faqs/;I have access to all relevant accounts, how do I log a support request for a shared resource?;Please open a support case from the resource-owning account. If there is a requirement to access the resource from a second account, please open a new support case from the second account as well. Then ask your support engineer to link the two cases, referencing the two case IDs.
/premiumsupport/faqs/;Does membership of AWS Organizations (or Consolidated Billing) allow me to log a cross-account support request?;No. Accounts can be separated in an Organization to isolate resources and permissions among individuals. If you are not restricted to a specific account, please see the previous FAQ. /premiumsupport/faqs/;How do I securely control access to my AWS services and resources?;We recommend that you use AWS Identity and Access Management (IAM), which enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups and use permissions to allow and deny access to AWS resources. IAM enables security best practices by allowing you to grant unique security credentials to users and groups to specify which AWS service APIs and resources they can access. /premiumsupport/faqs/;What is consolidated billing?;Consolidated billing is a feature that allows you to consolidate payment for multiple AWS accounts in your organization by designating one of them to be the payer account. /premiumsupport/faqs/;How can I use my AWS bill to evaluate costs?;AWS provides a number of different ways to explore your AWS monthly bill and to allocate costs by account, resource ID, or customer-defined tags. /premiumsupport/faqs/;What are blended rates?;For billing purposes, AWS treats all the accounts in a Consolidated Billing family as if they're one account. Blended rates appear on your bill as an average price for variable usage across an account family. This allows you to take advantage of two features that are designed to ensure that you pay the lowest available price for AWS products and services: /premiumsupport/faqs/;Why don't I see the same figures in the Billing and Cost Management console as I see in the detailed billing report?;The Billing and Cost Management console and the detailed billing report provide different information based on blended and unblended rates. For more information, see Understanding Blended Rates or contact us through the AWS Support Center. /premiumsupport/faqs/;How do I tell which accounts benefited from Reserved Instance pricing?;The Detailed Billing Report shows the linked accounts that benefited from a Reserved Instance on your consolidated bill. The costs of the Reserved Instances can be unblended to show how the discount was distributed. Reserved Instance utilization reports also show the total cost savings (including upfront Reserved Instance costs) across a Consolidated Bill. /premiumsupport/faqs/;How do I use the AWS Billing and Cost Management console?;The AWS Billing and Cost Management Console is a service that you use to pay your AWS bill, monitor costs, and visualize your AWS spend. There are many ways to use this tool for your account. /premiumsupport/faqs/;How do I use Cost Explorer?;You can use Cost Explorer to visualize patterns in your spending on AWS resources over time. You can quickly identify areas that need further inquiry, and you can see trends that you can use to understand spend and to predict future costs. /premiumsupport/faqs/;How do I use the Amazon EC2 instance usage reports?;You can use the instance usage reports to view your instance usage and cost trends. You can see your usage data in either instance hours or cost. You can choose to see hourly, daily, and monthly aggregates of your usage data. You can filter or group the report by region, Availability Zone, instance type, AWS account, platform, tenancy, purchase option, or tag. 
After you configure a report, you can bookmark it so that it's easy to get back to later. /premiumsupport/faqs/;How do I use the Reserved Instance utilization report?;The Reserved Instance utilization report describes the utilization over time of each group (or bucket) of Amazon EC2 Reserved Instances that you own. Each bucket has a unique combination of region, Availability Zone, instance type, tenancy, offering type, and platform. You can specify the time range that the report covers, from a single day to weeks, months, a year, or three years. /premiumsupport/faqs/;How do I tell if my Reserved Instances are being used?;Three tools are available to determine Reserved Instance utilization: /premiumsupport/faqs/;How do I see how Reserved Instances are applied across my entire Consolidated Bill?;"The detailed billing report shows the hourly detail of all charges on an account or consolidated bill. Near the bottom of the report, line items explain Reserved Instance utilization in an aggregated format (xxx hours purchased; xxx hours used). To configure your account for this report, see Getting Set Up for Usage Reports." /premiumsupport/faqs/;How do I tell if and why a Reserved Instance is underutilized?;In addition to the three tools listed in How do I tell if my Reserved Instances are being used, AWS Trusted Advisor provides best practices (or checks) in four categories: Cost Optimization, Security, Fault Tolerance, and Performance. The Cost Optimization section includes a check for Amazon EC2 Reserved Instances Optimization. For more information about the Trusted Advisor check, see Reserved Instance Optimization Check Questions. /premiumsupport/faqs/;Which accounts are charged sales tax and why?;Tax is normally calculated at the linked account level. Each account must add its own tax exemption. For more information on US sales taxes and VAT taxes, see the following: /premiumsupport/faqs/;How do I submit an urgent limit increase request?;"Submit limit increase requests in the AWS Support Center. Choose ""Create case"", select ""Service Limit Increase"", and then select an item from the ""Limit Type"" list." /premiumsupport/faqs/;How do we bill our end customers based on the detailed billing report?;AWS does not support the billing of reseller end customers because each reseller uses unique pricing and billing structures. We recommend that resellers not use blended rates for billing; these figures are averages and are not meant to reflect actual billed rates. The detailed billing report can show unblended costs for each account on a consolidated bill, which is more helpful for the purpose of billing end customers. /premiumsupport/faqs/;How does AWS determine the location of an account?;The location of an account is determined by the customer tax settings as described here. /premiumsupport/faqs/;How is billing for regional pricing calculated?;The billing for regional pricing is calculated in a similar way to the Enterprise Support billing methodology as described here. /premiumsupport/faqs/;What determines eligibility for regional pricing?;Customers qualify for regional pricing if all of their accounts subscribed to Enterprise Support are located in any combination of the qualifying countries. A customer is still eligible for regional pricing even if the customer has multiple accounts in different countries, as long as all of the countries are on the list of specified countries.
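Illustrative sketch for the "How do I use Cost Explorer?" and Reserved Instance utilization entries above: alongside the console reports, the Cost Explorer API exposes similar data programmatically. The boto3 calls below (GetCostAndUsage and GetReservationUtilization) are a minimal example with an arbitrary date range; they are not a substitute for the specific reports and tools this FAQ refers to.

    import boto3

    ce = boto3.client("ce")  # Cost Explorer API

    # Monthly unblended cost, grouped by service, for an example three-month window.
    costs = ce.get_cost_and_usage(
        TimePeriod={"Start": "2023-01-01", "End": "2023-04-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for period in costs["ResultsByTime"]:
        print(period["TimePeriod"]["Start"])
        for group in period["Groups"]:
            print(" ", group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])

    # Overall Reserved Instance utilization, month by month, for the same window.
    utilization = ce.get_reservation_utilization(
        TimePeriod={"Start": "2023-01-01", "End": "2023-04-01"},
        Granularity="MONTHLY",
    )
    for period in utilization["UtilizationsByTime"]:
        print(period["TimePeriod"]["Start"],
              period["Total"]["UtilizationPercentage"], "% utilized")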
There are four main criteria for determining eligibility for regional pricing: /premiumsupport/faqs/;What is Microsoft End of Support (EOS)?;Microsoft Lifecycle Policy offers 10 years of support (5 years for Mainstream Support and 5 years for Extended Support) for Business and Developer products (such as SQL Server and Windows Server). As per the policy, after the end of the Extended Support period, there will be no patches or security updates. /premiumsupport/faqs/;How does EOS affect my existing instances on Amazon Web Services (AWS)?;There is no direct impact to existing instances. Customers can continue to start, run, and stop instances. /premiumsupport/faqs/;Can I launch new instances that include EOS software from my Custom Amazon Machine Images (AMIs)?;Yes. /premiumsupport/faqs/;Can I import images that contain EOS software into AWS using AWS tools?;Yes, customers can continue to import images to AWS using VM Import/Export (VMIE), Server Migration Service (SMS), or CloudEndure. /premiumsupport/faqs/;How does EOS affect Managed AWS Windows AMIs?;There is no direct impact to existing AMIs registered in customer accounts. /premiumsupport/faqs/;Can I create additional Custom AMIs from existing Custom AMIs in my account that contain EOS software?;Yes. /premiumsupport/faqs/;What are my options for running Microsoft software that is approaching EOS?;AWS customers running EOS software on EC2 instances have several options: /premiumsupport/faqs/;Can I purchase Extended Security Updates to cover instances that run on AWS, utilizing Microsoft EOS software?;Yes, Extended Security Updates are available directly from Microsoft or a Microsoft licensing partner. Read more about Microsoft's Extended Security Updates here. /premiumsupport/faqs/;Which Microsoft products sold by Amazon are approaching EOS, and when will Microsoft cease support?;Note: Information reflects publicly available Microsoft EOS dates as of April 4th, 2019. /premiumsupport/faqs/;What Amazon products and services are affected by EOS and when will changes be made?;Starting July 1st, 2019, Microsoft requires AWS to no longer publish and distribute License Included Managed AWS Windows AMIs (available in AWS Management Console and Quick Start), media, and services that use or contain Microsoft EOS products. Products that have reached end of support in prior years are also subject to these restrictions. The following products and services are affected: /premiumsupport/faqs/;Does the change to Microsoft’s EOS software distribution policy only apply to AWS?;Microsoft has advised that this change will apply to all hyperscale cloud providers. /premiumsupport/faqs/;What are other AWS customers doing?;AWS customers such as Sysco, Hess, Ancestry, and Expedia have successfully migrated and modernized their Windows workloads on AWS. Read more about what AWS customers are doing here.
/premiumsupport/faqs/;What are the cost implications of moving to a supported Microsoft Operating System or SQL Server version?;License Included: There are no additional licensing costs to move to a newer version of the software when using Amazon's License Included options, for example: /premiumsupport/faqs/;If I experience a technical issue running a product that has reached Microsoft EOS, will AWS Support assist me?;Yes, customers with AWS Support plans will be able to engage AWS Support for technical issues. /premiumsupport/faqs/;If I have further questions around the use of Microsoft EOS on AWS, whom should I contact?;Please email aws.EOS.Microsoft@amazon.com. /premiumsupport/faqs/;Specifically, which License Included Managed AWS Windows AMIs are affected and when does this take effect?;July 1st, 2019 /premiumsupport/faqs/;What regions is AWS Incident Detection and Response available in?;AWS Incident Detection and Response is available in English for workloads hosted in the following regions: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), South America (São Paulo). As a premium AWS Support Service, AWS Incident Detection and Response provides 24x7 coverage supported by a global team of engineers. When AWS Incident Detection and Response responds to an incident, the first available Incident Manager from any of our regions is assigned to your case to help you as quickly as possible. /premiumsupport/faqs/;How do I subscribe (or unsubscribe) an account to AWS Incident Detection and Response?;AWS Incident Detection and Response supports Amazon CloudWatch alarms and New Relic via EventBridge. AWS Incident Detection and Response does not replace your monitoring team. AWS Incident Detection and Response works in partnership with your monitoring teams and is focused on the management of critical incidents. /premiumsupport/faqs/;Can I purchase AWS Incident Detection and Response for a fixed duration?;AWS Incident Detection and Response supports Amazon CloudWatch alarms and New Relic via EventBridge. AWS Incident Detection and Response does not replace your monitoring team. AWS Incident Detection and Response works in partnership with your monitoring teams and is focused on the management of critical incidents. /premiumsupport/faqs/;All of my workloads are concentrated in a single account but I only want to enroll a fraction of my workloads into the service. Can I be billed for only the workloads I onboard for monitoring?;AWS Incident Detection and Response supports Amazon CloudWatch alarms and New Relic via EventBridge. AWS Incident Detection and Response does not replace your monitoring team. AWS Incident Detection and Response works in partnership with your monitoring teams and is focused on the management of critical incidents. /premiumsupport/faqs/;How do I onboard individual workloads into the service?;AWS Incident Detection and Response supports Amazon CloudWatch alarms and New Relic via EventBridge. AWS Incident Detection and Response does not replace your monitoring team. AWS Incident Detection and Response works in partnership with your monitoring teams and is focused on the management of critical incidents.
/premiumsupport/faqs/;How do you engage me during an incident?;AWS Incident Detection and Response supports Amazon CloudWatch alarms and New Relic via EventBridge. AWS Incident Detection and Response does not replace your monitoring team. AWS Incident Detection and Response works in partnership with your monitoring teams and is focused on the management of critical incidents. /premiumsupport/faqs/;How does AWS Incident Detection and Response help me during an AWS service event?;AWS Incident Detection and Response supports Amazon CloudWatch alarms and New Relic via EventBridge. AWS Incident Detection and Response does not replace your monitoring team. AWS Incident Detection and Response works in partnership with your monitoring teams and is focused on the management of critical incidents. /premiumsupport/faqs/;Can I use AWS Incident Detection and Response with my existing monitoring tools?;AWS Incident Detection and Response supports Amazon CloudWatch alarms and New Relic via EventBridge. AWS Incident Detection and Response does not replace your monitoring team. AWS Incident Detection and Response works in partnership with your monitoring teams and is focused on the management of critical incidents.
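Illustrative sketch related to the "Can I use AWS Incident Detection and Response with my existing monitoring tools?" entry above: the answer states that Amazon CloudWatch alarms are supported via EventBridge, but the exact onboarding mechanism is not described in this FAQ. The example below therefore only shows the generic building block such an integration relies on: an EventBridge rule that matches CloudWatch alarm state changes and forwards them to a placeholder target of your choosing.

    import json
    import boto3

    events = boto3.client("events")

    # Placeholder target ARN (for example an SNS topic); replace with your own.
    TARGET_ARN = "arn:aws:sns:us-east-1:111122223333:alarm-forwarding"

    # Match CloudWatch alarm state-change events, here only transitions into ALARM.
    events.put_rule(
        Name="cloudwatch-alarm-state-changes",
        EventPattern=json.dumps({
            "source": ["aws.cloudwatch"],
            "detail-type": ["CloudWatch Alarm State Change"],
            "detail": {"state": {"value": ["ALARM"]}},
        }),
        State="ENABLED",
    )

    events.put_targets(
        Rule="cloudwatch-alarm-state-changes",
        Targets=[{"Id": "alarm-forwarder", "Arn": TARGET_ARN}],
    )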