Dataset fields: image_url (string, 95-235 chars), article_url (string, 60-173 chars), title (string, 28-135 chars), image (imagewidth 181-7.09k px), content (string, 1.24k-29k chars), description (string, 285-461 chars), author (string, 7-28 chars), date_published (string, 25 chars), categories (sequence, 1-14 items)
https://d2908q01vomqb2.c…lity-679x630.png
https://aws.amazon.com/blogs/architecture/automating-multi-az-high-availability-for-weblogic-administration-server/
Automating multi-AZ high availability for WebLogic administration server
AWS Architecture Blog | Automating multi-AZ high availability for WebLogic administration server | by Jack Zhou, Robin Geddes, Sami Hoda, and Tony Ten Broeck | 22 SEP 2023 | in Amazon EC2, Architecture, AWS Lambda
Oracle WebLogic Server is used by enterprises to power production workloads, including Oracle E-Business Suite (EBS) and Oracle Fusion Middleware applications. Customer applications are deployed to WebLogic Server instances (managed servers) and managed using an administration server (admin server) within a logical organizational unit called a domain. Clusters of managed servers provide application availability and horizontal scalability, while the single-instance admin server does not host applications. There are various architectures detailing high availability (HA) for WebLogic managed servers. In this post, we demonstrate how to use Availability Zones (AZs) and a floating IP address to achieve a "stretch cluster" (Oracle's terminology).
Figure 1. Overview of a WebLogic domain
Overview of problem
The WebLogic admin server handles domain configuration and management, and monitors both application performance and system health. Historically, WebLogic was configured using IP addresses, with managed servers caching the admin server IP to reconnect if the connection was lost. This can cause issues in a dynamic cloud setup: replacing the admin server from a template changes its IP address, which creates two connectivity issues:
- Communication within the domain: the admin and managed servers communicate via the T3 protocol, which is based on Java RMI.
- Remote access to the admin server console: whether to allow internet admin access, and what additional security controls may be required, is beyond the scope of this post.
Here, we explore how to minimize downtime and achieve HA for your admin server.
Solution overview
For this solution, there are three approaches customers tend to follow:
1. Use a floating virtual IP to keep the address static. This approach is familiar to WebLogic administrators because it replicates historical on-premises HA implementations. The remainder of this post dives into this practical implementation.
2. Use DNS to resolve the admin server IP address. This is also a supported configuration.
3. Run in a "headless configuration" and do not (normally) run the admin server: use the WebLogic Scripting Tool to issue commands, and collect and observe metrics through other tools. Running "headless" requires a high level of operational maturity, and it may not be compatible with certain vendor-packaged applications deployed to WebLogic.
Using a floating IP address for the WebLogic admin server
Here, we discuss the reference WebLogic deployment architecture on AWS, as depicted in Figure 2.
Figure 2. Reference WebLogic deployment with multi-AZ admin HA capability
In this example, a WebLogic domain resides in a virtual private cloud's (VPC) private subnets. The admin server runs on its own Amazon Elastic Compute Cloud (Amazon EC2) instance and is bound to the private IP 10.0.11.8, which floats across AZs within the VPC. There are two ways to achieve this:
1. Create a "dummy" subnet in the VPC (in any AZ), with the smallest allowed subnet size of /28. Excluding the first four and the last IP addresses of the subnet (they are reserved), choose an address. For a 10.0.11.0/28 subnet, we will use 10.0.11.8 and configure the WebLogic admin server to bind to it.
2. Use an IP address outside of the VPC. We discuss this second option and compare both approaches in the later section "Alternate solution for multi-AZ floating IP".
To build this example Amazon Web Services stretch architecture with one WebLogic domain and one admin server:
- Create a VPC across two or more AZs, with one private subnet in each AZ for managed servers and an additional "dummy" subnet.
- Create two EC2 instances, one for each WebLogic managed server (distributed across the private subnets).
- Use an Auto Scaling group to ensure a single admin server is running: create an Amazon EC2 launch template for the admin server, then associate the launch template with an Auto Scaling group whose minimum, maximum, and desired capacity are all 1. The Auto Scaling group (ASG) detects EC2 and/or AZ degradation and launches a new instance in a different AZ if the current one fails.
- Create an AWS Lambda function (example to follow) to be called by the Auto Scaling group lifecycle hook to update the route tables.
- Update the user data commands (example to follow) of the launch template to add the floating IP address to the network interface and start the admin server using the floating IP.
To route traffic to the floating IP, we update route tables for both public and private subnets. We create a Lambda function, launched by the Auto Scaling group lifecycle hook when a new admin instance is created and is still pending (not yet InService). This Lambda code updates routing rules in both route tables, mapping the dummy subnet CIDR (10.0.11.0/28) of the "floating" IP to the admin server's EC2 instance. This updates routes in both the public and private subnets for the dynamically launched admin server, enabling managed servers to connect. A sketch of how the lifecycle hook and its Lambda trigger could be wired up follows.
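The post includes the Lambda function and user data (both shown later) but not the wiring that creates the lifecycle hook and invokes the function. The following is a minimal boto3 sketch of one way that wiring could look; the hook, rule, and function names, the account number, and the region are placeholders rather than values from the post.

```python
# Hypothetical wiring of the ASG launch lifecycle hook to the route-updating Lambda.
# Names and ARNs below are placeholders, not taken from the post.
import json
import boto3

autoscaling = boto3.client("autoscaling")
events = boto3.client("events")
lambda_client = boto3.client("lambda")

ASG_NAME = "weblogic-admin-asg"  # assumed ASG name
LAMBDA_ARN = "arn:aws:lambda:us-east-1:111122223333:function:admin-route-updater"  # placeholder

# Pause new instances in Pending:Wait so routes can be updated before InService.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="admin-launch-hook",
    AutoScalingGroupName=ASG_NAME,
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,
    DefaultResult="ABANDON",
)

# EventBridge rule that matches the launch lifecycle action and targets the Lambda function.
rule_arn = events.put_rule(
    Name="admin-asg-launch-rule",
    EventPattern=json.dumps({
        "source": ["aws.autoscaling"],
        "detail-type": ["EC2 Instance-launch Lifecycle Action"],
        "detail": {"AutoScalingGroupName": [ASG_NAME]},
    }),
)["RuleArn"]

events.put_targets(
    Rule="admin-asg-launch-rule",
    Targets=[{"Id": "route-updater", "Arn": LAMBDA_ARN}],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=LAMBDA_ARN,
    StatementId="allow-eventbridge-admin-asg",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```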
Enabling internet access to the admin server
If enabling internet access to the admin server, create an internet-facing Application Load Balancer (ALB) attached to the public subnets. With the route to the admin server in place, the ALB can forward traffic to it:
- Create an IP-based target group that points to the floating IP.
- Add a forwarding rule in the ALB to route WebLogic admin traffic to the admin server.
A sketch of these two steps follows.
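The post describes the target group and forwarding rule without code; the following boto3 sketch shows one possible shape. The VPC ID, listener ARN, health check path, and WebLogic's default administration port 7001 are assumptions, not values from the post.

```python
# Hypothetical IP-based target group and listener rule pointing at the floating admin IP.
# VPC_ID, LISTENER_ARN, the /console paths, and port 7001 are placeholders/assumptions.
import boto3

elbv2 = boto3.client("elbv2")

VPC_ID = "vpc-0123456789abcdef0"  # placeholder
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/admin-alb/abc/def"  # placeholder

# Target group whose targets are IP addresses, so it can point at 10.0.11.8.
tg = elbv2.create_target_group(
    Name="weblogic-admin-tg",
    Protocol="HTTP",
    Port=7001,
    VpcId=VPC_ID,
    TargetType="ip",
    HealthCheckPath="/console",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the floating IP as the sole target.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "10.0.11.8", "Port": 7001}],
)

# Forward admin console traffic to the target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/console*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```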
User data commands in the launch template to make the admin server accessible upon ASG scale-out
In the admin server EC2 launch template, add user data code to monitor the ASG lifecycle state. When it reaches the InService state, a Lambda function is invoked to update the route tables; then the script starts the WebLogic admin server Java process (and the associated NodeManager, if used). The admin server instance's SourceDestCheck attribute needs to be set to false, enabling it to bind to the logical IP. This change can also be made in the Lambda function.
When a user accesses the admin server from the internet:
1. Traffic flows to the Elastic IP address associated with the internet-facing ALB.
2. The ALB forwards the traffic to the configured target group.
3. The ALB uses the updated routes to reach 10.0.11.8 (the admin server).
When managed servers communicate with the admin server, they use the updated route table to reach 10.0.11.8 (the admin server).
The Lambda function
Here, we present a Lambda function example that sets the EC2 instance's SourceDestCheck attribute to false and updates the route rules for the dummy subnet CIDR (the "floating" IP on the admin server EC2 instance) in both the public and private route tables.

import { AutoScalingClient, CompleteLifecycleActionCommand } from "@aws-sdk/client-auto-scaling";
import { EC2Client, DeleteRouteCommand, CreateRouteCommand, ModifyInstanceAttributeCommand } from "@aws-sdk/client-ec2";

export const handler = async (event, context, callback) => {
  console.log('LogAutoScalingEvent');
  console.log('Received event:', JSON.stringify(event, null, 2));

  // IMPORTANT: replace with your dummy subnet CIDR that the floating IP resides in
  const destCIDR = "10.0.11.0/28";
  // IMPORTANT: replace with your route table IDs
  const rtTables = ["rtb-**************ff0", "rtb-**************af5"];

  const asClient = new AutoScalingClient({region: event.region});
  const eventDetail = event.detail;
  const ec2client = new EC2Client({region: event.region});

  const inputModifyAttr = {
    "SourceDestCheck": { "Value": false },
    "InstanceId": eventDetail['EC2InstanceId'],
  };
  const commandModifyAttr = new ModifyInstanceAttributeCommand(inputModifyAttr);
  await ec2client.send(commandModifyAttr);

  // modify route in two route tables
  for (const rt of rtTables) {
    const inputDelRoute = { // DeleteRouteRequest
      DestinationCidrBlock: destCIDR,
      DryRun: false,
      RouteTableId: rt, // required
    };
    const cmdDelRoute = new DeleteRouteCommand(inputDelRoute);
    try {
      const response = await ec2client.send(cmdDelRoute);
      console.log(response);
    } catch (error) {
      console.log(error);
    }

    const inputCreateRoute = { // CreateRouteRequest
      DestinationCidrBlock: destCIDR,
      DryRun: false,
      InstanceId: eventDetail['EC2InstanceId'],
      RouteTableId: rt, // required
    };
    const cmdCreateRoute = new CreateRouteCommand(inputCreateRoute);
    await ec2client.send(cmdCreateRoute);
  }

  // continue on ASG lifecycle
  const params = {
    AutoScalingGroupName: eventDetail['AutoScalingGroupName'], /* required */
    LifecycleActionResult: 'CONTINUE', /* required */
    LifecycleHookName: eventDetail['LifecycleHookName'], /* required */
    InstanceId: eventDetail['EC2InstanceId'],
    LifecycleActionToken: eventDetail['LifecycleActionToken']
  };
  const cmdCompleteLifecycle = new CompleteLifecycleActionCommand(params);
  const response = await asClient.send(cmdCompleteLifecycle);
  console.log(response);
  return response;
};

Amazon EC2 user data
The following Amazon EC2 user data shows how to add the logical secondary IP address to the instance's primary ENI, keep polling the ASG lifecycle state, and start the admin server Java process once the instance enters the InService state.

Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
ip addr add 10.0.11.8/28 br 10.0.11.255 dev eth0
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
for x in {1..30}
do
  target_state=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/autoscaling/target-lifecycle-state)
  if [ "$target_state" = "InService" ]; then
    su -c 'nohup /mnt/efs/wls/fmw/install/Oracle/Middleware/Oracle_Home/user_projects/domains/domain1/bin/startWebLogic.sh &' ec2-user
    break
  fi
  sleep 10
done

Alternate solution for multi-AZ floating IP
An alternative solution for the floating IP is to use an IP address external to the VPC. The configurations for the ASG, the Amazon EC2 launch template, and the ASG lifecycle hook Lambda function remain the same. However, the ALB cannot be used to reach the WebLogic admin console webapp from the internet, because ALB targets must reside within the VPC. To access the webapp in this scenario, stand up a bastion host in a public subnet. While this approach "saves" 16 VPC IP addresses by avoiding a dummy subnet, there are disadvantages:
- Bastion hosts are not resilient to AZ failure, so this option lacks the true multi-AZ resilience of the first solution.
- Managing multiple bastion hosts across AZs, or a VPN, adds cost and complexity.
Conclusion
AWS has a track record of efficiently running Oracle applications, Oracle EBS, PeopleSoft, and mission-critical JEE workloads. In this post, we delved into an HA solution that uses a multi-AZ floating IP for the WebLogic admin server and an ASG to ensure a single admin server instance. We showed how to use ASG lifecycle hooks and Lambda to automate route updates for the floating IP, and how to configure an ALB to allow internet access to the admin server. This solution achieves multi-AZ resilience for the WebLogic admin server with automated recovery, transforming a traditional WebLogic admin server from a pet into cattle.
Jack Zhou is a Senior Solutions Architect at AWS in Worldwide Public Sector, empowering global consulting partners to build long-term, profitable businesses with AWS. Previously, he was a Senior Technical Account Manager managing high-profile enterprise support accounts.
Robin Geddes is an AWS Migration Specialist focusing on Oracle technology and applications.
Sami Hoda is an AWS Senior Solutions Architect in the Partners Consulting division covering the Worldwide Public Sector. Sami is passionate about projects where equal parts design thinking, innovation, and emotional intelligence can be used to solve problems for and impact people in need.
Tony Ten Broeck serves as an AWS Senior Solutions Architect for the Partners Consulting division, focusing on the public sector. He's committed to positive impact through practical solutions and emphasizes both effectiveness and sound judgment, or as he calls it, "Trying to do less stupid."
Oracle WebLogic Server is used by enterprises to power production workloads, including Oracle E-Business Suite (EBS) and Oracle Fusion Middleware applications. Customer applications are deployed to WebLogic Server instances (managed servers) and managed using an administration server (admin server) within a logical organization unit, called a domain. Clusters of managed servers provide application availability and horizontal […]
Jack Zhou
2023-09-22T10:06:05-07:00
[ "Amazon EC2", "Architecture", "AWS Lambda" ]
https://d2908q01vomqb2.c…itect_REVIEW.jpg
https://aws.amazon.com/blogs/architecture/lets-architect-leveraging-in-memory-databases/
Let’s Architect! Leveraging in-memory databases
AWS Architecture BlogLet’s Architect! Leveraging in-memory databasesby Luca Mezzalira, Federica Ciuffo, Vittorio Denti, and Zamira Jaupaj | on13 SEP 2023| inAmazon ElastiCache,Amazon MemoryDB for Redis,Architecture,Database,Thought Leadership|Permalink|Comments|ShareIn-memory databases play a critical role in modern computing, particularly in reducing the strain on existing resources, scaling workloads efficiently, and minimizing the cost of infrastructure. The advanced performance capabilities of in-memory databases make them vital for demanding applications characterized by voluminous data, real-time analytics, and rapid response requirements.In this edition ofLet’s Architect!, we are introducing caching strategies and, further, examining case studies that useAmazon Web Services(AWS), likeAmazon ElastiCacheorAmazon MemoryDB for Redis, in real workloads where customers share the reasoning behind their approaches. It is very important understanding the context for leveraging a specific solution or pattern, and many common questions can be answered with these resources.Caching challenges and strategiesMany services built at Amazon rely on caching systems in the background to speed up performance, deal with low latency requirements, and avoid overloading on source databases and other microservices. Operating caches and adding caches into our systems may present complex challenges in terms of monitoring, data consistency, and load on the other components of the system. Indeed, a cache can give big benefits, but it’s also a new component to run and keep healthy. Furthermore, engineers may need to use empirical methods to choose the cache size, expiration policy, and eviction policy: we always have to perform tests and use the metrics to tune the setup.With thisAmazon Builder’s Libraryresource, you can learn strategies for using caching in your architecture and best practices directly from Amazon’s engineers.Take me to this Amazon Builder’s Library article!Strategies applied in Amazon applications at scale, explained and contextualized by Amazon engineersHow Yahoo cost optimizes their in-memory workloads with AWSDiscover how Yahoo effectively leverages the power of Amazon ElastiCache and data tiering to process an astounding 1.3 million advertising data events per second, all while generating savings of up to 50% on their overall bill.Data tiering is an ingenious method to scale up to hundreds of terabytes of capacity by intelligently managing data. It achieves this by automatically shifting the least-recently accessed data between RAM and high-performance SSDs.In this video, you will gain insights into how data tiering operates and how you can unlock ultra-fast speeds and seamless scalability for your workloads in a cost-efficient manner. Furthermore, you can also learn how it’s implemented under the hood.Take me to this re:Invent 2022 video!A snapshot of how Yahoo architecture leverages Amazon ElastiCacheUse MemoryDB to build real-time applications for performance and durabilityMemoryDB is a robust, durable database marked by microsecond reads, low single-digit millisecond writes, scalability, and fortified enterprise security. It guarantees an impressive 99.99% availability, coupled with instantaneous recovery without any data loss.In this session, we explore multiple use cases across sectors, such as Financial Services, Retail, and Media & Entertainment, like payment processing, message brokering, and durable session store applications. 
Moreover, through a practical demonstration, you can learn how to utilize MemoryDB to establish a microservices message broker for a Media & Entertainment application.Take me to this AWS Online Tech Talks video!A sample use case for retail applicationSamsung SmartThings powers home automation with Amazon MemoryDBMemoryDB offers the kind of ultra-fast performance that only an in-memory database can deliver, curtailing latency to microseconds and processing 160+ million requests per second —without data loss. In this re:Invent 2022 session, you will understand why Samsung SmartThings selected MemoryDB as the engine to power the next generation of their IoT device connectivity platform, one that processes millions of events every day.You can also discover the intricate design of MemoryDB and how it ensures data durability without compromising the performance of in-memory operations, thanks to the utilization of a multi-AZ transactional log. This session is an enlightening deep-dive into durable, in-memory data operations.Take me to this re:Invent 2022 video!The architecture leveraged by Samsung SmartThings using Amazon MemoryDB for RedisAmazon ElastiCache: In-memory datastore fundamentals, use cases and examplesIn this edition of AWS Online Tech Talks, explore Amazon ElastiCache, a managed service that facilitates the seamless setup, operation, and scaling of widely used, open-source–compatible, in-memory datastores in the cloud environment. This service positions you to develop data-intensive applications or enhance the performance of your existing databases through high-throughput, low-latency, in-memory datastores. Learn how it is leveraged for caching, session stores, gaming, geospatial services, real-time analytics, and queuing functionalities.This course can help cultivate a deeper understanding of Amazon ElastiCache, and how it can be used to accelerate your data processing while maintaining robustness and reliability.Take me to this AWS Online Tech Talks course!A free training course to increase your skills and leverage better in-memory databasesSee you next time!Thanks for joining us to discuss in-memory databases! In 2 weeks, we’ll talk about SQL databases.To find all the blogs from this series, visit theLet’s Architect!list of content on theAWS Architecture Blog.TAGS:cost optimization,database,Let's Architect,re:Invent,RedisLuca MezzaliraLuca is Principal Solutions Architect based in London. He has authored several books and is an international speaker. He lent his expertise predominantly in the solution architecture field. Luca has gained accolades for revolutionizing the scalability of front-end architectures with micro-frontends, from increasing the efficiency of workflows, to delivering quality in products.Federica CiuffoFederica is a Solutions Architect at Amazon Web Services. She is specialized in container services and is passionate about building infrastructure with code. Outside of the office, she enjoys reading, drawing, and spending time with her friends, preferably in restaurants trying out new dishes from different cuisines.Vittorio DentiVittorio Denti is a Machine Learning Engineer at Amazon based in London. After completing his M.Sc. in Computer Science and Engineering at Politecnico di Milano (Milan) and the KTH Royal Institute of Technology (Stockholm), he joined AWS. Vittorio has a background in distributed systems and machine learning. 
He's especially passionate about software engineering and the latest innovations in machine learning science. Zamira Jaupaj is an Enterprise Solutions Architect based in the Netherlands. She is a highly passionate IT professional with over 10 years of multi-national experience in designing and implementing critical and complex solutions with containers, serverless, and data analytics for small and enterprise companies.
In-memory databases play a critical role in modern computing, particularly in reducing the strain on existing resources, scaling workloads efficiently, and minimizing the cost of infrastructure. The advanced performance capabilities of in-memory databases make them vital for demanding applications characterized by voluminous data, real-time analytics, and rapid response requirements. In this edition of Let’s Architect!, […]
Luca Mezzalira
2023-09-13T08:36:46-07:00
[ "Amazon ElastiCache", "Amazon MemoryDB for Redis", "Architecture", "Database", "Thought Leadership" ]
https://d2908q01vomqb2.c…ge-1-927x630.jpg
https://aws.amazon.com/blogs/architecture/operating-models-for-web-app-security-governance-in-aws/
Operating models for Web App Security Governance in AWS
AWS Architecture BlogOperating models for Web App Security Governance in AWSby Chamandeep Singh, Preet Sawhney, and Prabhakaran Thirumeni | on11 SEP 2023| inAmazon API Gateway,Architecture,AWS Firewall Manager,AWS WAF|Permalink|Comments|ShareFor most organizations, protecting their high value assets is a top priority.AWS Web Application Firewall(AWS WAF) is an industry leading solution that protects web applications from the evolving threat landscape, which includes common web exploits and bots. These threats affect availability, compromise security, or can consume excessive resources. Though AWS WAF is a managed service, the operating model of this critical first layer of defence is often overlooked.Operating models for a core service like AWS WAF differ depending on your company’s technology footprint, and use cases are dependent on workloads. While some businesses were born in the public cloud and have modern applications, many large established businesses have classic and legacy workloads across their business units. We will examine three distinct operating models using AWS WAF,AWS Firewall Manager service(AWS FMS),AWS Organizations, and other AWS services.Operating ModelsI. CentralizedThe centralized model works well for organizations where the applications to be protected by AWS WAF are similar, and rules can be consistent. With multi-tenant environments (where tenants share the same infrastructure or application), AWS WAF can be deployed with the sameweb access control lists(web ACLs) and rules for consistent security. Content management systems (CMS) also benefit from this model, since consistent web ACL and rules can protect multiple websites hosted on their CMS platform. This operating model provides uniform protection against web-based attacks and centralized administration across multiple AWS accounts. For managing all your accounts and applications in AWS Organizations, use AWS Firewall Manager.AWS Firewall Manager simplifies your AWS WAF administration and helps you enforce AWS WAF rules on the resources in all accounts in an AWS Organization, by usingAWS Configin the background. The compliance dashboard gives you a simplified view of the security posture. A centralized information security (IS) team can configure and manage AWS WAF’s managed and custom rules.AWS Managed Rulesare designed to protect against common web threats, providing an additional layer of security for your applications. By leveraging AWS Managed Rules and their pre-configured rule groups, you can streamline the management of WAF configurations. This reduces the need for specialized teams to handle these complex tasks and thereby alleviates undifferentiated heavy lifting.A centralized operating pattern (see Figure 1) requires IS teams to construct an AWS WAF policy by using AWS FMS and then implement it at scale in each and every account. Keeping current on the constantly changing threat landscape can be time-consuming and expensive. Security teams will have the option of selecting one or more rule groups from AWS Managed Rules or anAWS Marketplacesubscription for each web ACL, along with any custom rule needed.Figure 1. Centralized operating model for AWS WAFAWS Config managed rule setsensure AWS WAF logging, rule groups, web ACLs, and regional and global AWS WAF deployments have no empty rule sets. 
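Before the discussion turns to how AWS Config and AWS CloudTrail keep these deployments compliant, here is a minimal boto3 sketch of the kind of web ACL the information security team would define, using a single AWS Managed Rules rule group. The ACL name is illustrative, and in the centralized model the equivalent configuration would normally be distributed as an AWS Firewall Manager policy rather than created account by account.

```python
# Illustrative only: a regional web ACL with one AWS Managed Rules rule group.
# In the centralized model, AWS Firewall Manager would push an equivalent policy
# to every account instead of creating ACLs one by one.
import boto3

wafv2 = boto3.client("wafv2")

response = wafv2.create_web_acl(
    Name="central-baseline-web-acl",  # illustrative name
    Scope="REGIONAL",                 # use CLOUDFRONT for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-common-rule-set",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            # Managed rule groups use OverrideAction rather than Action.
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-common-rule-set",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "central-baseline-web-acl",
    },
)
print(response["Summary"]["ARN"])
```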
Managed rule sets simplify compliance monitoring and reporting, while assuring security and compliance.AWS CloudTrailmonitors changes to AWS WAF configurations, providing valuable auditing capability of your operating environment.This model places the responsibility for defining, enforcing, and reviewing security policies, as well as remediating any issues, squarely on the security administrator and IS team. While comprehensive, this approach may require careful management to avoid potential bottlenecks, especially in larger-scale operations.II. DistributedMany organizations start their IT operations on AWS from their inception. These organizations typically have multi-skilled infrastructure and development teams and a lean operating model. The distributed model shown in Figure 2 is a good fit for them. In this case, the application team understands the underlying infrastructure components and the Infrastructure as Code (IaC) that provisions them. It makes sense for these development teams to also manage the interconnected application security components, like AWS WAF.The application teams own the deployment of AWS WAF and the setup of the Web ACLs for their respective applications. Typically, the Web ACL will be a combination of baseline rule groups anduse casespecific rule groups, both deployed and managed by the application team.One of the challenges that comes with the distributed deployment is the inconsistency in rules’ deployment which can result in varying levels of protection. Conflicting priorities within application teams can sometimes compromise the focus on security, prioritizing feature rollouts over comprehensive risk mitigation, for example. A strong governance model can be very helpful in situations like these, where the security team might not be responsible for deploying the AWS WAF rules, but do need security posture visibility. AWS Security services like Security Hub and Config rules can help set these parameters. For example, some of the managed Config rules and Security Hub controls check if AWS WAF isenabledforApplication Load Balancer(ALB) andAmazon API Gateway, and also if the associated Web ACL isempty.Figure 2. Distributed operating model for AWS WAFIII. HybridAn organization that has a diverse range of customer-facing applications hosted in a number of different AWS accounts can benefit from a hybrid deployment operating model. Organizations whose infrastructure is managed by a combination of an in-house security team, third-party vendors, contractors, and a managed cybersecurity operations center (CSOC) can also use this model. In this case, the security team can build and enforce a core AWS WAF rule set using AWS Firewall Manager. Application teams, can build and manage additional rules based on the requirements of the application. For example,use casespecific rule groups will be different for PHP applications as compared to WordPress-based applications.Information security teams can specify how core rule groups are ordered. The application administrator has the ability to add rules and rule groups that will be executed between the two rule group sets. This approach ensures that adequate security is applied to all legacy and modern applications, and developers can still write and manage custom rules for enhanced protection.Organizations should adopt a collaborativeDevSecOps modelof development, where both the security team and the application development teams will build, manage, and deploy security rules. 
This can also be considered a hybrid approach combining the best of the central and distributed models, as shown in Figure 3.Figure 3. Hybrid operating model for AWS WAFGovernance is shared between the centralized security team responsible for baseline rules sets deployed across all AWS accounts, and the individual application team responsible for AWS WAF custom rule sets. To maintain security and compliance,AWS Config checksAmazon CloudFront,AWS AppSync, Amazon API Gateway, and ALB for AWS WAF association with managed rule sets.AWS Security Hubcombines and prioritizesAWS Firewall Manager security findings, enabling visibility into AWS WAF rule conformance across AWS accounts and resources. This model requires close coordination between the two teams to ensure that security policies are consistent and all security issues are effectively addressed.The AWS WAF incident response strategy includes detecting, investigating, containing, and documenting incidents, alerting personnel, developing response plans, implementing mitigation measures, and continuous improvement based on lessons learned. Threat modelling for AWS WAF involves identifying assets, assessing threats and vulnerabilities, defining security controls, testing and monitoring, and staying updated on threats and AWS WAF updates.ConclusionUsing the appropriate operating model is key to ensuring that the right web application security controls are implemented. It accounts for the needs of both business and application owners. In the majority of implementations, the centralized and hybrid model works well, by providing a stratified policy enforcement. However, the distributed method can be used to manage specific use cases. Amazon Firewall Manager services can be used to streamline the management of centralized and hybrid operating models across AWS Organizations.Chamandeep SinghChamandeep is a Senior Partner Solutions Architect with AWS. He is passionate about cloud security and helps enterprises create secure & well architected cloud solutions. He collaborates with the Global Security & Edge Field team to enhance AWS services and create security recommendations. Chamandeep works with AWS GSI partners and customers for the development of scalable, secure, and resilient cloud solutions.Preet SawhneyPreet is a Senior Security Consultant with AWS Professional Services in Sydney, Australia. He specializes in collaborating with AWS customers and partners to develop tailored security strategies that meet the ever-changing demands of cloud security.Prabhakaran ThirumeniPrabhakaran is an AWS Senior Partner Solutions Architect, serving as a vital member of the Global Security, Network, & Edge Field community. His expertise lies in delivering exceptional consulting and thought leadership to esteemed GSI partners, and helping shape their digital transformation strategies and solution design choices.CommentsView Comments
For most organizations, protecting their high value assets is a top priority. AWS Web Application Firewall (AWS WAF) is an industry leading solution that protects web applications from the evolving threat landscape, which includes common web exploits and bots. These threats affect availability, compromise security, or can consume excessive resources. Though AWS WAF is a […]
Chamandeep Singh
2023-09-11T14:53:45-07:00
[ "Amazon API Gateway", "Architecture", "AWS Firewall Manager", "AWS WAF" ]
https://d2908q01vomqb2.c…ure-1200x630.png
https://aws.amazon.com/blogs/architecture/reduce-costs-and-enable-integrated-sms-tracking-with-braze-url-shortening/
Reduce costs and enable integrated SMS tracking with Braze URL shortening
AWS Architecture BlogReduce costs and enable integrated SMS tracking with Braze URL shorteningby Umesh Kalaspurkar, Donnie Kendall, and Ian Abels | on01 SEP 2023| inAmazon DynamoDB,Amazon DynamoDB Accelerator (DAX),Amazon VPC,Architecture|Permalink|Comments|ShareAs competition grows fiercer, marketers need ways to ensure they reach each user with personalized content on their most critical channels. Short message/messaging service (SMS) is a key part of that effort, touching more than 5 billion people worldwide, with an impressive 82% open rate. However, SMS lacks the built-in engagement metrics supported by other channels.To bridge this gap, leading customer engagement platform,Braze, recently built an in-house SMS link shortening solution usingAmazon DynamoDBandAmazon DynamoDB Accelerator(DAX). It’s designed to handle up to 27 billion redirects per month, allowing marketers to automatically shorten SMS-related URLs. Alongside theBraze Intelligence Suite, you can use SMS click data in reporting functions and retargeting actions. Read on to learn how Braze created this feature and the impact it’s having on marketers and consumers alike.SMS link shortening approachMany Braze customers have used third-party SMS link shortening solutions in the past. However, this approach complicates the SMS composition process and isolates click metrics from Braze analytics. This makes it difficult to get a full picture of SMS performance.Figure 1. Multiple approaches for shortening URLsThe following table compares all 3 approaches for their pros and cons.Scenario#1 – Unshortened URL in SMS#2 – 3rd Party Shortener#3 – Braze Link Shortening & Click TrackingLow Character CountX✓✓Total ClicksX✓✓Ability to Retarget UsersXX✓Ability to Trigger Subsequent MessagesXX✓With link shortening built in-house and more tightly integrated into the Braze platform, Braze can maintain more control over their roadmap priority. By developing the tool internally, Braze achieved a 90% reduction in ongoing expenses compared with the $400,000 annual expense associated with using an outside solution.Braze SMS link shortening: Flow and architectureFigure 2. SMS link shortening architectureThe following steps explain the link shortening architecture:First, customers initiate campaigns via the Braze Dashboard. Using this interface, they can also make requests to shorten URLs.The URL registration process is managed by a Kubernetes-deployed Go-based service. This service not only shortens the provided URL but also maintains reference data in Amazon DynamoDB.After processing, the dashboard receives the generated campaign details alongside the shortened URL.The fully refined campaign can be efficiently distributed to intended recipients through SMS channels.Upon a user’s interaction with the shortened URL, the message gets directed to the URL redirect service. This redirection occurs through an Application Load Balancer.The redirect service processes links in messages, calls the service, and replaces links before sending to carriers.Asynchronous calls feed data to a Kafka queue for metrics, using the HTTP sink connector integrated with Braze systems.The registration and redirect services are decoupled from the Braze platform to enable independent deployment and scaling due to different requirements. Both the services are running the same code, but with different endpoints exposed, depending on the functionality of a given Kubernetes pod. 
This restricts internal access to the registration endpoint and permits independent scaling of the services, while still maintaining a fast response time.Braze SMS link shortening: ScaleRight now, our customers use the Braze platform to send about 200 million SMS messages each month, with peak speeds of around 2,000 messages per second. Many of these messages contain one or more URLs that need to be shortened. In order to support the scalability of the link shortening feature and give us room to grow, we designed the service to handle 33 million URLs sent per month, and 3.25 million redirects per month. We assumed that we’d see up to 65 million database writes per month and 3.25 million reads per month in connection with the redirect service. This would require storage of 65 GB per month, with peaks of ~2,000 writes and 100 reads per second.With these needs in mind, we carried out testing and determined that Amazon DynamoDB made the most sense as the backend database for the redirect service. To determine this, we tested read and write performance and found that it exceeded our needs. Additionally, it was fully managed, thus requiring less maintenance expertise, and included DAX out of the box. Most clicks happen close to send, so leveraging DAX helps us smooth out the read and write load associated with the SMS link shortener.Because we know how long we must keep the relevant written elements at write time, we’re able to useDynamoDB Time to Live(TTL) to effectively manage their lifecycle. Finally, we’re careful to evenly distribute partition keys to avoid hot partitions, and DynamoDB’s autoscaling capabilities make it possible for us to respond more efficiently to spikes in demand.Braze SMS link shortening: FlowFigure 3. Braze SMS link shortening flowWhen the marketer initiates an SMS send, Braze checks its primary datastore (a MongoDB collection) to see if the link has already been shortened (see Figure 3). If it has, Braze re-uses that shortened link and continues the send. If it hasn’t, the registration process is initiated to generate a new site identifier that encodes the generation date and saves campaign information in DynamoDB via DAX.The response from the registration service is used to generate a short link (1a) for the SMS.A recipient gets an SMS containing a short link (2).Recipient decides to tap it (3). Braze smoothly redirects them to the destination URL, and updates the campaign statistics to show that the link was tapped.UsingAmazon Route 53’s latency-based routing, Braze directs the recipient to the nearest endpoint (Braze currently has North America and EU deployments), then inspects the link to ensure validity and that it hasn’t expired. If it passes those checks, the redirect service queries DynamoDB via DAX for information about the redirect (3a). Initial redirects are cached at send time, while later requests query the DAX cache.The user is redirected with a P99 redirect latency of less than 10 milliseconds (3b).Emit campaign-level metrics on redirects.Braze generates URL identifiers, which serve as the partition key to the DynamoDB collection, by generating a random number. We concatenate the generation date timestamp to the number, then Base66 encode the value. This results in a generated URL that looks like https://brz.ai/5xRmz, with “5xRmz” being the encoded URL identifier. The use of randomized partition keys helps avoid hot, overloaded partitions. Embedding the generation date lets us see when a given link was generated without querying the database. 
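The post describes this identifier scheme (random number plus generation timestamp, Base66-encoded, stored in DynamoDB with a TTL) but does not publish code; the registration service is actually written in Go. The following is a minimal Python sketch under stated assumptions: the 66-character alphabet, the table name short_links, and the 90-day retention are guesses, not Braze's real values.

```python
# Hypothetical sketch of Braze-style short-link registration: random ID plus
# generation timestamp, Base66-encoded, stored in DynamoDB with a TTL.
# The alphabet, table name, and 90-day retention are assumptions, not from the post.
import secrets
import time

import boto3

# 66 URL-safe characters; the post says "Base66" but does not publish the alphabet.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-._~"
TABLE = boto3.resource("dynamodb").Table("short_links")


def base66_encode(number: int) -> str:
    """Encode a non-negative integer using the 66-character alphabet above."""
    if number == 0:
        return ALPHABET[0]
    digits = []
    while number > 0:
        number, remainder = divmod(number, 66)
        digits.append(ALPHABET[remainder])
    return "".join(reversed(digits))


def register_link(destination_url: str, campaign_id: str) -> str:
    """Create a short identifier and persist the mapping with a TTL."""
    now = int(time.time())
    # Random component first so partition keys stay evenly distributed,
    # with the generation timestamp concatenated after it.
    raw_id = (secrets.randbelow(2**32) << 32) | now
    short_id = base66_encode(raw_id)
    TABLE.put_item(
        Item={
            "short_id": short_id,                # partition key
            "destination_url": destination_url,
            "campaign_id": campaign_id,
            "created_at": now,
            "expires_at": now + 90 * 24 * 3600,  # DynamoDB TTL attribute (assumed 90 days)
        }
    )
    return f"https://brz.ai/{short_id}"
```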
Knowing the generation date also helps us maintain performance and reduce costs by removing old links from the database. Other cost control measures include autoscaling and the use of DAX to avoid repeat reads of the same data. We also query DynamoDB directly against a hash key, avoiding scatter-gather queries.
Braze link shortening feature results
Since its launch, SMS link shortening has been used by over 300 Braze customer companies in more than 700 million SMS messages. This includes 50% of the total SMS volume sent by Braze during last year's Black Friday period. There has been a tangible reduction in the time it takes to build and send SMS. "The Motley Fool", a financial media company, saved up to four hours of work per month while driving click rates of up to 15%. Another Braze client utilized multimedia messaging service (MMS) and link shortening to encourage users to shop during their "Smart Investment" campaign, rewarding users with additional store credit. Using the engagement data collected with Braze link shortening, they were able to offer engaged users unique messaging and follow-up offers. They retargeted users who did not interact with the message via other Braze messaging channels.
Conclusion
The Braze platform is designed to be both accessible to marketers and capable of supporting best-in-class cross-channel customer engagement. Our SMS link shortening feature, supported by AWS, enables marketers to provide an exceptional user experience and save time and money.
Further reading: Braze SMS Marketing 101, Braze SMS Link Shortening Docs
Umesh Kalaspurkar is a Sr Solutions Architect at AWS, and brings more than 20 years of experience in design and delivery of Digital Innovation and Transformation projects, across enterprises and startups. He is motivated by helping customers identify and overcome challenges. Outside of work, Umesh enjoys being a father, skiing, and traveling.
Donnie Kendall is a Sr Software Engineer at Braze, and has over a decade of experience building highly scalable software, both in the cloud and on-premises. Outside of work, Donnie enjoys being a father, traveling, and playing the sax.
Ian Abels is a Product Manager at Braze, and brings a pragmatic approach to product development. He comes from an engineering background, and has helped onboard a number of the largest Braze customers. In his spare time, Ian enjoys reading and playing music.
As competition grows fiercer, marketers need ways to ensure they reach each user with personalized content on their most critical channels. Short message/messaging service (SMS) is a key part of that effort, touching more than 5 billion people worldwide, with an impressive 82% open rate. However, SMS lacks the built-in engagement metrics supported by other […]
Umesh Kalaspurkar
2023-09-01T05:31:58-07:00
[ "Amazon DynamoDB", "Amazon DynamoDB Accelerator (DAX)", "Amazon VPC", "Architecture" ]
https://d2908q01vomqb2.c…itect_REVIEW.jpg
https://aws.amazon.com/blogs/architecture/lets-architect-cost-optimizing-aws-workloads/
Let’s Architect! Cost-optimizing AWS workloads
AWS Architecture BlogLet’s Architect! Cost-optimizing AWS workloadsby Luca Mezzalira, Federica Ciuffo, Vittorio Denti, and Zamira Jaupaj | on30 AUG 2023| inAmazon Elastic Container Service,Architecture,AWS Budgets,AWS Cloud Financial Management,AWS Lambda,Compute,Graviton,Thought Leadership|Permalink|Comments|ShareEvery software component built by engineers and architects is designed with a purpose: to offer particular functionalities and, ultimately, contribute to the generation of business value. We should consider fundamental factors, such as the scalability of the software and the ease of evolution during times of business changes. However, performance and cost are important factors as well since they can impact the business profitability.This edition of Let’s Architect! followsa similar series post from 2022, which discusses optimizing the cost of an architecture. Today, we focus on architectural patterns, services, and best practices to design cost-optimized cloud workloads. We also want to identify solutions, such as the use of Graviton processors, for increased performance at lower price. Cost optimization is a continuous process that requires the identification of the right tools for each job, as well as the adoption of efficient designs for your system.AWS re:Invent 2022 – Manage and control your AWS costsGovern cloud usage and avoid cost surprises without slowing down innovation within your organization. In this re:Invent 2022 session, you can learn how to set up guardrails and operationalize cost control within your organizations using services, such asAWS BudgetsandAWS Cost Anomaly Detection, and explore the latest enhancements in the AWS cost control space. Additionally,Mercado Libreshares how they automate their cloud cost control through central management and automated algorithms.Take me to this re:Invent 2022 video!Work backwards from team needs to define/deploy cloud governance in AWS environmentsCompute optimizationWhen it comes to optimizing compute workloads, there are many tools available, such asAWS Compute Optimizer,Amazon EC2 Spot Instances,Savings Plans, andGravitoninstances. Modernizing your applications can also lead to cost savings, but you need to know how to use the right tools and techniques in an effective and efficient way.ForAWS Lambdafunctions, you can use theAWS Lambda Cost Optimization videoto learn how to optimize your costs. The video covers topics, such as understanding and graphing performance versus cost, code optimization techniques, and avoiding idle wait time. If you are usingAmazon Elastic Container Service(Amazon ECS) andAWS Fargate, you can watch a Twitch video oncost optimization using Amazon ECS and AWS Fargateto learn how to adjust your costs. The video covers topics like using spot instances, choosing the right instance type, and using Fargate Spot.Finally, withAmazon Elastic Kubernetes Service(Amazon EKS), you can useKarpenter, an open-source Kubernetes cluster auto scaler to help optimize compute workloads. Karpenter can help you launch right-sized compute resources in response to changing application load, help you adopt spot and Graviton instances. 
To learn more about Karpenter, read the postHow CoStar uses Karpenter to optimize their Amazon EKS Resourceson theAWS Containers Blog.Take me toCost Optimization using Amazon ECS and AWS Fargate!Take me toAWS Lambda Cost Optimization!Take me toHow CoStar uses Karpenter to optimize their Amazon EKS Resources!Karpenter launches and terminates nodes to reduce infrastructure costsAWS Lambda general guidance for cost optimizationAWS Graviton deep dive: The best price performance for AWS workloadsThe choice of the hardware is a fundamental driver for performance, cost, as well as resource consumption of the systems we build. Graviton is a family of processors designed by AWS to support cloud-based workloads and give improvements in terms of performance and cost. This re:Invent 2022 presentation introduces Graviton and addresses the problems it can solve, how the underlying CPU architecture is designed, and how to get started with it. Furthermore, you can learn the journey to move different types of workloads to this architecture, such as containers, Java applications, and C applications.Take me to this re:Invent 2022 video!AWS Graviton processors are specifically designed by AWS for cloud workloads to deliver the best price performanceAWS Well-Architected Labs: Cost OptimizationTheCost Optimization section of the AWS Well Architected Workshophelps you learn how to optimize your AWS costs by using features, such as AWS Compute Optimizer, Spot Instances, and Savings Plans. The workshop includes hands-on labs that walk you through the process of optimizing costs for different types of workloads and services, such asAmazon Elastic Compute Cloud, Amazon ECS, and Lambda.Take me to this AWS Well-Architected lab!Savings Plans is a flexible pricing model that can help reduce expenses compared with on-demand pricingSee you next time!Thanks for joining us to discuss cost optimization! In 2 weeks, we’ll talk about in-memory databases and caching systems.To find all the blogs from this series, visit theLet’s Architect!list of contenton theAWS Architecture Blog.TAGS:cost optimization,Let's Architect,re:Invent,serverless,workshopsLuca MezzaliraLuca is Principal Solutions Architect based in London. He has authored several books and is an international speaker. He lent his expertise predominantly in the solution architecture field. Luca has gained accolades for revolutionizing the scalability of front-end architectures with micro-frontends, from increasing the efficiency of workflows, to delivering quality in products.Federica CiuffoFederica is a Solutions Architect at Amazon Web Services. She is specialized in container services and is passionate about building infrastructure with code. Outside of the office, she enjoys reading, drawing, and spending time with her friends, preferably in restaurants trying out new dishes from different cuisines.Vittorio DentiVittorio Denti is a Machine Learning Engineer at Amazon based in London. After completing his M.Sc. in Computer Science and Engineering at Politecnico di Milano (Milan) and the KTH Royal Institute of Technology (Stockholm), he joined AWS. Vittorio has a background in distributed systems and machine learning. He's especially passionate about software engineering and the latest innovations in machine learning science.Zamira JaupajZamira is an Enterprise Solutions Architect based in the Netherlands. 
She is a highly passionate IT professional with over 10 years of multi-national experience in designing and implementing critical and complex solutions with containers, serverless, and data analytics for small and enterprise companies.
Every software component built by engineers and architects is designed with a purpose: to offer particular functionalities and, ultimately, contribute to the generation of business value. We should consider fundamental factors, such as the scalability of the software and the ease of evolution during times of business changes. However, performance and cost are important factors […]
Luca Mezzalira
2023-08-30T06:33:21-07:00
[ "Amazon Elastic Container Service", "Architecture", "AWS Budgets", "AWS Cloud Financial Management", "AWS Lambda", "Compute", "Graviton", "Thought Leadership" ]
https://d2908q01vomqb2.c…itect_REVIEW.jpg
https://aws.amazon.com/blogs/architecture/lets-architect-security-in-software-architectures/
Let’s Architect! Security in software architectures
AWS Architecture BlogLet’s Architect! Security in software architecturesby Luca Mezzalira, Federica Ciuffo, Vittorio Denti, and Zamira Jaupaj | on16 AUG 2023| inAmazon Elastic Container Service,Amazon Elastic Kubernetes Service,Architecture,AWS Secrets Manager,Containers,Security, Identity, & Compliance,Thought Leadership|Permalink|Comments|ShareSecurity is fundamental for each product and service you are building with. Whether you are working on the back-end or the data and machine learning components of a system, the solution should be securely built.In 2022, we discussed security in our postLet’s Architect!Architecting for Security. Today, we take a closer look at general security practices for your cloud workloads to secure both networks and applications, with a mix of resources to show you how to architect for security using the services offered by Amazon Web Services (AWS).In this edition ofLet’s Architect!, we share some practices for protecting your workloads from the most common attacks, introduce theZero Trustprinciple (you can learn how AWS itself is implementing it!), plus how to move to containers and/or alternative approaches for managing your secrets.A deep dive on the current security threat landscape with AWSThis session from AWS re:Invent, security engineers guide you through the most common threat vectors and vulnerabilities that AWS customers faced in 2022. For each possible threat, you can learn how it’s implemented by attackers, the weaknesses attackers tend to leverage, and the solutions offered by AWS to avert these security issues. We describe this as fundamental architecting for security: this implies adopting suitable services to protect your workloads, as well as follow architectural practices for security.Take me to this re:Invent 2022 session!Statistics about common attacks and how they can be launchedZero Trust: Enough talk, let’s build better securityWhat isZero Trust? It is a security model that produces higher security outcomes compared with the traditional network perimeter model.How does Zero Trust work in practice, and how can you start adopting it? This AWS re:Invent 2022 session defines the Zero Trust models and explains how to implement one. You can learn how it is used within AWS, as well as how any architecture can be built with these pillars in mind. Furthermore, there is a practical use case to show you howDelphixput Zero Trust into production.Take me to this re:Invent 2022 session!AWS implements the Zero Trust principle for managing interactions across different servicesA deep dive into container security on AWSNowadays, it’s vital to have a thorough understanding of a container’s underlying security layers. AWS services, likeAmazon Elastic Kubernetes ServiceandAmazon Elastic Container Service, have harnessed these Linux security-layer protections, keeping a sharp focus onthe principle of least privilege. This approach significantly minimizes the potential attack surface by limiting the permissions and privileges of processes, thus upholding the integrity of the system.This re:Inforce 2023 session discusses best practices for securing containers for your distributed systems.Take me to this re:Inforce 2023 session!Fundamentals and best practices to secure containersMigrating your secrets to AWS Secrets ManagerSecrets play a critical role in providing access to confidential systems and resources. 
Ensuring the secure and consistent management of these secrets, however, presents a challenge for many organizations.Anti-patterns observed in numerous organizational secrets management systems include sharing plaintext secrets via unsecured means, such as emails or messaging apps, which can allow application developers to view these secrets in plaintext or even neglect to rotate secrets regularly. This detailed guidance walks you through the steps of discovering and classifying secrets, plus explains the implementation and migration processes involved in transferring secrets toAWS Secrets Manager.Take me to this AWS Security Blog post!An organization’s perspectives and responsibilities when building a secrets management solutionConclusionWe’re glad you joined our conversation on building secure architectures! Join us in a couple of weeks when we’ll talk about cost optimization on AWS.To find all the blogs from this series, visit theLet’s Architect!list of content on theAWS Architecture Blog.TAGS:containers,Kubernetes,Let's Architect,Networking,security,well architectedLuca MezzaliraLuca is Principal Solutions Architect based in London. He has authored several books and is an international speaker. He lent his expertise predominantly in the solution architecture field. Luca has gained accolades for revolutionizing the scalability of front-end architectures with micro-frontends, from increasing the efficiency of workflows, to delivering quality in products.Federica CiuffoFederica is a Solutions Architect at Amazon Web Services. She is specialized in container services and is passionate about building infrastructure with code. Outside of the office, she enjoys reading, drawing, and spending time with her friends, preferably in restaurants trying out new dishes from different cuisines.Vittorio DentiVittorio Denti is a Machine Learning Engineer at Amazon based in London. After completing his M.Sc. in Computer Science and Engineering at Politecnico di Milano (Milan) and the KTH Royal Institute of Technology (Stockholm), he joined AWS. Vittorio has a background in distributed systems and machine learning. He's especially passionate about software engineering and the latest innovations in machine learning science.Zamira JaupajZamira is an Enterprise Solutions Architect based in the Netherlands. She is highly passionate IT professional with over 10 years of multi-national experience in designing and implementing critical and complex solutions with containers, serverless, and data analytics for small and enterprise companies.CommentsView Comments
Security is fundamental for each product and service you are building with. Whether you are working on the back-end or the data and machine learning components of a system, the solution should be securely built. In 2022, we discussed security in our post Let’s Architect! Architecting for Security. Today, we take a closer look at […]
Luca Mezzalira
2023-08-16T05:58:33-07:00
[ "Amazon Elastic Container Service", "Amazon Elastic Kubernetes Service", "Architecture", "AWS Secrets Manager", "Containers", "Security, Identity, & Compliance", "Thought Leadership" ]
https://d2908q01vomqb2.c…11/Figure-1..png
https://aws.amazon.com/blogs/architecture/how-seatgeek-uses-aws-to-control-authorization-authentication-and-rate-limiting-in-a-multi-tenant-saas-application/
How SeatGeek uses AWS Serverless to control authorization, authentication, and rate-limiting in a multi-tenant SaaS application
AWS Architecture Blog | How SeatGeek uses AWS Serverless to control authorization, authentication, and rate-limiting in a multi-tenant SaaS application | by Umesh Kalaspurkar, Anderson Parra, Anton Aleksandrov, João Mikos, and Samit Kumbhani | 14 AUG 2023 | in Amazon API Gateway, Amazon DynamoDB, Architecture, AWS Lambda, Serverless
SeatGeek is a ticketing platform for web and mobile users, offering ticket purchase and reselling for sports games, concerts, and theatrical productions. In 2022, SeatGeek had an average of 47 million daily tickets available, and their mobile app was downloaded 33+ million times.
Historically, SeatGeek used multiple identity and access tools internally. Applications were individually managing authorization, leading to increased overhead and a need for more standardization. SeatGeek sought to simplify the API provided to customers and partners by abstracting and standardizing the authorization layer. They were also looking to introduce centralized API rate-limiting to prevent noisy neighbor problems in their multi-tenant SaaS application.
In this blog, we will take you through SeatGeek's journey and explore the solution architecture they've implemented. As of the publication of this post, many B2B customers have adopted this solution to query terabytes of business data.
Building multi-tenant SaaS environments
Multi-tenant SaaS environments allow highly performant and cost-efficient applications by sharing underlying resources across tenants. While this is a benefit, it is important to implement cross-tenant isolation practices to meet security, compliance, and performance objectives: each tenant should only be able to access their authorized resources. Another consideration is the noisy neighbor problem, which occurs when one of the tenants monopolizes excessive shared capacity, causing performance issues for other tenants.
Authentication, authorization, and rate-limiting are critical components of a secure and resilient multi-tenant environment. Without these mechanisms in place, there is a risk of unauthorized access, resource-hogging, and denial-of-service attacks, which can compromise the security and stability of the system. Validating access early in the workflow can help eliminate the need for individual applications to implement similar heavy-lifting validation techniques.
SeatGeek had several criteria for addressing these concerns:
- They wanted to use their existing Auth0 instance.
- SeatGeek did not want to introduce any additional infrastructure management overhead; plus, they preferred to use serverless services to "stitch" managed components together (with minimal effort) to implement their business requirements.
- They wanted this solution to scale as seamlessly as possible as demand and adoption increase; concurrently, SeatGeek did not want to pay for idle or over-provisioned resources.
Exploring the solution
The SeatGeek team used a combination of Amazon Web Services (AWS) serverless services to address the aforementioned criteria and achieve the desired business outcome. Amazon API Gateway was used to serve APIs at the entry point to SeatGeek's cloud environment. API Gateway allowed SeatGeek to use a custom AWS Lambda authorizer for integration with Auth0 and to define throttling configurations for their tenants. (A sketch of how such per-tenant throttling could be provisioned follows.)
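The tenant onboarding described below creates an API key, attaches it to a tiered usage plan, and records the tenant-to-key mapping in DynamoDB; SeatGeek automates this with Terraform, and no code is published in the post. The following boto3 sketch shows one way such a step could look, with the usage plan ID, table name (tenant_api_keys), and Auth0 tenant ID as placeholders.

```python
# Hypothetical onboarding step: create a per-tenant API key, attach it to a tiered
# usage plan, and record the tenant-to-key association in DynamoDB.
# USAGE_PLAN_ID, the table name, and the tenant ID are placeholders, not SeatGeek's values.
import boto3

apigateway = boto3.client("apigateway")
table = boto3.resource("dynamodb").Table("tenant_api_keys")

USAGE_PLAN_ID = "a1b2c3"        # e.g. the "silver" plan, created separately
TENANT_ID = "auth0|1234567890"  # tenant identifier issued by Auth0 (placeholder)

# 1. Create a unique API key for the tenant.
api_key = apigateway.create_api_key(
    name=f"tenant-{TENANT_ID}",
    enabled=True,
)

# 2. Attach the key to one of the tiered usage plans so its rate limits apply.
apigateway.create_usage_plan_key(
    usagePlanId=USAGE_PLAN_ID,
    keyId=api_key["id"],
    keyType="API_KEY",
)

# 3. Store the tenant ID -> API key association so the Lambda authorizer can look it up.
table.put_item(
    Item={
        "tenant_id": TENANT_ID,  # partition key
        "api_key_id": api_key["id"],
        "api_key_value": api_key["value"],
        "usage_plan_id": USAGE_PLAN_ID,
    }
)
```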
Since all the services used in the solution are fully serverless, they do not require infrastructure management, are scaled up and down automatically on demand, and provide pay-as-you-go pricing. SeatGeek created a set of tiered usage plans in API Gateway (bronze, silver, and gold) to introduce rate-limiting. Each usage plan had a pre-defined request-per-second rate limit configuration. A unique API key was created by API Gateway for each tenant. Amazon DynamoDB was used to store the association of existing tenant IDs (managed by Auth0) to API keys (managed by API Gateway). This allowed SeatGeek to keep API key management transparent to its tenants. Each new tenant goes through an onboarding workflow. This is an automated process managed with Terraform. During new tenant onboarding, SeatGeek creates a new tenant ID in Auth0 and a new API key in API Gateway, and stores the association between them in DynamoDB. Each API key is also associated with one of the usage plans. Once onboarding completes, the new tenant can start invoking SeatGeek APIs (Figure 1). Figure 1. SeatGeek's fully serverless architecture. Step 1. The tenant authenticates with Auth0 using machine-to-machine authorization. Auth0 returns a JSON web token representing tenant authentication success. The token includes claims required for downstream authorization, such as tenant ID, expiration date, scopes, and signature. Step 2. The tenant sends a request to the SeatGeek API. The request includes the token obtained in Step 1 and application-specific parameters, for example, retrieving the last 12 months of booking data. Step 3. API Gateway extracts the token and passes it to the Lambda authorizer. Step 4. The Lambda authorizer retrieves the token validation keys from Auth0. The keys are cached in the authorizer, so this happens only once for each authorizer launch environment. This allows tokens to be validated locally without calling Auth0 each time, reducing latency and preventing an excessive number of requests to Auth0. Step 5. The Lambda authorizer performs token validation, checking the token’s structure, expiration date, signature, audience, and subject. If validation succeeds, the Lambda authorizer extracts the tenant ID from the token. Step 6. The Lambda authorizer uses the tenant ID extracted in Step 5 to retrieve the associated API key from DynamoDB and returns it to API Gateway. Step 7. API Gateway uses the API key to check whether the client making this particular request is above the rate-limit threshold, based on the usage plan associated with the API key. If the rate limit is exceeded, HTTP 429 (“Too Many Requests”) is returned to the client. Otherwise, the request is forwarded to the backend for further processing. Step 8. Optionally, the backend can perform additional application-specific token validations. Architecture benefits: The architecture implemented by SeatGeek provides several benefits. Centralized authorization: Using Auth0 with API Gateway and a Lambda authorizer allows for standardizing API authentication and removes the burden of individual applications having to implement authorization. Multiple levels of caching: Each Lambda authorizer launch environment caches token validation keys in memory to validate tokens locally. This reduces token validation time and helps to avoid excessive traffic to Auth0. In addition, API Gateway can be configured with up to 5 minutes of caching for the Lambda authorizer response, so the same token will not be revalidated in that timespan.
This reduces overall cost and load on Lambda authorizer and DynamoDB.Noisy neighbor prevention:Usage plans and rate limits prevent any particular tenant from monopolizing the shared resources and causing a negative performance impact for other tenants.Simple management and reduced total cost of ownership:Using AWS serverless services removed the infrastructure maintenance overhead and allowed SeatGeek to deliver business value faster. It also ensured they didn’t pay for over-provisioned capacity, and their environment could scale up and down automatically and on demand.ConclusionIn this blog, we explored how SeatGeek used AWS serverless services, such as API Gateway, Lambda, and DynamoDB, to integrate with external identity provider Auth0, and implemented per-tenant rate limits with multi-tiered usage plans. Using AWS serverless services allowed SeatGeek to avoid undifferentiated heavy-lifting of infrastructure management and accelerate efforts to build a solution addressing business requirements.Umesh KalaspurkarUmesh is a Sr Solutions Architect at AWS, and brings more than 20 years of experience in design and delivery of Digital Innovation and Transformation projects, across enterprises and startups. He is motivated by helping customers identify and overcome challenges. Outside of work, Umesh enjoys being a father, skiing, and traveling.Anderson ParraAnder Parra is Staff Software Engineer at SeatGeek and has over 10 years of industry experience in software engineering. He has worked with many different types of systems with diverse constraints and challenges, mainly based on JVM (Java and Scala). He is also experienced in high-scale distributed systems and functional programming. Outside work, Ander enjoys being a father, coffee, and traveling.Anton AleksandrovAnton Aleksandrov is a Principal Solutions Architect for AWS Serverless and Event-Driven architectures. Having over 20 years of hands-on software engineering and architecture experience, Anton is working with major ISV and SaaS customers to design highly scalable, innovative, and secure cloud solutions. Throughout his career Anton has held multiple leading roles architecting solutions for enterprise cloud applications, developer experience, mobile services, and the security space.João MikosJoão Mikos has over 20 years of experience working with development and infrastructure in distributed systems for companies sprawling four continents across various business verticals. Previously, he led infrastructure teams for highly regulated fintech companies and a Solutions Architecture team at AWS. Now a Director of Engineering at SeatGeek, he leads the Developer Acceleration team, responsible for enhancing the experience of customer and partner engineers developing with SeatGeek and the teams developing within SeatGeek.Samit KumbhaniSamit Kumbhani is a Sr. Solutions Architect at AWS based out of New York City area. He has 18+ years of experience in building applications and focuses on Analytics, Business Intelligence, and Databases. He enjoys working with customers to understand their challenges and solve them by creating innovative solutions using AWS services. Outside of work, Samit loves playing cricket, traveling, and spending time with his family and friends.CommentsView Comments
SeatGeek is a ticketing platform for web and mobile users, offering ticket purchase and reselling for sports games, concerts, and theatrical productions. In 2022, SeatGeek had an average of 47 million daily tickets available, and their mobile app was downloaded 33+ million times. Historically, SeatGeek used multiple identity and access tools internally. Applications were individually […]
Umesh Kalaspurkar
2023-08-14T07:12:41-07:00
[ "Amazon API Gateway", "Amazon DynamoDB", "Architecture", "AWS Lambda", "Serverless" ]
https://d2908q01vomqb2.c…eusable-etl2.png
https://aws.amazon.com/blogs/architecture/use-a-reusable-etl-framework-in-your-aws-lake-house-architecture/
Use a reusable ETL framework in your AWS lake house architecture
AWS Architecture BlogUse a reusable ETL framework in your AWS lake house architectureby Ashutosh Dubey and Prantik Gachhayat | on11 AUG 2023| inAmazon EventBridge,Amazon Managed Workflows for Apache Airflow (Amazon MWAA),Amazon Redshift,Architecture,AWS Glue,AWS Lambda|Permalink|Comments|ShareData lakes and lake house architectures have become an integral part of a data platform for any organization. However, you may face multiple challenges while developing a lake house platform and integrating with various source systems. In this blog, we will address these challenges and show how our framework can help mitigate these issues.Lake house architecture using AWSFigure 1 shows a typical lake house implementation in anAmazon Web Services(AWS) environment.Figure 1. Typical lake house implementation in AWSIn this diagram we have five layers. The number of layers and names can vary per environmental requirements, so checkrecommended data layersfor more details.Landing layer.This is where all source files are dropped in their original format.Raw layer.This is where all source files are converted and stored in a common parquet format.Stage layer.This is where we maintain a history of dimensional tables asSlowly Changing DimensionType 2 (SCD2).Apache Hudiis used for SCD2 in theAmazon Simple Storage Service(Amazon S3) bucket, and anAWS Gluejob is used to write to Hudi tables. AWS Glue is used to perform any extract, transform, and load (ETL) job to move, cleanse, validate, or transform files between any two layers. For details, seeusing the Hudi framework in AWS Glue.Presentation layer.This is where data is being cleansed, validated, and transformed, using an AWS Glue job, in accordance with business requirements.Data warehouse layer.Amazon Redshiftis being used as the data warehouse where the curated or cleansed data resides. We can either copy the data using an AWS Glue python shell job, or create a Spectrum table out of the Amazon S3 location.The data lake house architecture shows two types of data ingestion patterns,pushandpull. In thepull-basedingestion, services like AWS Glue orAWS Lambdaare used to pull data from sources like databases, APIs, or flat files into the data lake. In thepush-basedpattern, third-party sources can directly upload files into a landing Amazon S3 bucket in the data lake.Amazon Managed Workflows for Apache Airflow(Amazon MWAA) is used to orchestrate data pipelines that move data from the source systems into a data warehouse.Amazon EventBridgeis used to schedule the Airflow directed acyclic graph (DAG) data pipelines.Amazon RDS for PostgreSQLis used to store metadata for configuration of the data pipelines. A data lake architecture with these capabilities provides a scalable, reliable, and efficient solution for data pipelines.Data pipeline challengesMaintaining data pipelines in a large lake house environment can be quite challenging. There are a number of hurdles one faces regularly. Creating individual AWS Glue jobs for each task in every Airflow DAG can lead to hundreds of AWS Glue jobs to manage. Error handling and job restarting gets increasingly more complex as the number of pipelines grows. Developing a new data pipeline from scratch takes time, due to the boilerplate code involved. The production support team can find it challenging to monitor and support such a large number of data pipelines. Data platform monitoring becomes arduous at that scale. 
Ensuring overall maintainability, robustness, and governability of data pipelines in a lake house is a constant struggle.The benefits of a data pipeline frameworkHaving a data pipeline framework can significantly reduce the effort required to build data pipelines. This framework should be able to create a lake house environment that is easy to maintain and manage. It should also increase the reusability of code across data pipelines. Effective error handling and recovery mechanisms in the framework should make the data pipelines robust. Support for various data ingestion patterns like batch, micro batch, and streaming should make the framework versatile. A framework with such capabilities will help you build scalable, reliable, and flexible data pipelines, with reduced time and effort.Reusable ETL frameworkIn a metadata-driven reusable framework, we have pre-created templates for different purposes. Metadata tables are used to configure the data pipelines.Figure 2 shows the architecture of this framework:Figure 2. Reusable ETL framework architectureIn this framework, there are pre-created AWS Glue templates for different purposes, like copying files from SFTP to landing bucket, fetching rows from a database, converting file formats in landing to parquet in the raw layer, writing to Hudi tables, copying parquet files to Redshift tables, and more.These templates are stored in a template bucket, and details of all templates are maintained in a template config table with atemplate_idinAmazon Relational Database Service(Amazon RDS). Each data pipeline (Airflow DAG) is represented as aflow_idin the main job config table. Eachflow_idcan have one or more tasks, and each task refers to atemplate_id. This framework can support both the type of ingestions—pull-based(scheduled pipelines) andpush-based(initiated pipelines). The following steps show the detailed flow of the pipeline in Figure 2.To schedule a pipeline, the “Scheduled DAG Invoker Lambda” is scheduled in EventBridge, withflow_idof the pipeline as the parameter.The source drops files in a landing bucket.An event is initiated and calls the “Triggered DAG Invoker” Lambda. This Lambda function gets the file name from the event to call the Airflow API.A Lambda function queries an RDS metadata table with the parameter to get the DAG name.Both of the Lambda functions call the Airflow API to start the DAG.The Airflow webserver locates the DAG from the S3 location and passes it to the executor.The DAG is initiated.The DAG calls the functions in the common util python script with all required parameters.For any pipeline, the util script gets all the task details from the metadata table, along with the AWS Glue template name and location.For any database or API connectivity, the util function gets the secret credentials fromAWS Secrets Managerbased on thesecret_id.The AWS Glue template file from the S3 location starts the AWS Glue job using Boto3 API by passing the required parameters. Once the AWS Glue job completes successfully, it deletes the job.If the pipeline contains any Lambda calls, the util script calls the Lambda function as per the configuration parameter.If the AWS Glue job fails due to any error in Step #11, the script captures the error message and sends anAmazon Simple Notification Service(Amazon SNS) notification.For developing any new pipeline, the developer must identify the number of tasks that need to be created for the DAG. 
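Before continuing with the steps for developing a new pipeline, here is a hedged sketch of the util flow just described, in which one task's metadata is turned into a transient AWS Glue job through Boto3, with the job deleted on success and an Amazon SNS notification published on failure; the task field names, Glue version, and polling interval are assumptions for illustration rather than the framework's actual code.

import time
import boto3

glue = boto3.client("glue")
sns = boto3.client("sns")

def run_task(task: dict, sns_topic_arn: str) -> None:
    """task is one metadata row, e.g. {"task_id": "...", "template_s3_path": "s3://...",
    "glue_role_arn": "...", "arguments": {...}} — the field names are illustrative."""
    job_name = f"etl-{task['task_id']}-{int(time.time())}"
    # Create a transient Glue job from the reusable template script stored in Amazon S3.
    glue.create_job(
        Name=job_name,
        Role=task["glue_role_arn"],
        Command={"Name": "glueetl", "ScriptLocation": task["template_s3_path"], "PythonVersion": "3"},
        GlueVersion="4.0",
    )
    run_id = glue.start_job_run(JobName=job_name, Arguments=task.get("arguments", {}))["JobRunId"]
    # Poll until the run reaches a terminal state.
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "TIMEOUT", "STOPPED"):
            break
        time.sleep(30)
    if state == "SUCCEEDED":
        glue.delete_job(JobName=job_name)  # clean up the dynamically created job
    else:
        # Capture the failure and notify via Amazon SNS, as in Step 13 of the flow.
        sns.publish(
            TopicArn=sns_topic_arn,
            Subject=f"ETL task {task['task_id']} failed",
            Message=f"Glue job {job_name} ended in state {state}",
        )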
Identify which template can be used for which task, and insert configuration entries to the metadata tables accordingly. If there is no template available, create a new template to reuse later. Finally, create the Airflow DAG script and place it in the DAG location.ConclusionThe proposed framework leverages AWS native services to provide a scalable and cost-effective solution. It allows faster development due to reusable components. You can dynamically generate and delete AWS Glue jobs as needed. This framework enables jobs tracking by configuration tables, supports error handling, and provides email notification. You can create scheduled and event-driven data pipelines to ingest data from various sources in different formats. And you can tune the performance and cost of AWS Glue jobs, by updating configuration parameters without changing any code.A reusable framework is a great practice for any development project, as it improves time to market and standardizes development patterns in a team. This framework can be used in any AWS data lake or lake house environments with any number of data layers. This makes pipeline development faster, and error handing and support easier. You can enhance and customize even further to have more features like data reconciliation, micro-batch pipelines, and more.Further reading:Land data from databases to a data lake at scale using AWS Glue blueprintsCreating a source to Lakehouse data replication pipe using Apache Hudi, AWS Glue, AWS DMS, and Amazon RedshiftTemporal data lake architecture for benchmark and indices analyticsAshutosh DubeyAshutosh is a Global Technical leader and Solutions Architect at Amazon Web Services based out of New Jersey, USA. He has extensive experience specializing in the Data, Analytics, and Machine Learning field, and has helped Fortune 500 companies in their cloud journey to AWS.Prantik GachhayatPrantik is an Enterprise Architect at Infosys with 19+ years of experience in various technology fields and business domains. He has a proven track record helping large enterprises modernize digital platforms and delivering complex transformation programs. Prantik specializes in architecting modern data and analytics platforms in AWS. Prantik loves exploring new tech trends and enjoys cooking.CommentsView Comments
Data lakes and lake house architectures have become an integral part of a data platform for any organization. However, you may face multiple challenges while developing a lake house platform and integrating with various source systems. In this blog, we will address these challenges and show how our framework can help mitigate these issues. Lake […]
Ashutosh Dubey
2023-08-11T08:53:38-07:00
[ "Amazon EventBridge", "Amazon Managed Workflows for Apache Airflow (Amazon MWAA)", "Amazon Redshift", "Architecture", "AWS Glue", "AWS Lambda" ]
https://d2908q01vomqb2.c…ents-572x630.jpg
https://aws.amazon.com/blogs/architecture/how-thomson-reuters-monitors-and-tracks-aws-health-alerts-at-scale/
How Thomson Reuters monitors and tracks AWS Health alerts at scale
AWS Architecture BlogHow Thomson Reuters monitors and tracks AWS Health alerts at scaleby Srinivasa Shaik, Naveen Polamreddi, Russell Sprague, and Srikanth Athmaraman | on09 AUG 2023| inAmazon CloudWatch,Amazon EventBridge,Architecture,AWS Lambda,AWS Secrets Manager|Permalink|Comments|ShareThomson Reuters Corporationis a leading provider of business information services. The company’s products include highly specialized information-enabled software and tools for legal, tax, accounting and compliance professionals combined with the world’s most trusted global news service: Reuters.Thomson Reuters is committed to a cloud first strategy on AWS, with thousands of applications hosted on AWS that are critical to its customers, with a growing number of AWS accounts that are used by different business units to deploy the applications. Service Management in Thomson Reuters is a centralized team, who needs an efficient way to measure, monitor and track the health of AWS services across the AWS environment.AWS Healthprovides the required visibility to monitor the performance and availability of AWS services and scheduled changes or maintenance that may impact their applications.With approximately 16,000 AWS Health events received in 2022 alone due to the scale at which Thomson Reuters is operating on AWS, manually tracking AWS Health events is challenging. This necessitates a solution to provide centralized visibility of Health alerts across the organization, and an efficient way to track and monitor the Health events across the AWS accounts. Thomson Reuters requires retaining AWS Health event history for a minimum of 2 years to derive indicators affecting performance and availability of applications in the AWS environment and thereby ensuring high service levels to customers. Thomson Reuters utilizesServiceNowfor tracking IT operations andDatadogfor infrastructure monitoring which is integrated with AWS Health to measure and track all the events and estimate the health performance with key indicators. Before this solution, Thomson Reuters didn’t have an efficient way to track scheduled events, and no metrics to identify the applications impacted by these Health events.In this post, we will discuss how Thomson Reuters has implemented a solution to track and monitor AWS Health events at scale, automate notifications, and efficiently track AWS scheduled changes. This gives Thomson Reuters visibility into the health of AWS resources using Health events, and allows them to take proactive measures to minimize impact to their applications hosted on AWS.Solution overviewThomson Reuters leverages AWS Organizations to centrally govern their AWS environment. AWS Organization helps to centrally manage accounts and resources, optimize the cost, and simplify billing. The AWS environment in Thomson Reuters has a dedicated organizational management account to create Organizational Units (OUs), and policies to manage the organization’s member accounts. Thomson Reuters enabled organizational view within AWS Health, which once activated provides an aggregated view of AWS Health events across all their accounts (Figure 1).Figure 1. Architecture to track and monitor AWS Health eventsLet us walk through the architecture of this solution:Amazon CloudWatch SchedulerinvokesAWS Lambdaevery 10 minutes to fetch AWS Health API data from the Organization Management account.Lambda leverages execution role permissions to connect to the AWS Health API and send events toAmazon EventBridge. 
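A simplified sketch of these first two steps, in which a Lambda function pulls events from the AWS Health organizational view and republishes them to Amazon EventBridge, might look like the following; the custom event source, event bus name, and detail type are assumptions, and calling this API also requires a Business or Enterprise Support plan with the organizational view feature enabled.

import json
import boto3

# The AWS Health API is served from a global endpoint in us-east-1.
health = boto3.client("health", region_name="us-east-1")
events = boto3.client("events")

def handler(event, context):
    entries, token = [], None
    while True:
        kwargs = {"maxResults": 100}
        if token:
            kwargs["nextToken"] = token
        page = health.describe_events_for_organization(**kwargs)
        for health_event in page["events"]:
            entries.append({
                "Source": "custom.aws-health-poller",             # assumed custom source
                "DetailType": "AWS Health Organizational Event",  # assumed detail type
                "Detail": json.dumps(health_event, default=str),
                "EventBusName": "health-events-bus",              # hypothetical bus name
            })
        token = page.get("nextToken")
        if not token:
            break
    # PutEvents accepts at most 10 entries per call.
    for i in range(0, len(entries), 10):
        events.put_events(Entries=entries[i:i + 10])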
The loosely coupled architecture of Amazon EventBridge allows for storing and routing of the events to various targets based upon the AWS Health Event Type category. Each AWS Health event is matched against the EventBridge rules to identify the event category and route it to the target AWS Lambda functions that process specific AWS Health Event types. The AWS Health events are routed to ServiceNow and Datadog based on the AWS Health Event Type category. If the Health Event Type category is “Scheduled change” or “Issues”, then it is routed to ServiceNow. The event is stored in a DynamoDB table to track the AWS Health events beyond the 90-day history available in AWS Health. If the entity value of the affected AWS resource exists inside the Health event, then tags associated with that entity value are used to identify the application and resource owner to notify. One of the internal policies mandates that owners include AWS resource tags for every AWS resource provisioned. The DynamoDB table is updated with additional details captured based on the entity value. Events that are not of interest are excluded from tracking. A ServiceNow ticket is created containing the details of the AWS Health event and includes additional details regarding the application and resource owner that are captured in the DynamoDB table. The ServiceNow credentials to connect are stored securely in AWS Secrets Manager. The ServiceNow ticket details are also updated back in the DynamoDB table to correlate the AWS Health event with a ServiceNow ticket. If the Health Event Type category is “Account Notification”, then it is routed to Datadog. All account notifications, including public notifications, are routed to Datadog for tracking. Datadog monitors are created to help derive more meaningful information from the account notifications received from the AWS Health events. The AWS Health Event Type category “Account Notification” provides information about the administration or security of AWS accounts and services. These events are mostly informative, but some of them need urgent action, and tracking each of these events within Thomson Reuters incident management is substantial. Thomson Reuters has decided to route these events to Datadog, which is monitored by the Global Command Center from the centralized Service Management team. All other AWS Health Event types are tracked using ServiceNow. ServiceNow to track scheduled changes and issues: Thomson Reuters leverages ServiceNow for incident management and change management across the organization, including both AWS Cloud and on-premises applications. This allows Thomson Reuters to continue using the existing proven process to track scheduled changes in AWS through the ServiceNow change management process and AWS Health issues and investigations through ServiceNow incident management, notify relevant teams, and monitor until resolution. Any AWS service maintenance or issues reported through AWS Health are tracked in ServiceNow. One of the challenges while processing thousands of AWS Health events every month is also to identify and track events that have the potential to cause significant impact to the applications. Thomson Reuters decided to exclude events that are not relevant for Thomson Reuters hosted Regions, or specific AWS services. The process of identifying events to include is a continuous, iterative effort, relying on the data captured in DynamoDB tables and on the experiences of different teams.
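A hedged sketch of the ServiceNow-bound processing described above, persisting the event to DynamoDB, creating an incident through the ServiceNow Table API with credentials from AWS Secrets Manager, and writing the ticket reference back, could look like this; the table name, secret ID, instance URL, and attribute names are placeholders rather than Thomson Reuters' actual configuration, and the event shape follows the AWS Health API response republished by the poller.

import json
import os
import boto3
import urllib3

dynamodb = boto3.resource("dynamodb")
secrets = boto3.client("secretsmanager")
http = urllib3.PoolManager()

def handler(event, context):
    detail = event["detail"]  # the AWS Health event forwarded by EventBridge
    table = dynamodb.Table(os.environ.get("HEALTH_TABLE", "aws-health-events"))
    # Persist the event so history is retained beyond the 90 days kept by AWS Health.
    table.put_item(Item={"eventArn": detail["arn"], "detail": json.dumps(detail, default=str)})

    # ServiceNow credentials are kept in AWS Secrets Manager (secret ID is a placeholder).
    creds = json.loads(secrets.get_secret_value(SecretId="servicenow/api-credentials")["SecretString"])
    headers = urllib3.make_headers(basic_auth=f"{creds['username']}:{creds['password']}")
    headers["Content-Type"] = "application/json"

    # Create an incident via the ServiceNow Table API (instance URL is a placeholder).
    response = http.request(
        "POST",
        "https://example-instance.service-now.com/api/now/table/incident",
        headers=headers,
        body=json.dumps({
            "short_description": f"AWS Health: {detail.get('eventTypeCode')}",
            "description": json.dumps(detail, default=str),
        }),
    )
    ticket_id = json.loads(response.data)["result"]["sys_id"]
    # Write the ticket reference back so the Health event and incident stay correlated.
    table.update_item(
        Key={"eventArn": detail["arn"]},
        UpdateExpression="SET servicenow_sys_id = :s",
        ExpressionAttributeValues={":s": ticket_id},
    )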
AWS EventBridge simplifies the process of filtering out events by eliminating the need to develop a custom application.ServiceNow is used to create various dashboards which are important to Thomson Reuters leadership to view the health of the AWS environment in a glance, and detailed dashboards for individual application, business units and AWS Regions are also curated for specific requirements. This solution allows Thomson Reuters to capture metrics which helps to understand the scheduled changes that AWS performs and identify the underlying resources that are impacted in different AWS accounts. The ServiceNow incidents created from Health events are used to take real-time actions to mitigate any potential issues.Thomson Reuters has a business requirement to persist AWS Health event history for a minimum of 2 years, and a need for customized dashboards for leadership to view performance and availability metrics across applications. This necessitated the creation of dashboards in ServiceNow. Figures 2, 3, and 4 are examples of dashboards that are created to provide a comprehensive view of AWS Health events across the organization.Figure 2. ServiceNow dashboard with a consolidated view of AWS Health eventsFigure 3. ServiceNow dashboard with a consolidated view of AWS Health eventsFigure 4. ServiceNow dashboard showing AWS Health eventsDatadog for account notificationsThomson Reuters leverages Datadog as its strategic platform to observe, monitor, and track the infrastructure, applications and more. Health events with the category type Account Notification are forwarded to Datadog and are monitored by Thomson Reuters Global Command Center part of the Service Management. Account notifications are important to track as they contain information about administration or security of AWS accounts. Like ServiceNow, Datadog is also used to curate separate dashboards with unique Datadog monitors for monitoring and tracking these events (Figure 5). Currently, the Thomson Reuters Service Management team are the main consumers of these Datadog alerts, but in the future the strategy would be to route relevant and important notifications only to the concerned application team by ensuring a mandatory and robust tagging standards on the existing AWS accounts for all AWS resource types.Figure 5. Datadog dashboard for AWS Health event type account notificationWhat’s next?Thomson Reuters will continue to enhance the logic for identifying important Health events that require attention, reducing noise by filtering out unimportant ones. Thomson Reuters plan to develop a self-service subscription model, allowing application teams to opt into the Health events related to their applications.The next key focus will also be to look at automating actions for specific AWS Health scheduled events wherever possible, such as responding to maintenance with AWS System Manager Automation documents.ConclusionBy using this solution, Thomson Reuters can effectively monitor and track AWS Health events at scale using the preferred internal tools ServiceNow and Datadog. Integration with ServiceNow allowed Thomson Reuters to measure and track all the events and estimate the health performance with key indicators that can be generated from ServiceNow. This architecture provided an efficient way to track the AWS scheduled changes, capture metrics to understand the various schedule changes that AWS is doing and resources that are getting impacted in different AWS accounts. 
This solution provides actionable insights from the AWS Health events, allowing Thomson Reuters to take real-time actions to mitigate impacts to the applications and thus offer high Service levels to Thomson Reuters customers.Srinivasa ShaikSrinivasa Shaik is a Solutions Architect based in Boston. He works with enterprise customers to architect and design solutions for their business needs. His core areas of focus is containers, serverless, and machine learning. In his spare time, he enjoys spending time with his family, cooking, and traveling.Naveen PolamreddiNaveen Polamreddi is an architect in Platform Engineering, helping Thomson Reuters run production workloads in a safe, scalable cloud environment, and has good experience building enterprise-level cloud environments. Naveen loves to talk on technology and how it can help solve various problems, and spends his free time with family, and is a fitness enthusiast.Russell SpragueRussell Sprague is a Principal Technical Account Manager at AWS, and is passionate about assisting customers with their operational excellence and resiliency needs. Outside of work he loves porching with his wife, playing games with their kids, hiking with their dogs, and dressing up like a pirate.Srikanth AthmaramanSrikanth Athmaraman is a Lead Cloud Engineer in Thomson Reuters under Service Management, with 9 years of experience in the tech industry. He is enthusiastic about DevOps, system administration and leverages cloud technologies to design and build services that enhance customer experience. Srikanth is passionate about photography and loves to travel in his free time.CommentsView Comments
Thomson Reuters Corporation is a leading provider of business information services. The company’s products include highly specialized information-enabled software and tools for legal, tax, accounting and compliance professionals combined with the world’s most trusted global news service: Reuters. Thomson Reuters is committed to a cloud first strategy on AWS, with thousands of applications hosted on AWS […]
Srinivasa Shaik
2023-08-09T06:16:58-07:00
[ "Amazon CloudWatch", "Amazon EventBridge", "Architecture", "AWS Lambda", "AWS Secrets Manager" ]
https://d2908q01vomqb2.c…6-PM-977x630.png
https://aws.amazon.com/blogs/architecture/building-serverless-endless-aisle-retail-architectures-on-aws/
Build a serverless retail solution for endless aisle on AWS
AWS Architecture BlogBuild a serverless retail solution for endless aisle on AWSby Sandeep Mehta and Shashank Shrivastava | on07 AUG 2023| inAmazon API Gateway,Amazon CloudFront,Architecture,AWS Lambda|Permalink|Comments|ShareIn traditional business models, retailers handle order-fulfillment processes from start to finish—including inventory management, owning or leasing warehouses, and managing supply chains. But many retailers aren’t set up to carry additional inventory.The “endless aisle” business model is an alternative solution for lean retailers that are carrying enough in-store inventory while wanting to avoid revenue loss. Endless aisle is also known as drop-shipping, or fulfilling orders through automated integration with product partners. Such automation results in a customer’s ability to place an order on a tablet or kiosk when they cannot find a specific product of their choice on in-store shelves.Why is the endless aisle concept important for businesses and customers alike? It means that:Businesses no longer need to stock products more than shelf deep.End customers can easily place an order at the store and get it shipped directly to their home or place of choice.Let’s explore these concepts further.Solution overviewWhen customers are in-store and looking to order items that are not available on shelves, a store associate can scan the SKU code on a tablet. The kiosk experience is similar, where the customer can search for the item themselves by typing in its name.For example, if a customer visits a clothing store that only stocks the items on shelves and finds the store is out of a product in their size, preferred color, or both, the associate can scan the SKU and check whether the item is available to ship. The application then raises a request with a store’s product partner. The request returns the available products the associate can show to the customer, who can then choose to place an order. When the order is processed, it is directly fulfilled by the partner.Serverless endless aisle reference architectureFigure 1 illustrates how to architect a serverless endless aisle architecture for order processing.Figure 1. Building endless aisle architecture for order processingWebsite hosting and securityWe’ll host the endless aisle website onAmazon Simple Storage Service(Amazon S3) withAmazon CloudFrontfor better response time. CloudFront is a content delivery network (CDN) service built for high performance and security. CloudFront can reduce the latency to other AWS services by providing access at the edge and by caching the static content, while dynamic content is provided byAmazon API Gatewayintegration for our use case. A Web Application Firewall (WAF) is used after CloudFront for protection against internet threats, such as cross-site scripting (XSS) and SQL injection.Amazon Cognitois used for managing the application user pool, and provides security for who can then access the application.Solution walkthroughLet’s review the architecture steps in detail.Step 1.The store associate logs into the application with their username and password. When the associate or customer scans the bar code/SKU, the following process flow is executed.Step 2.The front-end application translates the SKU code into a product number and invokes the Get Item API.Step 3.An invoked getItemAWS Lambdafunction handles the API call.This architecture’s design pattern supports multiple partner integration and allows reusability of the code. 
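As an illustrative sketch only, anticipating the partner metadata table and transformation Lambda functions described next, the getItem function might look roughly like this; the table schema, attribute names, and request fields are assumptions, not the solution's actual code.

import json
import os
import boto3
import urllib3

dynamodb = boto3.resource("dynamodb")
lambda_client = boto3.client("lambda")
http = urllib3.PoolManager()

def handler(event, context):
    # Assumed proxy-integration request shape: product number in the path, partner in the query string.
    product_number = event["pathParameters"]["productNumber"]
    partner_id = event["queryStringParameters"]["partnerId"]

    # Fetch partner metadata (API endpoint, transformation Lambda) from DynamoDB.
    partners = dynamodb.Table(os.environ.get("PARTNERS_TABLE", "partners-table"))
    partner = partners.get_item(Key={"partner_id": partner_id})["Item"]

    # Transform the request into the partner-specific format via a dedicated Lambda function.
    transformed = lambda_client.invoke(
        FunctionName=partner["transformation_lambda"],
        Payload=json.dumps({"productNumber": product_number}),
    )["Payload"].read()

    # Call the partner API and return the available products to the front end.
    response = http.request(
        "POST",
        partner["item_api_url"],
        body=transformed,
        headers={"Content-Type": "application/json"},
    )
    return {"statusCode": 200, "body": response.data.decode("utf-8")}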
The design can be integrated with any partner with the ability to integrate using APIs, and the partner-specific transformation is built separately using Lambda functions.We’ll useAmazon DynamoDBfor storing partner information metadata—for example, partner_id, partner_name, partner APIs.Step 4.The getItem Lambda function fetches partner information from an DynamoDB table. It transforms the request body using a Transformation Lambda function.Step 5.The getItem Lambda function calls the right partner API. Upon receiving a request, the partner API returns the available product (based on SKU code) with details such as size, color, and any other variable parameter, along with images.It can also provide links to similar available products the customer may be interested in based on the selected product. This helps retail clients increase their revenue and offer products that aren’t available at a given time on their shelves.The customer then selects from the available products. Having selected the right product with specific details on parameters such as color, size, quantity, and more, they add them to the cart and begin the check-out process. The customer enters their shipping address and payment information to place an order.Step 6.The orders are pushed to anAmazon Simple Queue Service(Amazon SQS) queue named create-order-queue. Amazon SQS provides a straightforward and reliable way for customers to decouple and connect micro-services together using queues.Step 7.Amazon SQS ensures that there is no data loss and orders are processed from the queue by the orders API. The createOrder Lambda function pulls the messages from Amazon SQS and processes them.Step 8.The orders API body is then transformed into the message format expected by the partner API. This transformation can be done by a Lambda function defined in the configuration in the ‘partners-table’ DynamoDB table.Step 9.A partner API is called using the endpoint URL, which is obtained from the partners-table. When the order is placed, a confirmation will be returned by the partner API response. With this confirmation, order details are entered in another DynamoDB table called orders-table.Step 10.With DynamoDB stream, you can track any insert or update to the DynamoDB table.Step 11.A notifier Lambda function invokesAmazon Simple Email Service(Amazon SES) to notify the store about order activity.Step 12.The processed orders are integrated with the customer’s ERP application for the reconciliation process. This can be achieved byAmazon Eventbridgerule that invokes a dataSync Lambda function.PrerequisitesFor this walkthrough, you’ll need the following prerequisites:An AWS account with admin accessAWS Command Line Interface (AWS CLI). SeeGetting started with the AWS CLI.Node.js (16.x+) and npm. For more information, seeDownloading and installing Node.js and npm.aws-cdk (2.x+). SeeGetting started with the AWS CDK.The GitHubserverless-partner-integration-endless-aislerepository, cloned, and configured on your local machine.BuildLocally install CDK library:npm install -g aws-cdkBuild an Infrastructure package to create deployable assets, which will be used in CloudFormation template.cd serverless-partner-integration-endless-aisle && sh build.shSynthesize CloudFormation templateTo see the CloudFormation template generated by the CDK, execute the below steps.cd serveless-partner-integration-endless-aisle/infrastructurecdk bootstrap && cdk synthCheck the output files in the “cdk.out” directory. 
AWS CloudFormation template is created for deployment in your AWS account.DeployUse CDK to deploy/redeploy your stack to an AWS Account.Set store email address for notifications. If a store wants to get updates about customer orders, they can set STORE_EMAIL value with store email. You will receive a verification email in this account, after which SES can send you order updates.export STORE_EMAIL=”dummytest@someemail.com” - Put your email here.Set up AWS credentials with the information foundin this developer guide.Now run:cdk deployTestingAfter the deployment, CDK will output Amazon Cloudfront URL to use for testing.If you have provided STORE_EMAIL address during the set up, then approve the email link received from Amazon SES in your inbox. This will allow order notifications to your inbox.Create a sample user by using the following command, that you can use to login to the website.aws cognito-idp admin-create-user --user-pool-id <REACT_APP_USER_POOL_ID> --username <UserName> --user-attributes Name="email",Value="<USER_EMAIL>" Name="email_verified",Value=trueThe user will receive password in their email.Open CloudFront URL in a web browser. Login to the website with the username and password. It will ask you to reset your password.Explore different features such as Partner Lookup, Product search, Placing an order, and Order Lookup.Cleaning upTo avoid incurring future charges, delete the resources, delete the cloud formation stack when not needed.The following command will delete the infrastructure and website stack created in your AWS account:cdk destroyConclusionIn this blog, we demonstrated how to build an in-store digital channel for retail customers. You can now build your endless aisle application using the architecture described in this blog and integrate with your partners, orreach outto accelerate your retail business.Further readingServerless on AWSBuild a Serverless Web ApplicationServerless Architecture Design ExamplesAWS retail case studiesSandeep MehtaSandeep is a Senior Solutions Architect and part of Analytics TFC at AWS. Sandeep has passion to help customers design modern cloud architectures and recommend the right services for their requirements. He understands business use cases and translates them to secured, scalable, and resilient IT solutions. His focus is to build serverless architecture for solving customer pain points.Shashank ShrivastavaShashank Shrivastava is a Senior Cloud Application Architect and Serverless TFC member at AWS. He is passionate about helping customers and developers build modern applications on serverless architecture. As a pragmatic developer and blogger, he promotes community-driven learning and sharing of technology. His interests are software architecture, developer tools, and serverless computing.CommentsView Comments
In traditional business models, retailers handle order-fulfillment processes from start to finish—including inventory management, owning or leasing warehouses, and managing supply chains. But many retailers aren’t set up to carry additional inventory. The “endless aisle” business model is an alternative solution for lean retailers that are carrying enough in-store inventory while wanting to avoid revenue […]
Sandeep Mehta
2023-08-07T08:55:17-07:00
[ "Amazon API Gateway", "Amazon CloudFront", "Architecture", "AWS Lambda" ]
https://d2908q01vomqb2.c…nt-2-535x630.jpg
https://aws.amazon.com/blogs/architecture/aws-cloud-service-considerations-for-designing-multi-tenant-saas-solutions/
AWS Cloud service considerations when modernizing account-per-tenant solutions
AWS Architecture BlogAWS Cloud service considerations when modernizing account-per-tenant solutionsby Dennis Greene, Ignacio Fuentes, and Greg Pierce | on04 AUG 2023| inAmazon EC2,Amazon Elastic Block Store (Amazon EBS),Amazon Elastic File System (EFS),Amazon Elastic Kubernetes Service,Amazon RDS,Amazon VPC,Architecture|Permalink|Comments|ShareAn increasing number of software as a service (SaaS) providers are modernizing their architectures to utilize resources more efficiently and reduce operational costs. There are multiple strategies that can be used when refining your multi-tenant architecture. This blog will look at a specific scenario where SaaS providers move from an account-per-tenant to anAmazon Elastic Kubernetes Service(Amazon EKS) environment, taking advantage of some of Amazon EKS constructs to achieve better cost efficiencies and scaling strategies that align with multi-tenant workloads.Siloed accounts vs siloed Kubernetes namespacesIn SaaS environments, there aremultiple strategies that can be used to deploy tenants. Some of these environments share infrastructure and some do not. We refer to these models as pooled (shared) and siloed (dedicated). In this post, we examine two variations of the siloed model.Let’s consider a SaaS product that needs to support many customers, each with their own independent application, such as a web application. Using a siloed account-per-tenant model (Figure 1), a SaaS provider will utilize a dedicated AWS account to host each tenant’s workloads.To contain their respective workloads, each tenant would have their ownAmazon Elastic Compute Cloud(Amazon EC2) instances organized within anAuto Scaling group. Access to the applications running in these EC2 instances will be via anApplication Load Balancer. Each tenant is allocated their own database environment usingAmazon Relational Database Service(Amazon RDS). The website’s storage (consisting of PHP, JavaScript, CSS, and HTML files) is provided byAmazon Elastic Block Storevolumes attached to the EC2 instances. The SaaS provider has acontrol plane AWS accountused to create and modify these tenant-specific accounts. The account-per-tenant model makes each account the unit of scale and isolation.Figure 1. Single-tenant configurationThe account-per-tenant model makes each account the unit of scale and isolation. Now, let’s consider what would be required to transition this environment to a Siloed Namespace-Per-Tenant model where a SaaS provider could use containerization to package each website and a container orchestrator to deploy the websites across shared compute nodes (EC2 instances). Kubernetes can be employed as a container orchestrator, and a website would then be represented by a Kubernetes deployment and its associated pods. A Kubernetes namespace would serve as the logical encapsulation of the tenant-specific resources, as each tenant would be mapped to one Kubernetes namespace. The Kubernetes HorizontalPodAutoscaler can be utilized for autoscaling purposes, dynamically adjusting the number of replicas in the deployment on a given namespace based on workload demands.When additional compute resources are required, tools such as the Cluster Autoscaler or Karpenter can dynamically add more EC2 instances to the shared Kubernetes Cluster. An Application Load Balancer can be reused by multiple tenants to route traffic to the appropriate pods. For Amazon RDS, SaaS providers can use tenant-specific database schemas to separate tenant data. 
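To make the namespace-per-tenant model concrete, the following is a minimal sketch using the official Kubernetes Python client to create a tenant namespace with a resource quota during onboarding; the names, labels, and quota values are illustrative assumptions, and cluster access is assumed to be configured already (for example, with aws eks update-kubeconfig).

from kubernetes import client, config

def onboard_tenant(tenant_id: str) -> None:
    """Create a namespace and a resource quota for one tenant (illustrative values only)."""
    config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
    core = client.CoreV1Api()

    namespace = f"tenant-{tenant_id}"
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=namespace, labels={"tenant": tenant_id}))
    )
    # Cap how much shared cluster capacity this tenant's pods can consume.
    core.create_namespaced_resource_quota(
        namespace,
        client.V1ResourceQuota(
            metadata=client.V1ObjectMeta(name=f"{namespace}-quota"),
            spec=client.V1ResourceQuotaSpec(hard={
                "requests.cpu": "2", "requests.memory": "4Gi",
                "limits.cpu": "4", "limits.memory": "8Gi",
            }),
        ),
    )

A per-tenant deployment and HorizontalPodAutoscaler would then be created inside this namespace, keeping the namespace as the unit of isolation while the underlying EC2 instances remain shared.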
For static data,Amazon Elastic File System(Amazon EFS) and tenant-specific directories can be employed. The SaaS provider would still have a control plane AWS account that interacts with the Kubernetes and AWS APIs to create and update tenant-specific resources.This transition to Kubernetes using Amazon EKS and other managed services offers numerous advantages. It enables efficient resource utilization by leveraging the Amazon EKS scaling model to reduce costs and better align tenant consumption with tenant activity (Figure 2).Figure 2. Multi-tenant configurationAmazon EKS cluster sizing and customer segmentation considerations in multi-tenancy designsA high concentration of SaaS tenants hosted within the same system results in a large “blast radius.” This means a failure within the system has the potential to impact all resident tenants. This situation can lead to downtime for multiple tenants at once. To address this problem, SaaS providers should consider partitioning their customers amongst multiple AWS accounts or EKS clusters, each with their own deployments of this multi-tenant architecture. The number of tenants that can be present in a single cluster is a determination that can only be made by the SaaS provider after profiling the consumption activity of your tenants. Compare the shared risks of a subset of customers with the efficiency benefits of shared consumption of resources.Amazon EKS securitySaaS providers should evaluate whether it’s appropriate for them to make use of containers as aWorkload Isolation Boundary. This is of particular importance in multi-tenant Kubernetes architectures, given that containers running on a single EC2 instance share the underlying Linux kernel. Security vulnerabilities place this shared resource (the EC2 instance) at risk from attack vectors from the host Linux instance. Risk is elevated when any container running in a Kubernetes Pod cluster initiates untrusted code. This risk is heightened if SaaS providers permit tenants to “bring their code”.Kubernetes is a single-tenant orchestrator, but with a multi-tenant approach to SaaS architectures a single instance of the Amazon EKS control plane will be shared among all the workloads running within a cluster. Amazon EKS considers the cluster as the hard isolation security boundary. Every Amazon EKS managed Kubernetes cluster is isolated in a dedicated single-tenantAmazon Virtual Private Cloud. At present,hard multi-tenancycan only be implemented by provisioning a unique cluster for each tenant.Consider howAWS Fargatecould be used to address security needs. Also, explore how you can use Amazon EKS constructs to achieve tenant isolation. This includes applying policies to limit cross namespace access and associate IAM roles for services accounts with my namespaces to scope access to other tenant infrastructure.Amazon EFS considerationsA SaaS provider may consider Amazon EFS as the storage solution for the static content of the multiple tenants. This provides them with a straightforward, serverless, and elastic file system. Directories may be used to separate the content for each tenant.While this approach of creating tenant-specific directories in Amazon EFS provides many benefits, there may be challenges harvesting per-tenant utilization and performance metrics. This can result in operational challenges for providers that need to granularly meter per-tenant usage of resources. Consequently, noisy neighbors will be difficult to identify and remediate. 
To resolve this, SaaS providers should consider building a custom solution to monitor the individual tenants in the multi-tenant file system by leveraging storage and throughput/IOPS metrics.Amazon RDS considerationsMulti-tenant workloads, where data for multiple customers or end users is consolidated in the same Amazon RDS database cluster, can present operational challenges regarding per-tenant observability. Both MySQL Community Edition and open-source PostgreSQL have limited ability to provide per-tenant observability and resource governance. AWS customers operating multi-tenant workloads often use a combination of ‘database’ or ‘schema’ and ‘database user’ accounts as substitutes. AWS customers should use alternate mechanisms to establish a mapping between a tenant and these substitutes. This will give you the ability to process raw observability data from the database engine externally. You can then map these substitutes back to tenants, and distinguish tenants in the observability data.ConclusionIn this blog, we’ve shown what to consider when moving to a multi-tenancy SaaS solution in the AWS Cloud, how to optimize your cloud-based SaaS design, and some challenges and remediations. Invest effort early in your SaaS design strategy to explore your customer requirements for tenancy. Work backwards from your SaaS tenants end goals to determine: the level of computing performance and cyber security features required, and how the SaaS provider monitors and operates the platform with the target tenancy configuration. Your respective AWS account team is highly qualified to advise on these design decisions. Take advantage of reviewing and improving your design using theAWS Well-Architected Framework, specifically theSaaS Lens. The tenancy design process should be followed by extensive prototyping to validate functionality before production rollout.Related informationBuilding a Multi-Tenant SaaS Solution Using Amazon EKSMulti-tenant design considerations for Amazon EKS clustersBuilding a Multi-Tenant SaaS Solution Using AWS Serverless ServicesRe-defining multi-tenancyDennis GreeneDennis is an AWS Customer Solutions Manager aligned to the Americas ISV Segment. He is an accomplished technology leader with a 25+ year track record of delivering technology transformation programs spanning both application and infrastructure domains. His industry experience includes financial services, pharmaceuticals, and software. Dennis currently serves as a trusted CxO advisor for complex cloud migration and modernization programs. He has a particular focus areas in cloud architecture, journey planning, business value, and sustainability.Ignacio FuentesIgnacio is a Sr. Solutions Architect at AWS, specializing in assisting ISV customers with cloud and application development. With over 10 years of experience and a background in software engineering, he approaches each project with an innovative and pragmatic mindset. He is an advocate for technology as a means to address real-world challenges. He is dedicated to leveraging his expertise to create positive impacts in the tech industry and beyond.Greg PierceGreg is a Sr Manager, SA, at AWS, with over 20 years of management and software development experience designing, developing and deploying mission-critical applications. He has strong analytical, problem-solving, planning, and management skills in market research, feasibility studies, customer and public relations, technical management, and project budgeting. He's an AWS Certified Professional. 
Greg's specialties include C#, Python, TensorFlow, Unity, OpenGL, DirectX, Java, and Objective-C.
An increasing number of software as a service (SaaS) providers are modernizing their architectures to utilize resources more efficiently and reduce operational costs. There are multiple strategies that can be used when refining your multi-tenant architecture. This blog will look at a specific scenario where SaaS providers move from an account-per-tenant to an Amazon Elastic […]
Dennis Greene
2023-08-04T06:48:49-07:00
[ "Amazon EC2", "Amazon Elastic Block Store (Amazon EBS)", "Amazon Elastic File System (EFS)", "Amazon Elastic Kubernetes Service", "Amazon RDS", "Amazon VPC", "Architecture" ]
https://d2908q01vomqb2.c…itect_REVIEW.jpg
https://aws.amazon.com/blogs/architecture/lets-architect-resiliency-in-architectures/
Let’s Architect! Resiliency in architectures
AWS Architecture BlogLet’s Architect! Resiliency in architecturesby Luca Mezzalira, Federica Ciuffo, Vittorio Denti, and Zamira Jaupaj | on02 AUG 2023| inArchitecture,AWS Well-Architected,Thought Leadership|Permalink|Comments|ShareWhat is “resiliency”, and why does it matter? When we discussed this topic inan early 2022 edition ofLet’s Architect!, we referenced theAWS Well-Architected Framework, which defines resilience as having “the capability to recover when stressed by load, accidental or intentional attacks, and failure of any part in the workload’s components.” Businesses rely heavily on the availability and performance of their digital services. Resilience has emerged as critical for any efficiently architected system, which is why it is a fundamental role in ensuring the reliability and availability of workloads hosted on the AWS Cloud platform.In this newer edition ofLet’s Architect!, we share some best practices for putting together resilient architectures, focusing on providing continuous service and avoiding disruptions. Ensuring uninterrupted operations is likely a primary objective when it comes to building a resilient architecture.Understand resiliency patterns and trade-offs to architect efficiently in the cloudIn thisAWS Architecture Blogpost, the authors introduce five resilience patterns. Each of these patterns comes with specific strengths and trade-offs, allowing architects to personalize their resilience strategies according to the unique requirements of their applications and business needs. By understanding these patterns and their implications, organizations can design resilient cloud architectures that deliver high availability and efficient recovery from potential disruptions.Take me to this Architecture Blog post!Resilience patterns and tradeoffsTimeouts, retries, and backoff with jitterMarc Broker discusses the inevitability of failures and the importance of designing systems to withstand them. He highlights three essential tools for building resilience: timeouts, retries, and backoff. By embracing these three techniques, we can create robust systems that maintain high availability in the face of failures. Timeouts, backoff, and jitter are fundamental to spread the traffic coming from clients and avoid overloading your systems. Building resilience is a fundamental aspect of ensuring the reliability and performance of AWS services in the ever-changing and dynamic technological landscape.Take me to the Amazon Builders’ Library!The Amazon Builder’s Library is a collection of technical resources produced by engineers at AmazonPrepare & Protect Your Applications From Disruption With AWS Resilience HubTheAWS Resilience Hubnot only protects businesses from potential downtime risks but also helps them build a robust foundation for their applications, ensuring uninterrupted service delivery to customers and users.In this AWS Online Tech Talk, led by the Principal Product Manager of AWS Resilience Hub, the importance of a resilience hub to protect mission-critical applications from downtime risks is emphasized. The AWS Resilience Hub is showcased as a centralized platform to define, validate, and track application resilience. 
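As a small illustration of the timeouts, retries, and backoff with jitter techniques highlighted in the Amazon Builders' Library article above, a generic retry helper might look like the following sketch; the attempt count, base delay, and cap are arbitrary example values.

import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.2, max_delay=5.0, timeout=2.0):
    """Invoke operation(timeout=...) with capped exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            # Always bound each individual call with a timeout.
            return operation(timeout=timeout)
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Full jitter: sleep a random amount up to the capped exponential backoff.
            backoff = min(max_delay, base_delay * (2 ** (attempt - 1)))
            time.sleep(random.uniform(0, backoff))

# Example usage with a hypothetical HTTP call:
# call_with_retries(lambda timeout: requests.get("https://example.com/health", timeout=timeout))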
The talk includes strategies to avoid disruptions caused by software, infrastructure, or operational issues, plus there’s also a demo demonstrating how to apply these techniques effectively.If you are interested in delving deeper into the services discussed in the session,AWS Resilience Hubis a valuable resource for monitoring and implementing resilient architectures.Take me to this AWS Online Tech Talk!AWS Resilience Hub recommendationsData resiliency design patterns with AWSIn this re:Invent 2022 session, data resiliency, why it matters to customers, and how you can incorporate it into your application architecture is discussed in depth. This session kicks off with the comprehensive overview of data resiliency, breaking down its core components and illustrating its critical role in modern application development. It, then, covers application data resiliency and protection designs, plus extending from the native data resiliency capabilities of AWS storage through DR solutions using AWS Elastic Disaster Recovery.Take me to this re:Invent 2022 video!Asynchronous cross-region replicationSee you next time!Thanks for joining our discussion on architecture resiliency! See you in two weeks when we’ll talk about security on AWS.To find all the blogs from this series, visit theLet’s Architect!list of content on theAWS Architecture Blog.TAGS:Architecture,AWS Well-Architected Framework,jitter,Let's Architect,Resilience,Resilience Hub,retries and backoff,timeoutsLuca MezzaliraLuca is Principal Solutions Architect based in London. He has authored several books and is an international speaker. He lent his expertise predominantly in the solution architecture field. Luca has gained accolades for revolutionizing the scalability of front-end architectures with micro-frontends, from increasing the efficiency of workflows, to delivering quality in products.Federica CiuffoFederica is a Solutions Architect at Amazon Web Services. She is specialized in container services and is passionate about building infrastructure with code. Outside of the office, she enjoys reading, drawing, and spending time with her friends, preferably in restaurants trying out new dishes from different cuisines.Vittorio DentiVittorio Denti is a Machine Learning Engineer at Amazon based in London. After completing his M.Sc. in Computer Science and Engineering at Politecnico di Milano (Milan) and the KTH Royal Institute of Technology (Stockholm), he joined AWS. Vittorio has a background in distributed systems and machine learning. He's especially passionate about software engineering and the latest innovations in machine learning science.Zamira JaupajZamira is an Enterprise Solutions Architect based in the Netherlands. She is highly passionate IT professional with over 10 years of multi-national experience in designing and implementing critical and complex solutions with containers, serverless, and data analytics for small and enterprise companies.CommentsView Comments
What is “resiliency”, and why does it matter? When we discussed this topic in an early 2022 edition of Let’s Architect!, we referenced the AWS Well-Architected Framework, which defines resilience as having “the capability to recover when stressed by load, accidental or intentional attacks, and failure of any part in the workload’s components.” Businesses rely […]
Luca Mezzalira
2023-08-02T06:12:51-07:00
[ "Architecture", "AWS Well-Architected", "Thought Leadership" ]
https://d2908q01vomqb2.c…-AM-1260x512.png
https://aws.amazon.com/blogs/architecture/improving-medical-imaging-workflows-with-aws-healthimaging-and-sagemaker/
Improving medical imaging workflows with AWS HealthImaging and SageMaker
AWS Architecture BlogImproving medical imaging workflows with AWS HealthImaging and SageMakerby Sukhomoy Basak and Hassan Mousaid | on31 JUL 2023| inAmazon SageMaker,Amazon Simple Queue Service (SQS),Amazon Simple Storage Service (S3),Announcements,Artificial Intelligence|Permalink|Comments|ShareMedical imaging plays a critical role in patient diagnosis and treatment planning in healthcare. However, healthcare providers face several challenges when it comes to managing, storing, and analyzing medical images. The process can be time-consuming, error-prone, and costly.There’s also a radiologist shortage across regions and healthcare systems, while demand for this specialty increases due to an aging population, advances in imaging technology, and the growing importance of diagnostic imaging in healthcare.As the demand for imaging studies continues to rise, the limited number of available radiologists results in delays in available appointments and timely diagnoses. And while technology enables healthcare delivery improvements for clinicians and patients, hospitals seek additional tools to solve their most pressing challenges, including:Professional burnout due to an increasing demand for imaging and diagnostic servicesLabor-intensive tasks, such as volume measurement or structural segmentation of imagesIncreasing expectations from patients expecting high-quality healthcare experiences that match retail and technology in terms of convenience, ease, and personalizationTo improve clinician and patient experiences, run your picture archiving and communication system (PACS) with an artificial intelligence (AI)-enabled diagnostic imaging cloud solution to securely gain critical insights and improve access to care.AI helps reduce the radiologist burnout rate through automation. For example, AI saves radiologists time on chest X-ray interpretation. It is also a powerful tool to identify areas that need closer inspection, and helps capture secondary findings that weren’t initially identified. The advancement of interoperability and analytics gives radiologists a 360-degree, longitudinal view of patient health records to provide better healthcare at potentially lower costs.AWS offers services to address these challenges. This blog post discusses AWS HealthImaging (AWS AHI) and Amazon SageMaker, and how they are used together to improve healthcare providers’ medical imaging workflows. This ultimately accelerates imaging diagnostics and increases radiology productivity. AWS AHI enables developers to deliver performance, security, and scale to cloud-native medical imaging applications. It allows ingestion of Digital Imaging and Communications in Medicine (DICOM) images. Amazon SageMaker provides an end-to-end solution for AI and machine learning.Let’s explore an example use case involving X-rays after an auto accident. In this diagnostic medical imaging workflow, a patient is in the emergency room. From there:The patient undergoes an X-ray to check for fractures.The scanned images from the acquisition device flow to the PACS system.The radiologist reviews the information gathered from this procedure and authors the report.The patient workflow continues as the reports are made available to the referring physician.Next-generation imaging solutions and workflowsHealthcare providers can use AWS AHI and Amazon SageMaker together to enable next-generation imaging solutions and improve medical imaging workflows.
The following architecture illustrates this example.Figure 1: X-ray images are sent to AWS HealthImaging and an Amazon SageMaker endpoint extracts insights.Let’s review the architecture and the key components:1. Imaging Scanner: Captures the images from a patient’s body. Depending on the modality, this can be an X-ray detector; a series of detectors in a CT scanner; a magnetic field and radio frequency coils in an MRI scanner; or an ultrasound transducer. This example uses an X-ray device.AWS IoT Greengrass: Edge runtime and cloud service configured with DICOM C-Store SCP that receives the images and sends them to Amazon Simple Storage Service (Amazon S3). The images along with the related metadata are sent to Amazon S3 and Amazon Simple Queue Service (Amazon SQS) respectively, which triggers the workflow.2. Amazon SQS message queue: Consumes events from the S3 bucket and triggers an AWS Step Functions workflow orchestration.3. AWS Step Functions runs the transform and import jobs to further process and import the images into the AWS AHI data store instance.4. The final diagnostic image—along with any relevant patient information and metadata—is stored in the AWS AHI datastore. This allows for efficient imaging data retrieval and management. It also enables medical imaging data access with sub-second image retrieval latencies at scale, powered by cloud-native APIs and applications from AWS partners.5. Radiologists responsible for ground truth for ML images perform medical image annotations using Amazon SageMaker Ground Truth, a fully managed data labeling service that supports built-in or custom data labeling workflows. They visualize and label DICOM images using a custom data labeling workflow. They also leverage tools like 3D Slicer for interactive medical image annotations.6. Data scientists build or leverage built-in deep learning models using the annotated images on Amazon SageMaker. SageMaker offers a range of deployment options that vary from low latency and high throughput to long-running inference jobs. These options include considerations for batch, real-time, or near real-time inference (a minimal invocation sketch follows below).7. Healthcare providers use AWS AHI and Amazon SageMaker to run an AI-assisted detection and interpretation workflow. This workflow is used to identify hard-to-see fractures, dislocations, or soft tissue injuries to allow surgeons and radiologists to be more confident in their treatment choices.8. Finally, the image stored in AWS AHI is displayed on a monitor or other visual output device where it can be analyzed and interpreted by a radiologist or other medical professional.The Open Health Imaging Foundation (OHIF) Viewer is an open source, web-based, medical imaging platform. It provides a core framework for building complex imaging applications.Radical Imaging or Arterys are AWS partners that provide OHIF-based medical imaging viewers.Each of these components plays a critical role in the overall performance and accuracy of the medical imaging system as well as ongoing research and development focused on improving diagnostic outcomes and patient care. AWS AHI uses efficient metadata encoding, lossless compression, and progressive resolution data access to provide industry-leading performance for loading images.
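To make the inference steps (6 and 7) above more tangible, the short sketch below shows one way an application could call a deployed SageMaker real-time endpoint with image bytes and read back a JSON result. The endpoint name and content type are placeholders, not values from this architecture, and the payload and response format depend entirely on how your model container was built.

```python
import json
import boto3

# Hypothetical endpoint name -- substitute the endpoint deployed from your trained model.
ENDPOINT_NAME = "fracture-detection-endpoint"

runtime = boto3.client("sagemaker-runtime")

def detect_findings(image_bytes: bytes) -> dict:
    """Send raw image bytes to a SageMaker real-time endpoint and parse a JSON response.

    The content type and response schema are assumptions; they must match the
    serving container behind the endpoint.
    """
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/x-image",
        Body=image_bytes,
    )
    return json.loads(response["Body"].read())

# Example usage (hypothetical): score a frame extracted from a DICOM study.
# findings = detect_findings(open("xray_frame.png", "rb").read())
```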
Efficient metadata encoding enables image viewers and AI algorithms to understand the contents of a DICOM study without having to load the image data.SecurityThe AWS shared responsibility model applies to data protection in AWS AHI and Amazon SageMaker.Amazon SageMaker is HIPAA-eligible and can operate with data containing Protected Health Information (PHI). Encryption of data in transit is provided by SSL/TLS and is used when communicating both with the front-end interface of Amazon SageMaker (to the Notebook) and whenever Amazon SageMaker interacts with any other AWS services.AWS AHI is also HIPAA-eligible service and provides access control at the metadata level, ensuring that each user and application can only see the images and metadata fields that are required based upon their role. This prevents the proliferation of Patient PHI. All access to AWS AHI APIs is logged in detail inAWS CloudTrail.Both of these services leverageAWS Key Management service(AWS KMS) to satisfy the requirement that PHI data is encrypted at rest.ConclusionIn this post, we reviewed a common use case for early detection and treatment of conditions, resulting in better patient outcomes. We also covered an architecture that can transform the radiology field by leveraging the power of technology to improve accuracy, efficiency, and accessibility of medical imaging.Further readingAWS HealthImaging PartnersAWS features AWS HealthImaging at RSNA22Annotate DICOM images and build an ML model using the MONAI framework on Amazon SageMakerMLOps deployment best practices for real-time inference model serving endpoints with Amazon SageMakerTAGS:artificial intelligence,healthcareSukhomoy BasakSukhomoy Basak is a Solutions Architect at Amazon Web Services, with a passion for Data and Analytics solutions. Sukhomoy works with enterprise customers to help them architect, build, and scale applications to achieve their business outcomes.Hassan MousaidHassan Mousaid, PhD is a Principal Solutions Architect at Amazon Web Services supporting Healthcare and Life Sciences (HCLS) customers to accelerate the process of bringing ideas to market using Amazon's mechanisms for innovation. He has 20 years experience in enterprise and cloud transformation, healthcare IT, and medical imaging.CommentsView Comments
Medical imaging plays a critical role in patient diagnosis and treatment planning in healthcare. However, healthcare providers face several challenges when it comes to managing, storing, and analyzing medical images. The process can be time-consuming, error-prone, and costly. There’s also a radiologist shortage across regions and healthcare systems, making the demand for this specialty increases […]
Sukhomoy Basak
2023-07-31T08:56:54-07:00
[ "Amazon SageMaker", "Amazon Simple Queue Service (SQS)", "Amazon Simple Storage Service (S3)", "Announcements", "Artificial Intelligence" ]
https://d2908q01vomqb2.c…lity-858x630.png
https://aws.amazon.com/blogs/architecture/content-repository-for-unstructured-data-with-multilingual-semantic-search-part-2/
Content Repository for Unstructured Data with Multilingual Semantic Search: Part 2
AWS Architecture BlogContent Repository for Unstructured Data with Multilingual Semantic Search: Part 2by Patrik Nagel and Sid Singh | on26 JUL 2023| inAmazon OpenSearch Service,Amazon SageMaker,Amazon Simple Storage Service (S3),Architecture,AWS Lambda,Technical How-to|Permalink|Comments|ShareLeveraging vast unstructured data poses challenges, particularly for global businesses needing cross-language data search. In Part 1 of this blog series, we built the architectural foundation for the content repository. The key component of Part 1 was the dynamic access control-based logic with a web UI to upload documents.In Part 2, we extend the content repository with multilingual semantic search capabilities while maintaining the access control logic from Part 1. This allows users to ingest documents into the content repository across multiple languages and then run search queries to get references to semantically similar documents.Solution overviewBuilding on the architectural foundation from Part 1, we introduce four new building blocks to extend the search functionality.Optical character recognition (OCR) workflow:To automatically identify, understand, and extract text from ingested documents, we use Amazon Textract and a sample review dataset of .png format documents (Figure 1). We use Amazon Textract synchronous application programming interfaces (APIs) to capture key-value pairs for the reviewid and reviewBody attributes. Based on your specific requirements, you can choose to capture either the complete extracted text or parts of the text.Figure 1. Sample document for ingestionEmbedding generation:To capture the semantic relationship between the text, we use a machine learning (ML) model that maps words and sentences to high-dimensional vector embeddings. You can use Amazon SageMaker, a fully-managed ML service, to build, train, and deploy your ML models to production-ready hosted environments. You can also deploy ready-to-use pre-trained models from multiple avenues such as SageMaker JumpStart. For this blog post, we use the open-source pre-trained universal-sentence-encoder-multilingual model from TensorFlow Hub. The model deployed to a SageMaker inference endpoint generates embeddings for the document text and the search query. Figure 2 is an example of the n-dimensional vector that is generated as the output of the reviewBody attribute text provided to the embeddings model.Figure 2. Sample embedding representation of the value of reviewBodyEmbedding ingestion:To make the embeddings searchable for the content repository users, you can use the k-Nearest Neighbor (k-NN) search feature of Amazon OpenSearch Service. The OpenSearch k-NN plugin provides different methods. For this blog post, we use the Approximate k-NN search approach, based on the Hierarchical Navigable Small World (HNSW) algorithm. HNSW uses a hierarchical set of proximity graphs in multiple layers to improve performance when searching large datasets to find the “nearest neighbors” for the search query text embeddings.Semantic search:We make the search service accessible as additional backend logic on Amazon API Gateway. Authenticated content repository users send their search query using the frontend to receive the matching documents.
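As an illustration of the approximate k-NN search described above, the sketch below shows what such a query against OpenSearch Service could look like, including a filter that mirrors the access control idea discussed next. The domain endpoint, index name, and department filter field are assumptions made for the sketch; only the reviewBody_embeddings attribute name comes from this post, and authentication (for example, SigV4 signing) is omitted for brevity.

```python
from opensearchpy import OpenSearch

# Assumed values -- replace with your OpenSearch Service domain endpoint and index.
# Authentication is intentionally left out of this sketch.
client = OpenSearch(
    hosts=[{"host": "my-search-domain.eu-west-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

def knn_search(query_embedding, department, k=3):
    """Approximate k-NN query restricted to documents tagged with the caller's department."""
    body = {
        "size": k,
        "query": {
            "knn": {
                "reviewBody_embeddings": {
                    "vector": query_embedding,
                    "k": k,
                    # Efficient k-NN filtering (Lucene engine): only documents whose
                    # department attribute matches the user's claim are considered.
                    "filter": {"term": {"department": department}},
                }
            }
        },
    }
    return client.search(index="content-repo-index", body=body)
```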
The solution maintains end-to-end access control logic by using the user’s enrichedAmazon Cognitoprovided identity (ID) token claim with thedepartmentattribute to compare it with the ingested documents.Technical architectureThe technical architecture includes two parts:Implementing multilingual semantic search functionality:Describes the processing workflow for the document that the user uploads; makes the document searchable.Running input search query:Covers the search workflow for the input query; finds and returns the nearest neighbors of the input text query to the user.Part 1. Implementing multilingual semantic search functionalityOur previous blog post discussed blocks A through D (Figure 3), including user authentication, ID token enrichment,Amazon Simple Storage Service(Amazon S3) object tags for dynamic access control, and document upload to the source S3 bucket. In the following section, we cover blocks E through H. The overall workflow describes how an unstructured document is ingested in the content repository, run through the backend OCR and embeddings generation process and finally the resulting vector embedding are stored in OpenSearch service.Figure 3. Technical architecture for implementing multilingual semantic search functionalityThe OCR workflow extracts text from your uploaded documents.The source S3 bucket sends an event notification toAmazon Simple Queue Service(Amazon SQS).The document transformationAWS Lambdafunction subscribed to the Amazon SQS queue invokes an Amazon Textract API call to extract the text.The document transformation Lambda function makes aninference requestto the encoder model hosted on SageMaker. In this example, the Lambda function submits thereviewBodyattribute to the encoder model to generate the embedding.The document transformation Lambda function writes an output file in the transformed S3 bucket. The text file consists of:ThereviewidandreviewBodyattributes extracted from Step 1An additionalreviewBody_embeddingsattribute from Step 2Note:The workflow tags the output file with the same S3 object tags as the source document for downstream access control.The transformed S3 bucket sends an event notification to invoke the indexing Lambda function.The indexing Lambda function reads the text file content. Then indexing Lambda function makes an OpenSearch index API call along with source document tag as one of the indexing attributes for access control.Part 2. Running user-initiated search queryNext, we describe how the user’s request produces query results (Figure 4).Figure 4. Search query lifecycleThe user enters a search string in the web UI to retrieve relevant documents.Based on the active sign-in session, the UI passes the user’s ID token to the search endpoint of the API Gateway.The API Gateway uses Amazon Cognito integration to authorize the search API request.Once validated, the search API endpoint request invokes the search document Lambda function.The search document function sends the search query string as the inference request to the encoder model to receive the embedding as the inference response.The search document function uses the embedding response to build an OpenSearch k-NN search query. The HNSW algorithm is configured with theLucene engine and its filter optionto maintain the access control logic based on the customdepartmentclaim from the user’s ID token. 
The OpenSearch query returns the following for the query embeddings: the top three Approximate k-NN matches, plus other attributes, such as reviewid and reviewBody.The workflow sends the relevant query result attributes back to the UI.PrerequisitesYou must have the following prerequisites for this solution:An AWS account. Sign up to create and activate one.The following software installed on your development machine, or use an AWS Cloud9 environment:AWS Command Line Interface (AWS CLI); configure it to point to your AWS accountTypeScript; use a package manager, such as npmAWS Cloud Development Kit (AWS CDK)Docker; ensure it’s runningAppropriate AWS credentials for interacting with resources in your AWS account.WalkthroughSetupThe following steps deploy two AWS CDK stacks into your AWS account:content-repo-search-stack (blog-content-repo-search-stack.ts) creates the environment detailed in Figure 3, except for the SageMaker endpoint, which you create in a separate step.demo-data-stack (userpool-demo-data-stack.ts) deploys sample users, groups, and role mappings.To continue setup, use the following commands:Clone the project Git repository:git clone https://github.com/aws-samples/content-repository-with-multilingual-search content-repositoryInstall the necessary dependencies:cd content-repository/backend-cdk npm installConfigure environment variables:export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text) export CDK_DEFAULT_REGION=$(aws configure get region)Bootstrap your account for AWS CDK usage:cdk bootstrap aws://$CDK_DEFAULT_ACCOUNT/$CDK_DEFAULT_REGIONDeploy the code to your AWS account:cdk deploy --allThe complete stack set-up may take up to 20 minutes.Creation of SageMaker endpointFollow the steps below to create the SageMaker endpoint in the same AWS Region where you deployed the AWS CDK stack.Sign in to the SageMaker console.In the navigation menu, select Notebook, then Notebook instances.Choose Create notebook instance.Under the Notebook instance settings, enter content-repo-notebook as the notebook instance name, and leave other defaults as-is.Under the Permissions and encryption section (Figure 5), you need to set the IAM role section to the role with the prefix content-repo-search-stack. In case you don’t see this role automatically populated, select it from the drop-down. Leave the rest of the defaults, and choose Create notebook instance.Figure 5. Notebook permissionsThe notebook creation status changes to Pending before it’s available for use within 3-4 minutes.Once the notebook is in the Available status, choose Open Jupyter.Choose the Upload button and upload the create-sagemaker-endpoint.ipynb file in the backend-cdk folder of the root of the blog repository.Open the create-sagemaker-endpoint.ipynb notebook. Select the option Run All from the Cell menu (Figure 6). This might take up to 10 minutes.Figure 6. Run create-sagemaker-endpoint notebook cellsAfter all the cells have successfully run, verify that the AWS Systems Manager parameter sagemaker-endpoint is updated with the value of the SageMaker endpoint name. An example of the value output by the cell is shown in Figure 7. In case you don’t see the output, check if the preceding steps were run correctly.Figure 7.
SSM parameter updated with SageMaker endpointVerify in the SageMaker console that the inference endpoint with the prefixtensorflow-inferencehas been deployed and is set to statusInService.Upload sample data to the content repository:Update theS3_BUCKET_NAMEvariable in theupload_documents_to_S3.shscript in the root folder of the blog repository with thes3SourceBucketNamefrom the AWS CDK output of thecontent-repo-search-stack.Runupload_documents_to_S3.sh scriptto upload 150 sample documents to the content repository. This takes 5-6 minutes. During this process, the uploaded document triggers the workflow described in theImplementing multilingual semantic search functionality.Using the search serviceAt this stage, you have deployed all the building blocks for the content repository in your AWS account. Next, as part of the upload sample data to the content repository, you pushed a limited corpus of 150 sample documents (.png format). Each document is in one of the four different languages – English, German, Spanish and French. With the added multilingual search capability, you can query in one language and receive semantically similar results across different languages while maintaining the access control logic.Access the frontend application:Copy theamplifyHostedAppUrlvalue of the AWS CDK output from thecontent-repo-search-stackshown in the terminal.Enter the URL in your web browser to access the frontend application.A temporary page displays until the automated build and deployment of the React application completes after 4-5 minutes.Sign into the application:The content repository provides two demo users with credentials as part of thedemo-data-stackin the AWS CDK output. Copy the password from the terminal associated with thesales-user, which belongs to thesalesdepartment.Follow the prompts from the React webpage to sign in with the sales-user and change the temporary password.Enter search queries and verify results. The search action invokes the workflow described inRunning input search query. For example:Enterworks wellas the search query. Note the multilingual output and the semantically similar results (Figure 8).Figure 8. Positive sentiment multilingual search result for the sales-userEnterbad qualityas the search query. Note the multilingual output and the semantically similar results (Figure 9).Figure 9. Negative sentiment multi-lingual search result for the sales-userSign out as thesales-userwith theLog Outbutton on the webpage.Sign in using themarketing-usercredentials to verify access control:Follow the sign in procedure in step 2 but with themarketing-user.This time withworks wellas search query, you find different output. This is because the access control only allowsmarketing-userto search for the documents that belong to themarketingdepartment (Figure 10).Figure 10. Positive sentiment multilingual search result for the marketing-userCleanupIn thebackend-cdksubdirectory of the cloned repository, delete the deployed resources:cdk destroy --all.Additionally, you need to access the Amazon SageMaker console todelete the SageMaker endpoint and notebook instancecreated as part of theWalkthroughsetup section.ConclusionIn this blog, we enriched the content repository with multi-lingual semantic search features while maintaining the access control fundamentals that we implemented inPart 1. 
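If you want to experiment with the encoder endpoint outside of the deployed ingestion and search pipeline, the following sketch shows one way to request an embedding for an arbitrary string. The endpoint name is read from the sagemaker-endpoint Systems Manager parameter mentioned in the walkthrough; the TensorFlow Serving style request and response format shown here is an assumption and may differ from your deployed container.

```python
import json
import boto3

ssm = boto3.client("ssm")
runtime = boto3.client("sagemaker-runtime")

# The walkthrough stores the deployed endpoint name in this SSM parameter.
endpoint_name = ssm.get_parameter(Name="sagemaker-endpoint")["Parameter"]["Value"]

def embed(text):
    """Return a multilingual sentence embedding for `text` (request/response format assumed)."""
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps({"instances": [text]}),
    )
    return json.loads(response["Body"].read())["predictions"][0]

# Example usage: compare how close two phrases land in the embedding space.
# vec_en, vec_de = embed("works well"), embed("funktioniert gut")
```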
The building blocks of the semantic search for unstructured documents—Amazon Textract, Amazon SageMaker, and Amazon OpenSearch Service—set a foundation for you to customize and enhance the search capabilities for your specific use case. For example, you can leverage the fast developments inLarge Language Models (LLM) to enhance the semantic search experience. You can replace the encoder model with an LLM capable of generating multilingual embeddings while still maintaining the OpenSearch service to store and index data and perform vector search.Patrik NagelPatrik is a Principal Solutions Architect at AWS helping Global Financial Services customers to innovate and transform through modern software and practices. He has over 15 years of industry experience ranging from small startups to large enterprises covering a wide range of technologies. Outside of work, you find Patrik on the ice rink as an avid hockey player.Sid SinghSid is a Solutions Architect with Amazon Web Services. He works with the global financial services customers and has more than 10 years of industry experience covering a wide range of technologies. Outside of work, he loves traveling, is an avid foodie, and also a Bavarian beer enthusiast.CommentsView Comments
Leveraging vast unstructured data poses challenges, particularly for global businesses needing cross-language data search. In Part 1 of this blog series, we built the architectural foundation for the content repository. The key component of Part 1 was the dynamic access control-based logic with a web UI to upload documents. In Part 2, we extend the […]
Patrik Nagel
2023-07-26T05:56:14-07:00
[ "Amazon OpenSearch Service", "Amazon SageMaker", "Amazon Simple Storage Service (S3)", "Architecture", "AWS Lambda", "Technical How-to" ]
https://d2908q01vomqb2.c…/Fig3-fanout.png
https://aws.amazon.com/blogs/architecture/best-practices-for-implementing-event-driven-architectures-in-your-organization/
Best practices for implementing event-driven architectures in your organization
AWS Architecture BlogBest practices for implementing event-driven architectures in your organizationby Emanuele Levi | on24 JUL 2023| inAmazon EventBridge,Amazon Managed Streaming for Apache Kafka (Amazon MSK),Amazon Simple Notification Service (SNS),Amazon Simple Queue Service (SQS),Architecture,Kinesis Data Streams|Permalink|Comments|ShareEvent-driven architectures (EDA) are made up of components that detect business actions and changes in state, and encode this information in event notifications. Event-driven patterns are becoming more widespread in modern architectures because:they are the main invocation mechanism in serverless patterns.they are the preferred pattern for decoupling microservices, where asynchronous communications and event persistence are paramount.they are widely adopted as a loose-coupling mechanism between systems in different business domains, such as third-party or on-premises systems.Event-driven patterns have the advantage of enabling team independence through the decoupling and decentralization of responsibilities. This decentralization trend in turn, permits companies to move with unprecedented agility, enhancing feature development velocity.In this blog, we’ll explore the crucial components and architectural decisions you should consider when adopting event-driven patterns, and provide some guidance on organizational structures.Division of responsibilitiesThe communications flow in EDA (seeWhat is EDA?) is initiated by the occurrence of an event. Most production-grade event-driven implementations have three main components, as shown in Figure 1: producers, message brokers, and consumers.Figure 1. Three main components of an event-driven architectureProducers, message brokers, and consumers typically assume the following roles:ProducersProducersare responsible for publishing the events as they happen. They are the owners of the event schema (data structure) and semantics (meaning of the fields, such as the meaning of the value of an enum field). As this is the only contract (coupling) between producers and the downstream components of the system, the schema and its semantics are crucial in EDA. Producers are responsible for implementing a change management process, which involves both non-breaking and breaking changes. With introduction of breaking changes, consumers are able to negotiate the migration process with producers.Producers are “consumer agnostic”, as their boundary of responsibility ends when an event is published.Message brokersMessage brokersare responsible for the durability of the events, and will keep an event available for consumption until it is successfully processed. Message brokers ensure that producers are able to publish events for consumers to consume, and they regulate access and permissions to publish and consume messages.Message brokers are largely “events agnostic”, and do not generally access or interpret the event content. However, some systems provide a routing mechanism based on the event payload or metadata.ConsumersConsumersare responsible for consuming events, and own the semantics of theeffectof events. Consumers are usually bounded to one business context. This means the same event will have differenteffectsemantics for different consumers. Crucial architectural choices when implementing a consumer involve the handling of unsuccessful message deliveries or duplicate messages. 
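One common safeguard for duplicate deliveries, expanded on in the next paragraph, is the idempotent consumer pattern. The sketch below illustrates the idea with a conditional DynamoDB write used as a deduplication guard; the table name, key schema, and business-logic hook are hypothetical, and recording the event before applying the effect is itself a trade-off you would adapt to your own failure semantics.

```python
import boto3

dynamodb = boto3.client("dynamodb")
PROCESSED_TABLE = "processed-events"  # hypothetical dedup table keyed on event_id

def handle_event(event_id: str, payload: dict) -> None:
    """Process an event at most once per event_id, treating repeat deliveries as no-ops."""
    try:
        dynamodb.put_item(
            TableName=PROCESSED_TABLE,
            Item={"event_id": {"S": event_id}},
            # The write fails if this event_id was already recorded, which marks a duplicate.
            ConditionExpression="attribute_not_exists(event_id)",
        )
    except dynamodb.exceptions.ConditionalCheckFailedException:
        return  # duplicate delivery: the effect was already applied
    apply_business_effect(payload)

def apply_business_effect(payload: dict) -> None:
    """Placeholder for the consumer's actual side effect (update a projection, call an API, ...)."""
    ...
```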
Depending on the business interpretation of the event, when recovering from failure a consumer might permit duplicate events, such as with an idempotent consumer pattern.Crucially, consumers are “producer agnostic”, and their boundary of responsibility begins when an event is ready for consumption. This allows new consumers to onboard into the system without changing the producer contracts.Team independenceIn order to enforce the division of responsibilities, companies should organize their technical teams by ownership of producers, message brokers, and consumers. Although the ownership of producers and consumers is straightforward in an EDA implementation, the ownership of the message broker may not be. Different approaches can be taken to identify message broker ownership depending on your organizational structure.Decentralized ownershipFigure 2. Ownership of the message broker in a decentralized ownership organizational structureIn adecentralized ownershiporganizational structure (see Figure 2), the teams producing events are responsible for managing their own message brokers and the durability and availability of the events for consumers.The adoption oftopic fanoutpatterns based onAmazon Simple Queue Service(SQS) andAmazon Simple Notification Service(SNS) (see Figure 3), can help companies implement a decentralized ownership pattern. A bus-based pattern usingAmazon EventBridgecan also be similarly utilized (see Figure 4).Figure 3. Topic fanout pattern based on Amazon SQS and Amazon SNSFigure 4. Events bus pattern based on Amazon EventBridgeThe decentralized ownership approach has the advantage of promoting team independence, but it is not a fit for every organization. In order to be implemented effectively, a well-established DevOps culture is necessary. In this scenario, the producing teams are responsible for managing the message broker infrastructure and the non-functional requirements standards.Centralized ownershipFigure 5. Ownership of the message broker in a centralized ownership organizational structureIn acentralized ownershiporganizational structure, a central team (we’ll call it theplatform team) is responsible for the management of the message broker (see Figure 5). Having a specialized platform team offers the advantage of standardized implementation of non-functional requirements, such as reliability, availability, and security. One disadvantage is that the platform team is a single point of failure in both the development and deployment lifecycle. This could become a bottleneck and put team independence and operational efficiency at risk.Figure 6. Streaming pattern based on Amazon MSK and Kinesis Data StreamsOn top of the implementation patterns mentioned in the previous section, the presence of a dedicated team makes it easier to implement streaming patterns. In this case, a deeper understanding on how the data is partitioned and how the system scales is required. Streaming patterns can be implemented using services such asAmazon Managed Streaming for Apache Kafka(MSK) orAmazon Kinesis Data Streams(see Figure 6).Best practices for implementing event-driven architectures in your organizationThe centralized and decentralized ownership organizational structures enhance team independence or standardization of non-functional requirements respectively. However, they introduce possible limits to the growth of the engineering function in a company. Inspired by the two approaches, you can implement a set of best practices which are aimed at minimizing those limitations.Figure 7. 
Best practices for implementing event-driven architecturesIntroduce a cloud center of excellence (CCoE).A CCoE standardizes non-functional implementation across engineering teams. In order to promote a strong DevOps culture, the CCoE should not take the form of an external independent team, but rather be a collection of individual members representing the various engineering teams.Decentralize team ownership.Decentralize ownership and maintenance of the message broker to producing teams. This will maximize team independence and agility. It empowers the team to use the right tool for the right job, as long as they conform to the CCoE guidelines.Centralize logging standards and observability strategies.Although it is a best practice to decentralize team ownership of the components of an event-driven architecture, logging standards and observability strategies should be centralized and standardized across the engineering function. This centralization provides for end-to-end tracing of requests and events, which are powerful diagnosis tools in case of any failure.ConclusionIn this post, we have described the main architectural components of an event-driven architecture, and identified the ownership of the message broker as one of the most important architectural choices you can make. We have described a centralized and decentralized organizational approach, presenting the strengths of the two approaches, as well as the limits they impose on the growth of your engineering organization. We have provided some best practices you can implement in your organization to minimize these limitations.Further reading:To start your journey building event-driven architectures in AWS, explore the following:For bus-based and topic-based patterns, seeWhat is an Event-Driven Architecture?For streaming patterns, seeWhat is Apache Kafka?, andAmazon Kinesis.Emanuele LeviEmanuele is a Solutions Architect in the Enterprise Software and SaaS team, based in London. Emanuele helps UK customers on their journey to refactor monolithic applications into modern microservices SaaS architectures. Emanuele is mainly interested in event-driven patterns and designs, especially when applied to analytics and AI, where he has expertise in the fraud-detection industry.CommentsView Comments
Event-driven architectures (EDA) are made up of components that detect business actions and changes in state, and encode this information in event notifications. Event-driven patterns are becoming more widespread in modern architectures because: they are the main invocation mechanism in serverless patterns. they are the preferred pattern for decoupling microservices, where asynchronous communications and event […]
Emanuele Levi
2023-07-24T07:23:36-07:00
[ "Amazon EventBridge", "Amazon Managed Streaming for Apache Kafka (Amazon MSK)", "Amazon Simple Notification Service (SNS)", "Amazon Simple Queue Service (SQS)", "Architecture", "Kinesis Data Streams" ]
https://d2908q01vomqb2.c…ure-1260x588.png
https://aws.amazon.com/blogs/architecture/temporal-data-lake-architecture-for-benchmark-and-indices-analytics/
Temporal data lake architecture for benchmark and indices analytics
AWS Architecture BlogTemporal data lake architecture for benchmark and indices analyticsby Krishna Gogineni, Adam Glinianowicz, Sreenivas Adiki, Kavita Mittal, Mahesh Kotha, and Narsimhan Bramadesam | on21 JUL 2023| inAmazon Kinesis,Amazon Simple Storage Service (S3),Architecture,AWS Lambda,Kinesis Data Analytics,Kinesis Data Streams|Permalink|Comments|ShareFinancial trading houses and stock exchanges generate enormous volumes of data in near real-time, making it difficult to perform bi-temporal calculations that yield accurate results. Achieving this requires a processing architecture that can handle large volumes of data during peak bursts, meet strict latency requirements, and scale according to incoming volumes.In this post, we’ll describe a scenario for an industry leader in the financial services sector and explain how AWS services are used for bi-temporal processing with state management and scale based on variable workloads during the day, all while meeting strict service-level agreement (SLA) requirements.Problem statementTo design and implement a fully temporal transactional data lake with the repeatable read isolation level for queries is a challenge, particularly with burst events that need the overall architecture to scale accordingly. The data store in the overall architecture needs to record the value history of data at different times, which is especially important for financial data. Financial data can include corporate actions, annual or quarterly reports, or fixed-income securities, like bonds that have variable rates. It’s crucial to be able to correct data inaccuracies during the reporting period.The example customer seeks a data processing platform architecture to dynamically scale based on the workloads with a capacity of processing 150 million records under 5 minutes. Their platform should be capable of meeting the end-to-end SLA of 15 minutes, from ingestion to reporting, with lowest total cost of ownership. Additionally, managing bi-temporal data requires a database that has critical features, such asACID(atomicity, consistency, isolation, durability) compliance, time-travel capability, full-schema evolution, partition layout and evolution, rollback to prior versions, and SQL-like query experience.Solution overviewThe solution architecture key building blocks areAmazon Kinesis Data Streamsfor streaming data,Amazon Kinesis Data AnalyticswithApache Flinkas processing engine, Flink’sRocksDBfor state management, andApache IcebergonAmazon Simple Storage Service(Amazon S3) as the storage engine (Figure 1).Figure 1. End-to-end data-processing architectureData processingHere’s how it works:A publisher application receives the data from the source systems and publishes data into Kinesis Data Streams using a well-defined JSON format structure.Kinesis Data Streams holds the data for a duration that is configurable so data is not lost and can auto scale based on the data volume ingested.Kinesis Data Analytics runs an Apache Flink application, with state management (RocksDB), to handle bi-temporal calculations. 
The Apache Flink application consumes data from Kinesis Data Streams and performs the following computations:Transforms the JSON stream into a row-type record, compatible with a SQL table-like structure, resolving nesting and parent–child relationships present within the streamChecks whether the record has already an existing state in in-memory RocksDB or disk attached to Kinesis Data Analytics computational node to avoid read latency from the database, which is critical for meeting the performance requirementsPerforms bi-temporal calculations and creates the resultant records in an in-memory data structure before invoking the Apache Iceberg sink operatorThe Apache Flink application sink operator appends the temporal states, expressed as records into existing Apache Iceberg data store. This will comply with key principles of time series data, which is immutable, and the ability to time-travel along with ACID compliance, schema evolution, and partition evolutionKinesis Data Analytics is resilient and provides a no-data-loss capability, with features like periodic checkpoints and savepoints. They are used to store the state management in a secure Amazon S3 location that can be accessed outside of Kinesis Data Analytics. This savepoints mechanism can be used to programmatically to scale the cluster size based on the workloads using time-driven scheduling andAWS Lambdafunctions.If the time-to-live feature of RocksDB is implemented, old records are stored in Apache Iceberg on Amazon S3. When performing temporal calculations, if the state is not found in memory, data is read from Apache Iceberg into RocksDB and the processing is completed. However, this step is optional and can be circumvented if the Kinesis Data Analytics cluster is initialized with right number of Kinesis processing units to hold the historical information, as per requirements.Because the data is stored in an Apache Iceberg table format in Amazon S3, data is queried usingTrino, which supports Apache Iceberg table format.The end user queries data using any SQL tool that supports the Trino query engine.Apache Iceberg maintenance jobs, such as data compaction, expire snapshot, delete orphan files, can be launched usingAmazon Athenato optimize performance out of Apache Iceberg data store. Details of each processing step performed in Apache Flink application are captured usingAmazon CloudWatch, which logs all the events.ScalabilityAmazon EventBridgescheduler invokes a Lambda function to scale the Kinesis Data Analytics. Kinesis Data Analytics has a short outage during rescaling that is proportional to the amount of data stored in RocksDB, which is why a state management strategy is necessary for the proper operation of the system.Figure 2 shows the scaling process, which depicts:Before peak load:The Kinesis Data Analytics cluster is processing off-peak records with minimum configuration before the peak load. 
A scheduled event is launched from EventBridge that invokes a Lambda function, which shuts down the cluster using the savepoint mechanism and scales up the Kinesis Data Analytics cluster to the required number of Kinesis processing units (a simplified scaling sketch appears later in this post).During peak load:When the peak data burst happens, the Kinesis Data Analytics cluster is ready to handle the volume of data from the Kinesis data stream, and processes it within the SLA of 5 minutes.After peak load:A scheduled event from EventBridge invokes a Lambda function to scale down the Kinesis Data Analytics cluster to the minimum configuration that holds the required state for the entire volume of records.Figure 2. Cluster scaling before, during, and after peak data volume processingPerformance insightsWith the discussed architecture, we want to demonstrate that we are able to meet the SLAs, in terms of performance and processing times. We have taken a subset of benchmarks and indices data and processed the same with the end-to-end architecture. During the process, we observed some very interesting findings, which we would like to share.Processing time for Apache Iceberg Upsert vs Append operations:During our tests, we expected the Upsert operation to be faster than Append. On the contrary, we noticed that Append operations were faster compared to Upsert even though more computations are performed in the Apache Flink application. In our test with 3,500,000 records, the Append operation took 1,556 seconds while Upsert took 1,675 seconds to process the data (Figure 3).Figure 3. Processing times for Upsert vs. AppendCompute consumption for Apache Iceberg Upsert vs. Append operations:Comparing the compute consumption for 10,000,000 records, we noticed that the Append operation was able to process the data in the same amount of time as the Upsert operation but with fewer compute resources. In our tests, we noted that the Append operation consumed only 64 Kinesis processing units, whereas Upsert consumed 78 Kinesis processing units (Figure 4).Figure 4. Comparing consumption for Upsert vs. AppendScalability vs performance:To achieve the desired data processing performance, we need a specific configuration of Kinesis processing units, Kinesis Data Streams, and Iceberg parallelism. In our test with the data that we chose, we started with four Kinesis processing units and four Kinesis data streams for data processing. We observed an 80% performance improvement in data processing with 16 Kinesis processing units. An additional 6% performance improvement was demonstrated when we scaled to 32 Kinesis processing units. When we increased the Kinesis data streams to 16, we observed an additional 2% performance improvement (Figure 5).Figure 5. Scalability vs. performanceData volume processing times for Upsert vs. Append:For this test, we started with 350,000 records of data. When we increased the data volume to 3.5M records, we observed Append performing better than Upsert, with processing time increasing roughly five-fold (Figure 6).Figure 6. Data volume processing times for Upsert vs. AppendConclusionThe architecture we explored today scales based on the data-volume requirements of the customer and is capable of meeting the end-to-end SLA of 15 minutes, with a potentially lower total cost of ownership.
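To make the scheduled scaling step from the Scalability section more concrete before closing, here is a simplified sketch of a Lambda handler that changes the Flink application’s parallelism through the Kinesis Data Analytics v2 API. The application name and event shape are assumptions, and the production approach described above additionally relies on savepoints/snapshots to preserve state across the resize.

```python
import boto3

kda = boto3.client("kinesisanalyticsv2")
APP_NAME = "bitemporal-flink-app"  # hypothetical application name

def lambda_handler(event, context):
    """Invoked on an EventBridge schedule to scale the Flink application up or down."""
    target_parallelism = int(event.get("target_parallelism", 4))

    # The update call needs the current application version id.
    version = kda.describe_application(ApplicationName=APP_NAME)[
        "ApplicationDetail"]["ApplicationVersionId"]

    kda.update_application(
        ApplicationName=APP_NAME,
        CurrentApplicationVersionId=version,
        ApplicationConfigurationUpdate={
            "FlinkApplicationConfigurationUpdate": {
                "ParallelismConfigurationUpdate": {
                    "ConfigurationTypeUpdate": "CUSTOM",
                    "ParallelismUpdate": target_parallelism,
                }
            }
        },
    )
    return {"scaled_to": target_parallelism}
```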
Additionally, the solution is capable of handling high-volume, bi-temporal computations with ACID compliance, time travel, full-schema evolution, partition layout evolution, rollback to prior versions and SQL-like query experience.Further readingEnhanced monitoring and automatic scaling for Apache FlinkResilience in Amazon Kinesis Data Analytics for Apache FlinkKinesis Data Analytics for Apache Flink: How It WorksCreating a Kinesis Data Analytics for Apache Flink ApplicationState TTL in Flink 1.8.0: How to Automatically Cleanup Application State in Apache FlinkApplication Scaling in Kinesis Data Analytics for Apache FlinkKrishna GogineniKrishna Gogineni is a Principal Solutions Architect at AWS helping financial services customers. Krishna is Cloud-Native Architecture evangelist helping customers transform the way they build software. Krishna works with customers to learn their unique business goals, and then super-charge their ability to meet these goals through software delivery that leverages industry best practices/tools such as DevOps, Data Lakes, Data Analytics, Microservices, Containers, and Continuous Integration/Continuous Delivery.Adam GlinianowiczAdam is a Solution Architect with a specialty in data storage and management, plus full-text Search technologies built on AWS Cloud. Currently, he is building the core data storage solutions at London Stock Exchange Group to standardize the persistence and retrieval of temporal data from various financial domains. With the R&D background, he is interested in learning of and experimenting with all technologies from the data domain.Sreenivas AdikiSreenivas Adiki is a Sr. Customer Delivery Architect in ProServe, with a focus on data and analytics. He ensures success in designing, building, optimizing, and transforming in the area of Big Data/Analytics. Ensuring solutions are well-designed for successful deployment, Sreenivas participates in deep architectural discussions and design exercises. He has also published several AWS assets, such as whitepapers and proof-of-concept papers.Kavita MittalKavita is Principal Solutions Architect at London Stock Exchange Group, having nearly 20 years of experience in data and architecture. She is passionate about solving complex business problems with data and transforming data to deliver business value. She is always curious about new technology and architecture design/patterns and how to incorporate these into day to day challenges to make this world a better place!Mahesh KothaMahesh Kotha is a Data & Machine Learning Engineer at AWS Data Analytics specialty group. He has 17 years of experience architecting, building data lake platforms using Apache Spark/Flink/Iceberg. Mahesh helps customers to solve the complex industry problems using AWS data lake/streaming solutions. He likes to explore new AWS features/services so customers get the best price/performance.Narsimhan BramadesamNarsimhan Bramadesam is the Director of the Data Platform in the Data & Analytics group at London Stock Exchange Group. He focuses on developing a comprehensive data platform that grants access to a wide range of market, reference, end-of-day, time series, and alternative data in the Cloud. With a background as a former data engineer, he is deeply passionate about all aspects of data and AI.CommentsView Comments
Financial trading houses and stock exchanges generate enormous volumes of data in near real-time, making it difficult to perform bi-temporal calculations that yield accurate results. Achieving this requires a processing architecture that can handle large volumes of data during peak bursts, meet strict latency requirements, and scale according to incoming volumes. In this post, we’ll […]
Krishna Gogineni
2023-07-21T06:07:38-07:00
[ "Amazon Kinesis", "Amazon Simple Storage Service (S3)", "Architecture", "AWS Lambda", "Kinesis Data Analytics", "Kinesis Data Streams" ]
https://d2908q01vomqb2.c…itect_REVIEW.jpg
https://aws.amazon.com/blogs/architecture/lets-architect-devops-best-practices-on-aws/
Let’s Architect! DevOps Best Practices on AWS
AWS Architecture BlogLet’s Architect! DevOps Best Practices on AWSby Luca Mezzalira, Federica Ciuffo, Laura Hyatt, Vittorio Denti, and Zamira Jaupaj | on19 JUL 2023| inArchitecture,DevOps,Thought Leadership|Permalink|Comments|ShareDevOps has revolutionized software development and operations by fostering collaboration, automation, and continuous improvement. By bringing together development and operations teams, organizations can accelerate software delivery, enhance reliability, and achieve faster time-to-market.In this blog post, we will explore the best practices and architectural considerations for implementing DevOps with Amazon Web Services (AWS), enabling you to build efficient and scalable systems that align with DevOps principles. The Let’s Architect! team wants to share useful resources that help you to optimize your software development and operations.DevOps revolutionDistributed systems are adopted by enterprises more frequently now. When an organization wants to leverage distributed systems’ characteristics, it requires a mindset and approach shift, akin to a new model for the software development lifecycle.In this re:Invent 2021 video, Emily Freeman, now Head of Community Engagement at AWS, shares with us the insights gained in the trenches when adopting a new software development lifecycle that will help your organization thrive using distributed systems.Take me to this re:Invent 2021 video!Operationalizing the DevOps revolutionMy CI/CD pipeline is my release captainDesigning effective DevOps workflows is necessary for achieving seamless collaboration between development and operations teams. The Amazon Builders’ Library offers a wealth of guidance on designing DevOps workflows that promote efficiency, scalability, and reliability. From continuous integration and deployment strategies to configuration management and observability, this resource covers various aspects of DevOps workflow design. By following the best practices outlined in the Builders’ Library, you can create robust and scalable DevOps workflows that facilitate rapid software delivery and smooth operations.Take me to this resource!A pipeline coordinates multiple inflight releases and promotes them through three stagesUsing Cloud Fitness Functions to Drive Evolutionary ArchitectureCloud fitness functions provide a powerful mechanism for driving evolutionary architecture within your DevOps practices. By defining and measuring architectural fitness goals, you can continuously improve and evolve your systems over time.This AWS Architecture Blog post delves into how AWS services, like AWS Lambda, AWS Step Functions, and Amazon CloudWatch, can be leveraged to implement cloud fitness functions effectively. By integrating these services into your DevOps workflows, you can establish an architecture that evolves in alignment with changing business needs: improving system resilience, scalability, and maintainability.Take me to this AWS Architecture Blog post!Fitness functions provide feedback to engineers via metricsMulti-Region Terraform Deployments with AWS CodePipeline using Terraform Built CI/CDAchieving consistent deployments across multiple regions is a common challenge. This AWS DevOps Blog post demonstrates how to use Terraform, AWS CodePipeline, and infrastructure-as-code principles to automate Multi-Region deployments effectively.
By adopting this approach, you can achieve consistent infrastructure and application deployments, improving the scalability, reliability, and availability of your DevOps practices.The post also provides practical examples and step-by-step instructions for implementing Multi-Region deployments with Terraform and AWS services, enabling you to leverage the power of infrastructure-as-code to streamline DevOps workflows.Take me to this AWS DevOps Blog post!Multi-Region AWS deployment with IaC and CI/CD pipelinesSee you next time!Thanks for joining our discussion on DevOps best practices! Next time we’ll talk about how to create resilient workloads on AWS.To find all the blogs from this series, check out the Let’s Architect! list of content on the AWS Architecture Blog. See you soon!TAGS:AWS CodeCommit,AWS CodePipeline,Let's Architect,microservices,serverlessLuca MezzaliraLuca is Principal Solutions Architect based in London. He has authored several books and is an international speaker. He lent his expertise predominantly in the solution architecture field. Luca has gained accolades for revolutionizing the scalability of front-end architectures with micro-frontends, from increasing the efficiency of workflows, to delivering quality in products.Federica CiuffoFederica is a Solutions Architect at Amazon Web Services. She is specialized in container services and is passionate about building infrastructure with code. Outside of the office, she enjoys reading, drawing, and spending time with her friends, preferably in restaurants trying out new dishes from different cuisines.Laura HyattLaura Hyatt is a Solutions Architect for AWS Public Sector and helps Education customers in the UK. Laura helps customers not only architect and develop scalable solutions but also think big on innovative solutions facing the education sector at present. Laura's specialty is IoT, and she is also the Alexa SME for Education across EMEA.Vittorio DentiVittorio Denti is a Machine Learning Engineer at Amazon based in London. After completing his M.Sc. in Computer Science and Engineering at Politecnico di Milano (Milan) and the KTH Royal Institute of Technology (Stockholm), he joined AWS. Vittorio has a background in distributed systems and machine learning. He's especially passionate about software engineering and the latest innovations in machine learning science.Zamira JaupajZamira is an Enterprise Solutions Architect based in the Netherlands. She is highly passionate IT professional with over 10 years of multi-national experience in designing and implementing critical and complex solutions with containers, serverless, and data analytics for small and enterprise companies.CommentsView Comments
DevOps has revolutionized software development and operations by fostering collaboration, automation, and continuous improvement. By bringing together development and operations teams, organizations can accelerate software delivery, enhance reliability, and achieve faster time-to-market. In this blog post, we will explore the best practices and architectural considerations for implementing DevOps with Amazon Web Services (AWS), enabling you […]
Luca Mezzalira
2023-07-19T06:13:39-07:00
[ "Architecture", "DevOps", "Thought Leadership" ]
https://d2908q01vomqb2.c…ting-828x630.png
https://aws.amazon.com/blogs/architecture/ibm-consulting-creates-innovative-aws-solutions-in-french-hackathon/
IBM Consulting creates innovative AWS solutions in French Hackathon
AWS Architecture BlogIBM Consulting creates innovative AWS solutions in French Hackathonby Diego Colombatto and Selsabil Gaied | on12 JUL 2023| inAmazon API Gateway,Amazon CloudFront,Amazon DynamoDB,Amazon Simple Storage Service (S3),Architecture,AWS Lambda|Permalink|ShareIn March 2023, IBM Consulting delivered an Innovation Hackathon in France, aimed at designing and building new innovative solutions for real customer use cases using the AWS Cloud.In this post, we briefly explore six of the solutions considered and demonstrate the AWS architectures created and implemented during the Hackathon.Hackathon solutionsSolution 1: Optimize digital channels monitoring and management for MarketingMonitoring Marketing campaign impact can require a lot of effort, such as customers and competitors’ reactions on digital media channels. Digital campaign managers need this data to evaluate customer segment penetration and overall campaign effectiveness. Information can be collected via digital-channel API integrations or on the digital channel user interface (UI): digital-channel API integrations require frequent maintenance, while UI data collection can be labor-intensive.On the AWS Cloud, IBM designed an augmented digital campaign manager solution, to assist digital campaign managers with digital-channel monitoring and management. This solution monitors social media APIs and, when APIs change, automatically updates the API integration, ensuring accurate information collection (Figure 1).Figure 1. Optimize digital channels monitoring and management for MarketingAmazon Simple Storage Service(Amazon S3) andAWS Lambdaare used to garner new digital estates, such as new social media APIs, and assess data quality.Amazon Kinesis Data Streamsis used to decouple data ingestion from data query and storage.Lambda retrieves the required information fromAmazon DynamoDB, like the most relevant brands; natural language processing (NLP) is applied to retrieved data, like URL, bio, about, verification status.Amazon S3 andAmazon CloudFrontare used to present a dashboard where end-users can check, enrich, and validate collected data.When graph API calls detect an error/change, Lambda checks API documentation to update/correct the API call.A new Lambda function is generated, with updated API call.Solution 2: 4th party logistics consulting service for a greener supply chainLogistics companies have a wealth of trip data, both first- and third-party, and can leverage these data to provide new customer services, such as options for trips booking with optimized carbon footprint, duration, or costs.IBM designed an AWS solution (Figure 2) enabling the customer to book goods transport by selecting from different route options, combining transport modes, selecting departure-location, arrival, cargo weight and carbon emissions. Proposed options include the greenest, fastest, and cheapest routes. Additionally, the user can provide financial and time constraints.Figure 2. 
Solution 2: 4th-party logistics consulting service for a greener supply chain

Logistics companies have a wealth of trip data, both first- and third-party, and can leverage these data to provide new customer services, such as options for booking trips with optimized carbon footprint, duration, or costs.

IBM designed an AWS solution (Figure 2) enabling the customer to book goods transport by selecting from different route options, combining transport modes, and specifying departure location, arrival, cargo weight, and carbon emissions. Proposed options include the greenest, fastest, and cheapest routes. Additionally, the user can provide financial and time constraints. (A minimal sketch of this route-scoring step appears after Solution 3 below.)

Figure 2. Optimized transport booking architecture

1. The user connects to the web-app UI, hosted on Amazon S3.
2. Amazon API Gateway receives user requests from the web app; requests are forwarded to Lambda.
3. Lambda calculates the best trip options based on the user's prerequisites, such as carbon emissions.
4. Lambda estimates carbon emissions; estimates are combined with the trip options at Step 3.
5. The Amazon Neptune graph database is used to efficiently store and query trip data.
6. Different Lambda instances are used to ingest data from on-premises data sources and to send customer bookings through the customer ordering system.

Solution 3: Purchase order as a service

In the context of vendor-managed inventory and vendor-managed replenishment, inventory and logistics companies want to check warehouse stock levels to identify the best available options for goods transport. Their objective is to optimize the availability of warehouse stock for order fulfillment; therefore, when a purchase order (PO) is received, the required goods are identified as available in the correct warehouse, enabling swift delivery with minimal lead time and costs.

IBM designed an AWS PO-as-a-service solution (Figure 3), using warehouse data to forecast future customer POs. Based on this forecast, the solution plans and optimizes warehouse goods availability and, hence, the logistics required for PO fulfillment.

Figure 3. Purchase order as a service AWS solution

1. AWS Amplify provides the web/mobile UI where users can set constraints (such as minimum/maximum warehouse capacity) and check warehouse states and POs in progress. Additionally, the UI proposes possible optimized POs, which are automatically generated by the solution. If the user accepts one of these solution-generated POs, the user benefits from optimized delivery time, costs, and carbon footprint.
2. Lambda receives Amazon Forecast inferences and reads/writes PO information on Amazon DynamoDB.
3. Forecast provides inferences regarding the most probable future POs. Forecast uses POs, warehouse data, and goods delivery data to automatically train a machine learning (ML) model that is used to generate forecast inferences.
4. Amazon DynamoDB stores PO and warehouse information.
5. Lambda pushes PO, warehouse, and goods delivery data from Amazon DynamoDB into Amazon S3. These data are used to re-train the Forecast ML model, to ensure high-quality forecasting inferences.
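As referenced in Solution 2, here is a minimal sketch of how a Lambda function might rank trip options by carbon emissions, duration, and cost behind API Gateway. The option structure and constraint handling are assumptions for illustration; the Hackathon team's actual scoring logic and Neptune data model are not described in the post.

```python
# Hypothetical route-scoring step for Solution 2 (not the Hackathon team's actual logic).
from dataclasses import dataclass


@dataclass
class TripOption:
    route_id: str
    co2_kg: float       # estimated carbon emissions
    duration_h: float   # estimated trip duration
    cost_eur: float     # estimated cost


def rank_options(options: list[TripOption], max_cost: float | None = None) -> dict:
    """Return the greenest, fastest, and cheapest options, honoring an optional cost constraint."""
    eligible = [o for o in options if max_cost is None or o.cost_eur <= max_cost]
    if not eligible:
        return {}
    return {
        "greenest": min(eligible, key=lambda o: o.co2_kg),
        "fastest": min(eligible, key=lambda o: o.duration_h),
        "cheapest": min(eligible, key=lambda o: o.cost_eur),
    }


def handler(event, context):
    """API Gateway -> Lambda entry point; in the real solution, options would come from Neptune."""
    options = [TripOption(**o) for o in event.get("options", [])]
    ranked = rank_options(options, max_cost=event.get("max_cost"))
    return {label: option.route_id for label, option in ranked.items()}
```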
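For Solution 3, the next sketch shows how the Lambda function at step 2 could pull a forecast for a given item from an existing Amazon Forecast predictor and persist the p50 series to DynamoDB. The forecast ARN, item identifier, and table name are placeholders, not values from the Hackathon solution.

```python
# Sketch of Solution 3, step 2: query Amazon Forecast and store the result in DynamoDB.
# The ARN, item id, and table name below are placeholders.
import boto3

forecast_query = boto3.client("forecastquery")
table = boto3.resource("dynamodb").Table("PurchaseOrderForecasts")


def handler(event, context):
    response = forecast_query.query_forecast(
        ForecastArn="arn:aws:forecast:eu-west-1:123456789012:forecast/po-forecast",
        Filters={"item_id": event["item_id"]},
    )
    predictions = response["Forecast"]["Predictions"]  # quantile series such as p10/p50/p90
    p50 = predictions.get("p50", [])
    table.put_item(
        Item={
            "item_id": event["item_id"],
            # DynamoDB does not accept Python floats, so values are stored as strings here.
            "p50": [{"ts": point["Timestamp"], "value": str(point["Value"])} for point in p50],
        }
    )
    return {"points": len(p50)}
```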
Solution 4: Optimize the environmental impact of engineers' interventions for customer fiber connections

Telco companies that provide end users' internet connections need engineers to execute field tasks, like deploying, activating, and repairing subscribers' lines. In this scenario, it's important to identify the most efficient engineers' itinerary.

IBM designed an AWS solution that automatically generates engineers' itineraries, considering criteria such as mileage, carbon-emission generation, and electric/thermal vehicle availability. The solution (Figure 4) provides:

- Customer management teams with a mobile dashboard showing carbon-emission estimates for all engineers' journeys, both in progress and planned
- Engineers with a mobile application including an optimized itinerary, trip updates based on real-time traffic, and unexpected events

(A minimal sketch of the itinerary-calculation step appears after Solution 5 below.)

Figure 4. AWS telco solution for greener customer service

1. The management team and engineers connect to the web and mobile applications, respectively. Amazon Cognito provides authentication and authorization, Amazon S3 stores application static content, and API Gateway receives and forwards API requests.
2. AWS Step Functions implements the different workflows. Application logic is implemented in Lambda, which connects to DynamoDB to get trip data (current route and driver location); Amazon Location Service provides itineraries, and an Amazon SageMaker ML model implements the itinerary optimization engine.
3. Independently from online users, trip data are periodically sent to API Gateway and stored in Amazon S3.
4. A SageMaker notebook periodically uses the Amazon S3 data to re-train the trip optimization ML model with updated data.

Solution 5: Improve the effectiveness of customer SAP level 1 support by reducing response times for common information requests

Companies using SAP usually provide first-level support to their internal SAP users. SAP users engage the support team (usually via a ticketing system) to ask for help when facing SAP issues or to request additional information. A high number of information requests requires significant effort to retrieve and provide the available information from resources like SAP notes, documentation, or similar support requests.

IBM designed an AWS solution (Figure 5) that, based on support request information, can automatically provide a short list of the most probable solutions with a confidence score.

Figure 5. SAP customer support solution

1. Lambda receives ticket information, such as ticket number, business service, and description.
2. Lambda processes the ticket data, and Amazon Translate translates the text into the country's native language and English.
3. The SageMaker ML model receives the question and provides an inference.
4. If the inference has a high confidence score, Lambda provides it immediately as output.
5. If the inference has a low confidence score, Amazon Kendra receives the question, automatically searches through indexed company information, and provides the best answer available. Lambda then provides the answer as output.
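As referenced in Solution 4, the sketch below shows one way the itinerary step could call Amazon Location Service to get distance and duration between two points. The route calculator name and coordinates are placeholders; the actual optimization engine in the solution sits in SageMaker and is not reproduced here.

```python
# Sketch of Solution 4's itinerary step: request a route from Amazon Location Service.
# The route calculator name and coordinates are placeholders.
import boto3

location = boto3.client("location")


def get_route(departure: tuple[float, float], destination: tuple[float, float]) -> dict:
    """Return distance (km) and duration (s) for a car route between two (lon, lat) points."""
    response = location.calculate_route(
        CalculatorName="engineer-route-calculator",  # hypothetical resource name
        DeparturePosition=list(departure),
        DestinationPosition=list(destination),
        TravelMode="Car",
        DistanceUnit="Kilometers",
    )
    summary = response["Summary"]
    return {"distance_km": summary["Distance"], "duration_s": summary["DurationSeconds"]}


if __name__ == "__main__":
    # Example: Paris city centre to La Defense, in (longitude, latitude) order.
    print(get_route((2.3522, 48.8566), (2.2376, 48.8924)))
```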
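For Solution 5, the following sketch strings together the translate, infer, and fall-back steps: Amazon Translate normalizes the ticket text, a SageMaker endpoint returns an answer with a score, and Amazon Kendra is queried when the score is low. The endpoint name, Kendra index ID, confidence threshold, and the endpoint's response shape are all assumptions for illustration.

```python
# Sketch of Solution 5: translate the ticket, query an ML endpoint, and fall back to Kendra
# when confidence is low. Endpoint name, index id, threshold, and response shape are assumed.
import json

import boto3

translate = boto3.client("translate")
sagemaker_runtime = boto3.client("sagemaker-runtime")
kendra = boto3.client("kendra")

CONFIDENCE_THRESHOLD = 0.7  # assumed value; not stated in the post


def handler(event, context):
    english_text = translate.translate_text(
        Text=event["ticket_description"],
        SourceLanguageCode="auto",
        TargetLanguageCode="en",
    )["TranslatedText"]

    prediction = json.loads(
        sagemaker_runtime.invoke_endpoint(
            EndpointName="sap-l1-support-model",  # hypothetical endpoint name
            ContentType="application/json",
            Body=json.dumps({"question": english_text}),
        )["Body"].read()
    )

    if prediction.get("score", 0.0) >= CONFIDENCE_THRESHOLD:
        return {"answer": prediction["answer"], "source": "model"}

    # Low confidence: search the indexed company documentation instead.
    kendra_result = kendra.query(IndexId="sap-knowledge-index", QueryText=english_text)
    top = kendra_result["ResultItems"][0] if kendra_result["ResultItems"] else None
    return {"answer": top["DocumentExcerpt"]["Text"] if top else None, "source": "kendra"}
```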
Solution 6: Improve contact center customer experience by providing faster and more accurate customer support

Insured customers often interact with insurance companies through contact centers, requesting information and services regarding their insurance policies.

IBM designed an AWS solution that improves end-customer experience and contact center agent efficiency by providing automated summarization of customer-agent calls and chats. This enables:

- The agent to quickly recall the customer's need in subsequent interactions
- The contact center supervisor to quickly understand the objective of each case (intervening if necessary)
- Insured customers to quickly get the information they need, without repeating information already provided

Figure 6. Improving contact center customer experience

1. Summarization capability is provided by generative AI, leveraging large language models (LLMs) on SageMaker.
2. A pretrained LLM from Hugging Face is stored on Amazon S3.
3. The LLM is fine-tuned and trained using Amazon SageMaker.
4. The LLM is made available as a SageMaker API endpoint, ready to provide inferences.
5. The insured user contacts customer support; the user request goes through a voice/chatbot, then reaches Amazon Connect.
6. Lambda queries the LLM model. The inference provided by the LLM is sent to an Amazon Connect instance, where it is enriched with knowledge-based search using Amazon Connect Wisdom.
7. If the user-agent conversation was a voice interaction (like a phone call), the call recording is transcribed using Amazon Transcribe. Then, Lambda is called for summarization.

Conclusion

In this blog post, we have explored how IBM Consulting delivered an Innovation Hackathon in France. During the Hackathon, IBM worked backward from real customer use cases, designing and building innovative solutions using AWS services.

Diego Colombatto
Diego Colombatto is a Senior Partner Solutions Architect at AWS. He brings more than 15 years of experience in designing and delivering digital transformation projects for enterprises. At AWS, Diego works with partners and customers, advising them on how to leverage AWS technologies to translate business needs into solutions. IT architectures and algorithmic trading are some of his passions, and he's always open to starting a conversation on these topics.

Selsabil Gaied
Selsabil Gaied is a Senior Architect at IBM Consulting in France. With 8 years of experience, she has successfully delivered large-scale data and AI transformation programs for clients across various sectors. Selsabil excels in developing strategic roadmaps and technical architectures that harness the capabilities of AWS for data and AI solutions. Her expertise includes establishing and fostering productive relationships and teams, translating business needs into innovative solutions, and ensuring successful solution delivery.
In March 2023, IBM Consulting delivered an Innovation Hackathon in France, aimed at designing and building new innovative solutions for real customer use cases using the AWS Cloud. In this post, we briefly explore six of the solutions considered and demonstrate the AWS architectures created and implemented during the Hackathon. Hackathon solutions Solution 1: Optimize […]
Diego Colombatto
2023-07-12T06:05:15-07:00
[ "Amazon API Gateway", "Amazon CloudFront", "Amazon DynamoDB", "Amazon Simple Storage Service (S3)", "Architecture", "AWS Lambda" ]
