For audio versions: Podcast RSS Feed
Podcast version also available on iTunes
Remember to subscribe to the AWS Podcast too!
Andy Jassy, CEO of Amazon Web Services, delivers his AWS re:Invent 2017 keynote, featuring the latest news and announcements, including the launches of Amazon Elastic Container Service for Kubernetes (Amazon EKS), AWS Fargate, Aurora Multi-Master, Aurora Serverless, DynamoDB Global Tables, Amazon Neptune, S3 Select, Amazon SageMaker, AWS DeepLens, Amazon Rekognition Video, Amazon Kinesis Video Streams, Amazon Transcribe, Amazon Translate, Amazon Comprehend, AWS IoT 1-Click, AWS IoT Device Management, AWS IoT Device Defender, AWS IoT Analytics, Amazon FreeRTOS, and AWS Greengrass ML Inference. Guest speakers include Dr. Matt Wood, of AWS; Roy Joseph, of Goldman Sachs; Mark Okerstrom, of Expedia; and Michelle McKenna-Doyle, of the NFL.
Watch Werner Vogels deliver his AWS re:Invent 2017 keynote, featuring the launch of Alexa for Business, AWS Cloud9, new AWS Lambda features, and the AWS Serverless Application Repository.
Watch Peter DeSantis, VP, AWS Global Infrastructure, in the Tuesday Night Live keynote, featuring Brian Mathews, of Autodesk, and Greg Peters, of Netflix.
Sessions recommended at the end of this keynote are:
Also available as a YouTube playlist.
Analytics & Big Data
In this session, we simplify big data processing as a data bus comprising various stages: collect, store, process, analyze, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
Serverless technologies let you build and scale applications and services rapidly without the need to provision or manage servers. In this session, we show you how to incorporate serverless concepts into your big data architectures. We explore the concepts behind and benefits of serverless architectures for big data, looking at design patterns to ingest, store, process, and visualize your data. Along the way, we explain when and how you can use serverless technologies to streamline data processing, minimize infrastructure management, and improve agility and robustness and share a reference architecture using a combination of cloud and open source technologies to solve your big data problems. Topics include: use cases and best practices for serverless big data applications; leveraging AWS technologies such as Amazon DynamoDB, Amazon S3, Amazon Kinesis, AWS Lambda, Amazon Athena, and Amazon EMR; and serverless ETL, event processing, ad hoc analysis, and real-time analytics.
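The serverless event-processing pattern described above can be sketched with a minimal AWS Lambda consumer for a Kinesis stream. This is an illustrative sketch, not code from the session: the `event_type` field and the aggregation logic are hypothetical, and a real pipeline would persist the aggregates to DynamoDB or S3 rather than just returning them.

```python
import base64
import json
from collections import Counter


def handler(event, context):
    """Aggregate one Kinesis batch by event type (illustrative Lambda consumer)."""
    counts = Counter()
    for record in event["Records"]:
        # Kinesis delivers record payloads to Lambda base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        counts[payload.get("event_type", "unknown")] += 1
    # In a real pipeline, write these aggregates to DynamoDB or S3 here.
    return dict(counts)
```

Because the handler takes the standard Kinesis-to-Lambda event shape, it can be exercised locally with a synthetic event before deploying.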
To win in the marketplace and provide differentiated customer experiences, businesses need to be able to use live data in real time to facilitate fast decision making. In this session, you learn common streaming data processing use cases and architectures. First, we give an overview of streaming data and AWS streaming data capabilities. Next, we look at a few customer examples and their real-time streaming applications. Finally, we walk through common architectures and design patterns of top streaming data use cases.
Data speaks. Discover how Ivy Tech, the nation's largest singly accredited community college, uses AWS to gather, analyze, and take action on student behavioral data for the betterment of over 3,100 students. This session outlines the process from inception to implementation across the state of Indiana and highlights how Ivy Tech's model can be applied to your own complex business problems.
Just as a picture is worth a thousand words, a visual is worth a thousand data points. A key aspect of our ability to gain insights from our data is to look for patterns, and these patterns are often not evident when we simply look at data in tables. The right visualization will help you gain a deeper understanding much more quickly. In this session, we will show you how to quickly and easily visualize your data using Amazon QuickSight. We will show you how you can connect to data sources, generate custom metrics and calculations, create comprehensive business dashboards with various chart types, and set up filters and drill-downs to slice and dice the data.
Banks aren't known to share data and collaborate with one another. But that is exactly what the Mid-Sized Bank Coalition of America (MBCA) is doing to fight digital financial crime—and protect national security. Using the AWS Cloud, the MBCA developed a shared data analytics utility that processes terabytes of non-competitive customer account, transaction, and government risk data. The intelligence produced from the data helps banks increase the efficiency of their operations, cut labor and operating costs, and reduce false positive volumes. The collective intelligence also allows greater enforcement of Anti-Money Laundering (AML) regulations by helping members detect internal risks—and identify the challenges to detecting these risks in the first place. This session demonstrates how the AWS Cloud supports the MBCA to deliver advanced data analytics, provide consistent operating models across financial institutions, reduce costs, and strengthen national security. Session sponsored by Accenture
In this session, learn how Cox Automotive is using Splunk Cloud for real-time visibility into its AWS and hybrid environments to achieve near instantaneous MTTI, reduce auction incidents by 90%, and proactively predict outages. We also introduce a highly anticipated capability that allows you to ingest, transform, and analyze data in real time using Splunk and Amazon Kinesis Firehose to gain valuable insights from your cloud resources. It's now quicker and easier than ever to gain access to analytics-driven infrastructure monitoring using Splunk Enterprise & Splunk Cloud. Session sponsored by Splunk
Historically, silos of data, analytics, and processes across functions, stages of development, and geography created a barrier to R&D efficiency. Gathering the right data necessary for decision-making was challenging due to issues of accessibility, trust, and timeliness. In this session, learn how Takeda is undergoing a transformation in R&D to increase the speed-to-market of high-impact therapies to improve patient lives. The Data and Analytics Hub was built, with Deloitte, to address these issues and support the efficient generation of data insights for functions such as clinical operations, clinical development, medical affairs, portfolio management, and R&D finance. In the AWS hosted data lake, this data is processed, integrated, and made available to business end users through data visualization interfaces, and to data scientists through direct connectivity. Learn how Takeda has achieved significant time reductions—from weeks to minutes—to gather and provision data that has the potential to reduce cycle times in drug development. The hub also enables more efficient operations and alignment to achieve product goals through cross-functional team accountability and collaboration due to the ability to access the same cross-domain data. Session sponsored by Deloitte
As the nation's only high-speed intercity passenger rail provider, Amtrak needs to know critical information to run its business, such as: Who's onboard any train at any time? How are booking and revenue trending? Amtrak was faced with unpredictable and often slow response times from existing databases, ranging from seconds to hours; existing booking and revenue dashboards were spreadsheet-based and manual; multiple copies of data were stored in different repositories, lacking integration and consistency; and operations and maintenance (O&M) costs were relatively high. Join us as we demonstrate how Deloitte and Amtrak successfully went live with a cloud-native operational database and analytical datamart for near-real-time reporting in under six months. We highlight the specific challenges and the modernization of architecture on an AWS native Platform as a Service (PaaS) solution. The solution includes cloud-native components such as AWS Lambda for microservices, Amazon Kinesis and AWS Data Pipeline for moving data, Amazon S3 for storage, Amazon DynamoDB for a managed NoSQL database service, and Amazon Redshift for near-real-time reports and dashboards. Deloitte's solution enabled “at scale” processing of 1 million transactions/day and up to 2K transactions/minute. It provided flexibility and scalability, largely eliminated the need for system management, and dramatically reduced operating costs. Moreover, it laid the groundwork for decommissioning legacy systems, anticipated to save at least $1M over 3 years. Session sponsored by Deloitte
In this session, we detail Sysco's journey from a company focused on hindsight-based reporting to one focused on insights and foresight. For this shift, Sysco moved from multiple data warehouses to an AWS ecosystem, including Amazon Redshift, Amazon EMR, AWS Data Pipeline, and more. As the team at Sysco worked with Tableau, they gained agile insight across their business. Learn how Sysco decided to use AWS, how they scaled, and how they became more strategic with the AWS ecosystem and Tableau. Session sponsored by Tableau
Learn how customers are leveraging AWS to better position their enterprises for the digital transformation journey. In this session, you hear about: operations and process; the SAP transformation journey, including architecting, migrating, and running SAP on AWS; complete automation and management of the AWS layer using AWS native services; and a customer example. We also discuss the challenges of migration to the cloud and a managed services environment; the benefits to the customer of the new operating model; and lessons learned. By the end of the session, you understand why you should consider AWS for your next SAP platform, how to get there when you are ready, and some best practices for managing your SAP systems on AWS. Session sponsored by DXC Technology
ABD213: How to Build a Data Lake with AWS Glue Data Catalog
As data volumes grow and customers store more data on AWS, they often have valuable data that is not easily discoverable and available for analytics. The AWS Glue Data Catalog provides a central view of your data lake, making data readily available for analytics. We introduce key features of the AWS Glue Data Catalog and its use cases. Learn how crawlers can automatically discover your data, extract relevant metadata, and add it as table definitions to the AWS Glue Data Catalog. We will also explore the integration between AWS Glue Data Catalog and Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum.
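Registering a crawler against an S3 path is the first step in the workflow described above. The sketch below builds the arguments for the `create_crawler` call in boto3's Glue client; the crawler name, role ARN, bucket path, and database name are all hypothetical, and the actual API calls are left commented out since they require AWS credentials.

```python
def crawler_config(name, role_arn, s3_path, database):
    """Build the keyword arguments for glue.create_crawler (illustrative names)."""
    return {
        "Name": name,
        "Role": role_arn,
        "DatabaseName": database,
        "Targets": {"S3Targets": [{"Path": s3_path}]},
        # Run nightly so newly arrived data is discovered automatically.
        "Schedule": "cron(0 2 * * ? *)",
    }


cfg = crawler_config(
    "sales-crawler",
    "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "s3://my-data-lake/sales/",
    "analytics",
)
# glue = boto3.client("glue")
# glue.create_crawler(**cfg)
# glue.start_crawler(Name=cfg["Name"])
```

Once the crawler runs, the inferred table definitions appear in the Data Catalog and are immediately queryable from Athena, EMR, and Redshift Spectrum.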
With customers demanding relevant and real-time experiences across a range of devices, digital businesses are looking to gather user data at scale, understand this data, and respond to customer needs instantly. This requires tools that can record large volumes of user data in a structured fashion, and then instantly make this data available to generate insights. In this session, we demonstrate how you can use Amazon Pinpoint to capture user data in a structured yet flexible manner. Further, we demonstrate how this data can be set up for instant consumption using services like Amazon Kinesis Firehose and Amazon Redshift. We walk through example data based on real world scenarios, to illustrate how Amazon Pinpoint lets you easily organize millions of events, record them in real-time, and store them for further analysis.
Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), and other processing. In this session, we introduce Kinesis Video Streams and its key features, and review common use cases including smart home, smart city, industrial automation, and computer vision. We also discuss how you can use the Kinesis Video Streams parser library to work with the output of video streams to power popular deep learning frameworks. Lastly, Abeja, a leading Japanese artificial intelligence (AI) solutions provider, talks about how they built a deep-learning system for the retail industry using Kinesis Video Streams to deliver a better shopping experience.
Reducing the time to get actionable insights from data is important to all businesses, and customers who employ batch data analytics tools are exploring the benefits of streaming analytics. Learn best practices to extend your architecture from data warehouses and databases to real-time solutions. Learn how to use Amazon Kinesis to get real-time data insights and integrate them with Amazon Aurora, Amazon RDS, Amazon Redshift, and Amazon S3. The Amazon Flex team describes how they used streaming analytics in their Amazon Flex mobile app, used by Amazon delivery drivers to deliver millions of packages each month on time. They discuss the architecture that enabled the move from a batch processing system to a real-time system, overcoming the challenges of migrating existing batch data to streaming data, and how to benefit from real-time analytics.
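A producer feeding a pipeline like this shapes events into `PutRecords` entries for Kinesis. The sketch below is a minimal, hypothetical example loosely inspired by the delivery-driver scenario above; the `driver_id` field, stream name, and event shape are assumptions, and the boto3 call is commented out since it requires AWS credentials.

```python
import json


def to_put_records(events, key_field="driver_id"):
    """Shape events into Kinesis PutRecords entries (field names illustrative)."""
    return [
        {
            "Data": json.dumps(event).encode("utf-8"),
            # The partition key controls shard assignment; pick a
            # high-cardinality field so load spreads evenly across shards.
            "PartitionKey": str(event[key_field]),
        }
        for event in events
    ]


entries = to_put_records([{"driver_id": 7, "status": "delivered"}])
# boto3.client("kinesis").put_records(StreamName="flex-events", Records=entries)
```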
IoT and big data have made their way out of industrial applications, general automation, and consumer goods, and are now a valuable tool for improving consumer engagement across a number of industries, including media, entertainment, and sports. The low cost and ease of implementation of AWS analytics services and AWS IoT have allowed AGT, a leader in IoT, to develop their IoTA analytics platform. Using IoTA, AGT brought a tailored solution to EuroLeague Basketball for real-time content production and fan engagement during the 2017-18 season. In this session, we take a deep dive into how this solution is architected for secure, scalable, and highly performant data collection from athletes, coaches, and fans. We also talk about how the data is transformed into insights and integrated into a content generation pipeline. Lastly, we demonstrate how this solution can be easily adapted for other industries and applications.
Where are you on the spectrum of IT leaders? Are you confident that you're providing the technology and solutions that consistently meet or exceed the needs of your internal customers? Do your peers at the executive table see you as an innovative technology leader? Innovative IT leaders understand the value of getting data and analytics directly into the hands of decision makers, and into their own. In this session, Daren Thayne, Domo's Chief Technology Officer, shares how innovative IT leaders are helping drive a culture change at their organizations. See how transformative it can be to have real-time access to all of the data that is relevant to YOUR job (including a complete view of your entire AWS environment), as well as understand how it can help you lead the way in applying that same pattern throughout your entire company. Session sponsored by Domo
Companies of all sizes are looking for technology to efficiently leverage data and their existing IT investments to stay competitive and understand where to find new growth. Regardless of where companies are in their data-driven journey, they face greater demands for information by customers, prospects, partners, vendors and employees. All stakeholders inside and outside the organization want information on-demand or in “real time”, available anywhere on any device. They want to use it to optimize business outcomes without having to rely on complex software tools or human gatekeepers to relevant information. Learn how IT innovators at companies such as MasterCard, Jefferson Health, and TELUS are using Domo's Business Cloud to help their organizations more effectively leverage data at scale. Session sponsored by Domo
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. In this session, we present an end-to-end streaming data solution using Kinesis Streams for data ingestion, Kinesis Analytics for real-time processing, and Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system.
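The core of a Kinesis Analytics application is a windowed SQL query over the stream. As a hedged illustration of what a tumbling-window `COUNT(*)` computes, here is a pure-Python stand-in; it is not the service's SQL, just a local model of the window semantics, with timestamps assumed to be epoch seconds.

```python
from collections import defaultdict


def tumbling_window_counts(records, window_seconds=60):
    """Local model of a tumbling-window COUNT(*) over (timestamp, payload) pairs."""
    windows = defaultdict(int)
    for ts, _payload in records:
        # Floor each timestamp to the start of its non-overlapping window.
        windows[ts - (ts % window_seconds)] += 1
    return dict(windows)
```

In the managed service, the equivalent query groups by `FLOOR(rowtime TO MINUTE)` and emits one result row per window, which Kinesis Firehose can then persist to S3 or Redshift.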
In this session, we use Apache web logs as an example and show you how to build an end-to-end analytics solution. First, we cover how to configure an Amazon ES cluster and ingest data using Amazon Kinesis Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data. Then we demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we review approaches for generating custom, ad-hoc reports.
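Two of the sizing decisions mentioned above can be sketched in a few lines: daily index rotation (the naming convention commonly used when Firehose delivers into Amazon ES) and a rule-of-thumb shard count. The target shard size below is an assumption for illustration, not an official limit.

```python
import datetime


def rotated_index(prefix, day):
    """Daily index name in the rotation style commonly used with Amazon ES."""
    return f"{prefix}-{day:%Y.%m.%d}"


def shard_count(daily_gb, target_shard_gb=30):
    """Rule-of-thumb primary shard count: keep each shard a few tens of GB
    (the 30 GB target here is an assumption, not a service limit)."""
    return max(1, -(-daily_gb // target_shard_gb))  # ceiling division


idx = rotated_index("apache-logs", datetime.date(2017, 11, 27))
```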
Sysco has nearly 200 operating companies across its multiple lines of business throughout the United States, Canada, Central/South America, and Europe. As the global leader in food services, Sysco identified the need to streamline the collection, transformation, and presentation of data produced by the distributed units and systems into a central data ecosystem. Sysco's Business Intelligence and Analytics team addressed these requirements by creating a data lake with scalable analytics and query engines leveraging AWS services. In this session, Sysco will outline their journey from a hindsight-reporting-focused company to an insights-driven organization. They will cover solution architecture, challenges, and lessons learned from deploying a self-service insights platform. They will also walk through the design patterns they used and how they designed the solution to provide predictive analytics using Amazon Redshift Spectrum, Amazon S3, Amazon EMR, AWS Glue, Amazon Elasticsearch Service, and other AWS services.
Most companies are over-run with data, yet they lack critical insights to make timely and accurate business decisions. They are missing the opportunity to combine large amounts of new, unstructured big data that resides outside their data warehouse with trusted, structured data inside their data warehouse. In this session, we take an in-depth look at how modern data warehousing blends and analyzes all your data, inside and outside your data warehouse without moving the data, to give you deeper insights to run your business. We will cover best practices on how to design optimal schemas, load data efficiently, and optimize your queries to deliver high throughput and performance.
Amazon EMR is one of the largest Hadoop operators in the world, enabling customers to run ETL, machine learning, real-time processing, data science, and low-latency SQL at petabyte scale. In this session, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters, and other Amazon EMR architectural best practices. We talk about lowering cost with Auto Scaling and Spot Instances, and security best practices for encryption and fine-grained access control. Finally, we dive into some of our recent launches to keep you current on our latest features.
To meet the needs of the global marketing organization, the AWS marketing analytics team built a scalable platform that allows the data science team to deliver custom econometric and machine learning models for end user self-service. To meet data security standards, we use end-to-end data encryption and different AWS services such as Amazon Redshift, Amazon RDS, Amazon S3, Amazon EMR with Apache Spark and Auto Scaling. In this session, you see real examples of how we have scaled and automated critical analysis, such as calculating the impact of marketing programs like re:Invent and prioritizing leads for our sales teams.
As a leading cloud communications platform, Twilio has always been strongly data-driven. But as headcount and data volumes grew—and grew quickly—they faced many new challenges. One-off, static reports work when you're a small startup, but how do you support a growth stage company to a successful IPO and beyond? Today, Twilio's data team relies on AWS and Looker to provide data access to 700 colleagues. Departments have the data they need to make decisions, and cloud-based scale means they get answers fast. Data delivers real-business value at Twilio, providing a 360-degree view of their customer, product, and business. In this session, you hear firsthand stories directly from the Twilio data team and learn real-world tips for fostering a truly data-driven culture at scale. Session sponsored by Looker
FINRA uses big data and data science technologies to detect fraud, market manipulation, and insider trading across US capital markets. As a financial regulator, FINRA analyzes highly sensitive data, so information security is critical. Learn how FINRA secures its Amazon S3 Data Lake and its data science platform on Amazon EMR and Amazon Redshift, while empowering data scientists with the tools they need to be effective. In addition, FINRA shares AWS security best practices, covering topics such as AMI updates, micro-segmentation, encryption, key management, logging, identity and access management, and compliance.
One of the biggest tradeoffs customers usually make when deploying BI solutions at scale is agility versus governance. Large-scale BI implementations with the right governance structure can take months to design and deploy. In this session, learn how you can avoid making this tradeoff using Amazon QuickSight. Learn how to easily deploy Amazon QuickSight to thousands of users using Active Directory and Federated SSO, while securely accessing your data sources in Amazon VPCs or on-premises. We also cover how to control access to your datasets, implement row-level security, create scheduled email reports, and audit access to your data.
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop, Spark, and data warehouse appliances from on-premises deployments to AWS in order to save costs, increase availability, and improve performance. AWS offers a broad set of analytics services, including solutions for batch processing, stream processing, machine learning, data workflow orchestration, and data warehousing. This session will focus on identifying the components and workflows in your current environment, and will provide best practices for migrating these workloads to the right AWS data analytics product. We will cover services such as Amazon EMR, Amazon Athena, Amazon Redshift, Amazon Kinesis, and more. We will also feature Vanguard, an American investment management company based in Malvern, Pennsylvania with over $4.4 trillion in assets under management. Ritesh Shah, Sr. Program Manager for Cloud Analytics Program at Vanguard, will describe how they orchestrated their migration to AWS analytics services, including Hadoop and Spark workloads to Amazon EMR. Ritesh will highlight the technical challenges they faced and overcame along the way, as well as share common recommendations and tuning tips to accelerate the time to production.
Organizations need to gain insight and knowledge from a growing number of Internet of Things (IoT), APIs, clickstreams, unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue, cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. We discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Additionally, Merck will share how they built an end-to-end ETL pipeline for their application release management system, and launched it in production in less than a week using AWS Glue.
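A central Glue ETL step is mapping raw source fields onto a cleaned target schema (the `ApplyMapping` transform). As a hedged local model of that idea, without the Glue/Spark runtime, the sketch below renames and casts fields according to a mapping; the field names and mapping shape are illustrative, not Glue's actual API.

```python
def apply_mapping(rows, mapping):
    """Simplified analogue of a Glue field-mapping transform.
    mapping is {source_field: (target_field, caster)} -- an illustrative
    shape, not the AWS Glue ApplyMapping signature."""
    out = []
    for row in rows:
        new = {}
        for src, (dst, cast) in mapping.items():
            if src in row:
                new[dst] = cast(row[src])
        out.append(new)
    return out


cleaned = apply_mapping(
    [{"id": "42", "ts": "2017-11-27"}],
    {"id": ("order_id", int), "ts": ("order_date", str)},
)
```

In a real Glue job, the same rename-and-cast step runs distributed over a Spark DataFrame, triggered on a schedule or by an event such as an S3 object arrival.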
Combining disparate datasets and making them accessible to data scientists and researchers is a prevalent challenge for many organizations, not just in healthcare research. American Heart Association (AHA) has built a data science platform using Amazon EMR, Amazon Elasticsearch Service, and other AWS services, that corrals multiple datasets and enables advanced research on phenotype and genotype datasets, aimed at curing heart diseases. In this session, we present how AHA built this platform and the key challenges they addressed with the solution. We also provide a demo of the platform, and leave you with suggestions and next steps so you can build similar solutions for your use cases.
Learn how to architect a data lake where different teams within your organization can publish and consume data in a self-service manner. As organizations aim to become more data-driven, data engineering teams have to build architectures that can cater to the needs of diverse users - from developers, to business analysts, to data scientists. Each of these user groups employs different tools, has different data needs, and accesses data in different ways. In this talk, we will dive deep into assembling a data lake using Amazon S3, Amazon Kinesis, Amazon Athena, Amazon EMR, and AWS Glue. The session will feature Mohit Rao, Architect and Integration lead at Atlassian, the maker of products such as JIRA, Confluence, and Stride. First, we will look at a couple of common architectures for building a data lake. Then we will show how Atlassian built a self-service data lake, where any team within the company can publish a dataset to be consumed by a broad set of users.
At Netflix, we have traditionally approached cloud efficiency from a human standpoint, whether it be in-person meetings with the largest service teams or manually flipping reservations. Over time, we realized that these manual processes are not scalable as the business continues to grow. Therefore, in the past year, we have focused on building out tools that allow us to make more insightful, data-driven decisions around capacity and efficiency. In this session, we discuss the DIY applications, dashboards, and processes we built to help with capacity and efficiency. We start at the ten thousand foot view to understand the unique business and cloud problems that drove us to create these products, and discuss implementation details, including the challenges encountered along the way. Tools discussed include Picsou, the successor to our AWS billing file cost analyzer; Libra, an easy-to-use reservation conversion application; and cost and efficiency dashboards that relay useful financial context to 50+ engineering teams and managers.
Over 100 million subscribers from over 190 countries enjoy the Netflix service. This leads to over a trillion events, amounting to 3 PB, flowing through the Keystone infrastructure to help improve customer experience and glean business insights. The self-serve Keystone stream processing service processes these messages in near real-time with at-least-once semantics in the cloud. This enables users to focus on extracting insights, not on building out scalable infrastructure. In this session, I share the benefits and our experience building the platform.
In this session, we discuss the latest features of Amazon Redshift and Redshift Spectrum, and take a deep dive into its architecture and inner workings. We share many of the recent availability, performance, and management enhancements and how they improve your end user experience. You also hear from 21st Century Fox, who presents a case study of their fast migration from an on-premises data warehouse to Amazon Redshift. Learn how they are expanding their data warehouse to a data lake that encompasses multiple data sources and data formats. This architecture helps them tie together siloed business units and get actionable 360-degree insights across their consumer base.
Amazon's consumer business continues to grow, and so does the volume of data and the number and complexity of the analytics done in support of the business. In this session, we talk about how Amazon.com uses AWS technologies to build a scalable environment for data and analytics. We look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel, scalable compute engines such as Amazon EMR and Amazon Redshift.
Today, many architects and developers are looking to build solutions that integrate batch and real-time data processing, and deliver the best of both approaches. Lambda architecture (not to be confused with the AWS Lambda service) is a design pattern that leverages both batch and real-time processing within a single solution to meet the latency, accuracy, and throughput requirements of big data use cases. Come join us for a discussion on how to implement Lambda architecture (batch, speed, and serving layers) and best practices for data processing, loading, and performance tuning.
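The defining step of a Lambda architecture is the serving layer, which answers queries by combining the precomputed batch view with the real-time increments accumulated by the speed layer since the last batch run. A minimal sketch of that merge, assuming both views are simple key-to-count mappings (an illustrative simplification):

```python
def merge_views(batch_view, speed_view):
    """Serving-layer query: batch results plus real-time deltas since last batch."""
    merged = dict(batch_view)
    for key, delta in speed_view.items():
        # The speed layer holds only increments; add them to the batch baseline.
        merged[key] = merged.get(key, 0) + delta
    return merged
```

On AWS, the batch view might be produced by Amazon EMR and the speed view by Amazon Kinesis; when the next batch run completes, the speed layer's deltas for that period are discarded and the cycle repeats.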
Expedia uses Amazon Elasticsearch Service (Amazon ES) for a variety of mission-critical use cases, ranging from log aggregation to application monitoring and pricing optimization. In this session, the Expedia team reviews how they use Amazon ES and Kibana to analyze and visualize Docker startup logs, AWS CloudTrail data, and application metrics. They share best practices for architecting a scalable, secure log analytics solution using Amazon ES, so you can add new data sources almost effortlessly and get insights quickly.
Amazon Kinesis Analytics offers a built-in machine learning algorithm that you can use to easily detect anomalies in your VPC network traffic and improve security monitoring. Join us for an interactive discussion on how to stream your VPC Flow Logs to Amazon Kinesis Streams and identify anomalies using Kinesis Analytics.
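To make the idea concrete without the managed service, here is a deliberately simplified z-score detector over a batch of per-interval byte counts. Note this is a stand-in for illustration only: Kinesis Analytics' built-in algorithm is the `RANDOM_CUT_FOREST` SQL function, not a z-score, and the threshold below is an assumption.

```python
import statistics


def zscore_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean.
    A simplified stand-in; the managed service uses RANDOM_CUT_FOREST."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no spread, nothing to flag
    return [v for v in values if abs(v - mean) / stdev > threshold]
```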
Agility is the cornerstone of the DevOps movement. Developers are working to continuously integrate and deploy (CI/CD) code to the cloud, to ensure applications are seamlessly updated and current. But what about security? Security best practices and compliance are now the responsibility of everyone in the development lifecycle, and continuous security is a critical component of the ongoing deployment process. Discover how to incorporate security best practices into your current DevOps operations, gain visibility into compliance posture, and identify potential risks and threats in your AWS environment. We demonstrate how to leverage the CIS AWS Foundations Benchmark within Sumo Logic to trigger alerts from your AWS CloudTrail and Amazon CloudWatch logs when risks or violations occur, such as unauthorized API calls, IAM policy changes, AWS Config configuration changes, and many more. Session sponsored by Sumo Logic
MirrorWeb offers automated website and social media archiving services with full text search capability for all content. The UK government hired MirrorWeb to provide search services across 20 years of archived data from over 4,800 websites. In this session, MirrorWeb discusses the technology stack they built using Amazon Elasticsearch Service (Amazon ES) to search across the 333 million unique documents (over 120 TB) that they indexed within a 10-hour period. They discuss how they moved data from on-premises to Amazon S3 using AWS Snowball and then processed that data using Amazon EC2 Spot Instances, reducing costs by over 90%. They also talk about how they used AWS Lambda to ingest data into Amazon ES. Finally, they share best practices for building a large-scale document search architecture.
ABD339: Deep Dive and Best Practices for Amazon Athena
Amazon Athena is an interactive query service that enables you to process data directly from Amazon S3 without the need for infrastructure. Since its launch at re:Invent 2016, several organizations have adopted Athena as the central tool to process all their data. In this talk, we dive deep into the most common use cases, including working with other AWS services. We review the best practices for creating tables and partitions and performance optimizations. We also dive into how Athena handles security, authorization, and authentication. Lastly, we hear from a customer who has reduced costs and improved time to market by deploying Athena across their organization.
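Partitioning is the single most effective of the practices above: laying data out under Hive-style `key=value` prefixes lets Athena prune S3 scans. The sketch below builds the `ALTER TABLE ... ADD PARTITION` DDL for one daily partition; the table, bucket, and prefix names are hypothetical.

```python
def add_partition_ddl(table, bucket, prefix, dt):
    """DDL registering one Hive-style daily partition with Athena
    (table/bucket/prefix names are illustrative)."""
    location = f"s3://{bucket}/{prefix}/dt={dt}/"
    return (
        f"ALTER TABLE {table} ADD IF NOT EXISTS "
        f"PARTITION (dt='{dt}') LOCATION '{location}'"
    )


ddl = add_partition_ddl("events", "my-data-lake", "events", "2017-11-27")
```

Queries that filter on `dt` then read only the matching prefixes, which cuts both latency and Athena's per-byte-scanned cost.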
Thousands of services work in concert to deliver millions of hours of video streams to Netflix customers every day. These applications vary in size, function, and technology, but they all make use of the Netflix network to communicate. Understanding the interactions between these services is a daunting challenge both because of the sheer volume of traffic and the dynamic nature of deployments. In this session, we first discuss why Netflix chose Kinesis Streams to address these challenges at scale. We then dive deep into how Netflix uses Kinesis Streams to enrich network traffic logs and identify usage patterns in real time. Lastly, we cover how Netflix uses this system to build comprehensive dependency maps, increase network efficiency, and improve failure resiliency. From this session, you'll learn how to build a real-time application monitoring system using network traffic logs and get real-time, actionable insights.
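The enrichment step described above amounts to joining raw flow-log records against a service registry; a stream consumer would apply something like this to each Kinesis record before re-publishing it. The registry contents and field names below are made up for illustration.

```python
# Illustrative enrichment step: attach service names to flow-log records
# by IP lookup, as a Kinesis consumer might before re-publishing.
# SERVICE_REGISTRY entries are hypothetical.

SERVICE_REGISTRY = {"10.0.0.5": "api-gateway", "10.0.1.9": "playback"}

def enrich(record):
    """Return a copy of the record with source/destination service names."""
    return {**record,
            "src_service": SERVICE_REGISTRY.get(record["src_ip"], "unknown"),
            "dst_service": SERVICE_REGISTRY.get(record["dst_ip"], "unknown")}

print(enrich({"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "bytes": 1400}))
```

Aggregating the enriched records by `(src_service, dst_service)` is what yields the dependency map the session describes.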
Petabyte-scale archives of satellite, plane, and drone imagery continue to grow exponentially. They mostly exist as semi-structured data, but they are only valuable when accessed and processed by a wide range of products for both visualization and analysis. This session provides an overview of how ArcGIS indexes and structures data so that any part of it can be quickly accessed, processed, and analyzed by reading only the minimum amount of data needed for the task. In this session, we share best practices for structuring and compressing massive datasets in Amazon S3 so they can be analyzed efficiently. We also review a number of different image formats, including GeoTIFF (used for the Public Datasets on AWS program, Landsat on AWS), cloud-optimized GeoTIFF, MRF, and CRF, as well as different compression approaches to show the effect on processing performance. Finally, we provide examples of how this technology has been used to help image processing and analysis for the response to Hurricane Harvey.
In this session, we focus on common use cases and design patterns for predictive analytics using Amazon EMR. We address accessing data from a data lake, extraction and preprocessing with Apache Spark, analytics and machine learning code development with notebooks (Jupyter, Zeppelin), and data visualization using Amazon QuickSight. We cover other operational topics, such as deployment patterns for ad hoc exploration and batch workloads using Spot Instances and multi-user notebooks. The intended audience for this session includes technical users who are building statistical and data analytics models for the business using tools such as Python, R, Spark, Presto, Amazon EMR, and notebooks.
In this session, we will teach you the technology behind Alexa Gadgets – a new category of connected products and developer tools that enable you to create your own Alexa-connected product or game skill that works with Echo Buttons. You will hear from the GM of Alexa Gadgets, as well as early Alexa Gadget developers Musicplode Media (the makers of Beat the Intro) and Gemmy Industries (the makers of Big Mouth Billy Bass).
In this session, we'll teach you how to use the Alexa Voice Service (AVS) and its suite of development tools to bring your first Alexa-enabled product to market. You'll learn how commercial device manufacturers are getting to market faster using the new AVS Device SDK. To ensure your customers have the best voice experience, we'll teach you how to choose an Audio Front End and client-side hardware from a range of commercial-grade Development Kits. You'll walk out of this session with the knowledge required to design products with optimized Alexa-enabled voice experiences around your unique design requirements.
In this presentation, hear from John Rome, Arizona State University's Deputy CIO, and Jared Stein, Instructure's VP of Higher Ed Strategy, on how voice technology is bringing higher education to a new era. Come learn how institutions are adopting Alexa on campus and in their curriculum to serve students in new, innovative ways and how Instructure is rethinking the delivery of education for millions of customers through their Canvas skill for Alexa.
Alexa for Business makes it possible for businesses to create Alexa skills designed specifically for employees or customers. With Alexa for Business, devices can be managed and provisioned to be used by employees in conference rooms, at employees' desks, or around the workplace. You can also create skills that can be used by customers, in places like hotel rooms, restaurants, hospitality suites, or even stores. In this session, we'll provide an overview of Alexa for Business, and show you how Alexa for Business creates business value for both customers and employees.
It used to be the case that we only spoke to computers in their language. But more and more often, we're interacting with them in ours. We are moving quickly into a world of computer conversation, and one in which, for many applications, the most natural interactions will be through spoken language. But how do you create engaging narrative and compelling, organic conversational interactions using the imprecise tools of speech recognition and intent resolution? In this session, we look at the experience as a whole and take you through key learnings that you can use when building your skills. We cover issues like knowing your audience, creating compelling storylines, using a cast of characters, integrating voiceover, designing a soundscape, and finding those “magic moments”. For each of these, we share the design pattern, the backing AI or physiological science, and how to implement the experience with Alexa.
Last year, Capital One joined Alexa on stage to talk about their experience building their successful Alexa skill. Since then, many lessons have been learned through customer feedback and new enhancements to the Alexa Skills Kit (ASK), such as the skills beta testing tool and the Alexa skill builder. How can you evolve your Alexa skill with more meaningful data sets outside of the existing intents? As the Alexa Skills Kit has grown its built-in library, what does it mean for your skill to support both ordinal (list) and numerical values? How can you handle new specifications without requiring wholesale code changes? Capital One has tackled all of these issues, as well as embracing additional programming languages like TypeScript to ensure that response structures are validated against all schemas. With the arrival of multimodal devices such as the Echo Show, the opportunity for seamless customer interaction models across voice and visual has also arrived (big fonts, touch, video). Your customers can now transition back and forth between using their voice and their hands while engaging with your skill. Come learn directly from Capital One the best ways of providing extra contextual information using the new Alexa Skills Kit display directives, giving customers more convenient ways to get things done.
In this advanced session, learn how to build Alexa-enabled devices that combine voice and visual responses in a meaningful way for consumers. The session covers the design methods and the hardware and software development resources for interactive multi-modal design. We also present some examples of products that are leading with such implementations.
Garbage in, garbage out. The quality of all machine learning solutions depends on the data used in training. Alexa developers are able to use advanced natural language understanding capabilities like built-in slot and intent training, entity resolution, and dialog management. The utterance data behind your skills is the most important contributor to the voice input experience. This session discusses how utterance data is processed by our systems and what you can do as a developer to improve accuracy.
In this session, scientists from the Alexa team explore and discuss some of the AI challenges behind the Alexa Prize. Learn about the challenges of Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and conversational interaction through stories from the founding members of the team that also built Amazon Echo and Alexa. We'll address the early difficulties of designing the algorithms for noise reduction for close-talk, near field, and far-field Alexa devices, and methods and frameworks they use for ASR, NLU and conversational interaction.
Join us for the Golden Age of AI. The way that humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Learn how Amazon is using machine learning and cloud computing to help fuel innovation in AI, making Alexa smarter every day. Alexa VP and Head Scientist Rohit Prasad presents the state of the science behind Amazon Alexa. He addresses advances in spoken language understanding and machine learning in Alexa, and shares how Amazon thinks about building the next generation of user experiences. He will announce the inaugural winner of the Alexa Prize and award the winning student team a check for $500,000.
Your Alexa skill could become the voice of your company to customers. How do you make sure that it conveys rich information, delivered with your brand's personality? In this session, Adam Long, VP of Product Management at Automated Insights, discusses natural language generation (NLG) techniques and how to make your Alexa response more insightful and engaging. Rob McCauley, Solutions Architect with Amazon Alexa, shows you how to put those techniques into action.
Join Alexa SVP Tom Taylor as we cover the state of the Alexa business, describe some early challenges, and share how we are approaching emerging trends. Voice experiences have transformed the way that customers interact with the world around them. We will introduce new capabilities to help developers better address opportunities in devices, the smart home, and voice. You will leave with an understanding of the vision behind Alexa that ties together the deep dives going on throughout re:Invent.
This session covers the technical and design challenges that the Earplay team overcame when they built their highly engaging Alexa experience. Leave this session with an understanding of how to use the Alexa Service, AWS Lambda, Amazon DynamoDB, SSML, and testing tools to deliver similar experiences to your customers.
In this session, we will give you a complete picture of all the tools and techniques required to build complex, production-quality Alexa skills. You will leave this session knowing how to use Alexa's dialog management, entity resolution, and slot elicitation capabilities as well as how to process the results through a microservice with AWS Lambda.
In this session, we cover Alexa's reach into smart devices integration, both inside and outside the home. Learn how your product can become part of the Alexa smart devices family and how you can easily bring Alexa to your business or home.
Automotive & Manufacturing
Manufacturing companies collect vast troves of process data for tracking purposes. Using this data with advanced analytics can optimize operations, saving time and money. In this session, we explore the latest analytics capabilities to support your goals for optimizing the manufacturing plant floor. Learn how to build dashboards that connect to prediction models driven by sensors across manufacturing processes. Learn how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, AWS Identity and Access Management, and AWS Lambda. We also review a reference architecture that supports data ingestion, event rules, analytics, and the use of machine learning for manufacturing analytics.
Today's trends in auto technology are all about connecting cars and their occupants to the outside world in a seamless and safe manner. In this session, we discuss how automotive companies are leveraging AWS for a variety of connected vehicle use cases. You'll leave this session with source code, architecture diagrams, and an understanding of how to apply the AWS Connected Vehicle Reference Architecture to build your own prototypes. You'll also learn how car companies can leverage Amazon services such as Alexa and AWS services such as AWS IoT, AWS Greengrass, AWS Lambda, and Amazon API Gateway to rapidly develop and deploy innovative connected vehicle services.
Manufacturing companies in all sectors—including automotive, aerospace, semiconductor, and industrial manufacturing—rely on design and engineering software in their product development processes. These computationally intensive applications—such as computer-aided design and engineering (CAD and CAE), electronic design automation (EDA), and other performance-critical applications—require immense scale and orchestration to meet the demands of today's manufacturing requirements. In this session, you learn how to achieve the maximum possible performance and throughput from design and engineering workloads running on Amazon EC2, elastic GPUs, and managed services such as AWS Batch and Amazon AppStream 2.0. We demonstrate specific optimization techniques and share samples on how to accelerate batch and interactive workloads on AWS. We also demonstrate how to extend and migrate on-premises, high performance compute workloads with AWS, and use a combination of On-Demand Instances, Reserved Instances, and Spot Instances to minimize costs.
Over the next decade, accelerating autonomous driving technology—including advances in artificial intelligence, sensors, cameras, radar, and data analytics—is set to transform how we commute. In this session, you learn how to use Amazon AI for a highly productive, on-demand, and scalable autonomous driving development environment. We compare the most popular AI frameworks, including TensorFlow and MXNet, for use in autonomous driving workloads. You learn about the AWS optimizations on MXNet that yield near-linear scalability for training deep neural networks and convolutional neural networks. We demonstrate the ease of getting started on AWS AI by using a sample training dataset for building an object detection model on AWS. This session is intended for audiences who have some exposure to the underlying concepts for AI-based autonomous driving development. After attending the session, you can get started with AI development on AWS by using a sample dataset for building an object detection model.
Cloud computing gives you a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from one to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
This presentation compares three modern architecture patterns that startups are building their businesses around. It includes a realistic analysis of cost, team management, and security implications of each approach. It covers AWS Elastic Beanstalk, Amazon ECS, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and Amazon CloudFront. Attendees will also hear from venture capital investor Third Rock Ventures (TRV), which has launched 40+ biotech startups over the last 10 years. TRV will outline how it launches cloud native startups that turn bleeding-edge science into new treatments across the spectrum of disease, with highlights drawn from Relay Therapeutics and Tango Therapeutics.
Creating a comprehensive, accelerated cloud strategy for a complex or federated organization requires a disciplined approach—one that balances the need for centralized governance with the opportunity to innovate across all engineering segments within the enterprise. In this session, we follow the Walt Disney Company's journey to create an initial cloud value hypothesis and cloud business case, and then develop a structured approach towards cloud migrations and a "cloud-first" operating model. Attendees learn more about the key implications, risks, and considerations of the company's cloud transformation program; see examples of reference architectures and implementation guides; and understand the required activities that contributed to the success of the program. The patterns presented are broadly applicable to complex organizations with global aspirations to make the journey to the Cloud. Session sponsored by Accenture
Applications running in a typical data center are static entities. But applications aren't static in the cloud. Dynamic scaling and resource allocation is the norm on AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility in building dynamic applications and with this flexibility comes an opportunity to learn how an enterprise application functions optimally. New Relic helps manage these applications without sacrificing simplicity. In this session, we discuss changes in monitoring dynamic cloud resources. We'll share best practices we've learned working with New Relic customers on managing applications running in this environment to understand and optimize how they are performing. Session sponsored by New Relic
At Netflix, we make explicit tradeoffs to balance our four key focus domains of innovation, reliability, security, and efficiency to ensure our customers, shareholders, and internal engineering stakeholders are happy. In this talk, learn the strategies behind each of our focus domains to optimize for one without detracting from another.
Netflix is a large, ever-changing ecosystem serving millions of customers across the globe through cloud-based systems and a globally distributed CDN. This entertaining romp through the tech stack serves as an introduction to how we think about and design systems, the Netflix approach to operational challenges, and how other organizations can apply our thought processes and technologies. In this session, we discuss the technologies used to run a global streaming company, scaling at scale, billions of metrics, benefits of chaos in production, and how culture affects your velocity and uptime.
Many companies use Amazon Simple Email Service (Amazon SES) to build applications that enable their users to send millions of emails every day. In this session, you learn how to build applications using the scalable, reliable Amazon SES infrastructure. You also learn how to monitor email sending and enforce compliance rules on individual accounts without impacting other accounts. Zendesk discusses the architecture of its multitenant email sending platform, the historical challenges it faced, its phased approach to platform migration, and the ways Amazon SES helped them meet their goals.
Startups and enterprises are increasingly using open source projects for architectures. AWS customers and partners also run their own open source programs and contribute key technologies to the industry (see DCS201). At AWS, we engage with open source projects in several ways. Through bug fixes and enhancements to popular projects, including work with the Hadoop ecosystem (see BDM401), Chromium (see BAP305) and Boto, and standalone projects like the security library s2n (see NET405) and machine learning project MXNet (see MAC401). We have services like Amazon ECS for Docker (see CON316) and Amazon RDS for MySQL and PostgreSQL (see DAT305) that make open source easier to use. In this session, learn more about existing AWS open source work and our next steps.
As one of the thought leaders in Expedia's cloud migration, the Expedia Global Payments Business Intelligence group architected, designed, and built a complete cloud data mart solution from the ground up using AWS and Tableau Online. In this session, we discuss our business challenge, the journey to the solution, the high-level technical architecture (using Amazon S3, Amazon EMR, data pipelines, Amazon Redshift, and Tableau Online), and lessons learned along the way, including best practices and optimization methods. Session sponsored by Tableau
Many industries are going through a digital transformation as their existing business models are being disrupted and new competitors emerge. The key driver is a need for faster time-to-value, as a direct relationship with customers provides analytics that drive personalization and rapid product development. There's a cultural aspect to the change, as well as new organizational patterns that go along with a migration to cloud native services. Application architectures are evolving from monoliths to microservices and serverless deployments, and they are becoming more distributed, highly available, and resilient. The highly automated practices that have built up around DevOps are moving to the mainstream, and some new techniques are emerging around security red teams and chaos engineering.
Whether you're a cash-strapped startup or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers. We cover how to effectively combine Amazon EC2 On-Demand, Reserved, and Spot Instances to handle different use cases; leveraging Auto Scaling to match capacity to workload; and choosing the optimal instance type through load testing. We discuss taking advantage of tiered storage and caching, offloading content to Amazon CloudFront to reduce back-end load, and getting rid of your back end entirely by going serverless. Even if you already enjoy the benefits of serverless architectures, we show you how to select the optimal AWS Lambda memory class and how to maximize networking throughput in order to minimize Lambda run time and therefore execution cost. We also showcase simple tools to help track and manage costs, including Cost Explorer, billing alerts, and AWS Trusted Advisor. This session is your pocket guide for running cost effectively in the AWS Cloud.
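The Lambda memory-class advice comes down to arithmetic: cost scales with GB-seconds, so a larger memory size only pays off if it shortens the run more than proportionally. A back-of-the-envelope model using the published $0.00001667 per GB-second rate; the durations are hypothetical measurements, and the per-request fee and billing-increment rounding are ignored for simplicity.

```python
# Back-of-the-envelope Lambda cost comparison across memory classes.
# Uses the published $0.00001667 per GB-second rate; per-request fees
# and billing-increment rounding are deliberately ignored.

GB_SECOND_PRICE = 0.00001667

def invocation_cost(memory_mb, duration_ms):
    """Approximate compute cost of one invocation."""
    return (memory_mb / 1024) * (duration_ms / 1000) * GB_SECOND_PRICE

# 512 MB for 800 ms vs. 1024 MB for 350 ms: the bigger memory class wins
# here because the run time more than halved.
small = invocation_cost(512, 800)   # 0.4 GB-seconds
large = invocation_cost(1024, 350)  # 0.35 GB-seconds
print(small > large)  # → True
```

The same comparison, run against measured durations for each memory class, is how you pick the cheapest configuration for a given function.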
As more customers adopt Amazon VPC architectures, the features and flexibility of the service are squaring off against evolving design requirements. This session follows this evolution of a single regional VPC into a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to Amazon S3, managing multi-tenant VPCs, integrating existing customer networks through AWS Direct Connect, and building a full VPC mesh network across global regions.
As online engagement has grown as a means of entertainment, a wide range of online communities have become popular. These communities need to be highly available and resilient at scale; an availability failure can be fatal to the products customers rely on. We will share the process you should use to develop architectural principles that allow you to reap the benefits of reduced complexity.
Migrating workloads to the cloud requires detailed planning and execution. When you're an established business with users that rely on your cloud workloads, this can seem like an insurmountable task. A complete migration is a victory often celebrated as the end of the journey, when in reality it is just the first step in a process of continual optimization and evolution. To truly optimize the power of AWS and reap the financial and performance benefits of cloud computing, it is critical that you evaluate your workloads for opportunities to continue to evolve to drive business value and embrace the innovative nature of the AWS Cloud. In this session, join Rackspace to learn about key components to consider in order to execute a successful migration to AWS, the importance of optimizing your AWS environment over time, and how customers are leveraging Rackspace's Fanatical Support for AWS to help them migrate and transform their workloads on AWS. Session sponsored by Rackspace
With more companies entering the OTT market, AWS sees customer demand for ways to decrease the time it takes to get content into their users' hands, while increasing operational efficiency and lowering IT infrastructure costs. Using deep learning-based image analysis can provide users actionable feedback about the content they view. When combining a new serverless architecture approach using Amazon Elastic Transcoder with AWS' deep learning technology Amazon Rekognition, companies can provide near real-time, on-demand encoding of assets and content moderation. This session covers serverless versus virtualized infrastructure, handling encoding jobs with AWS Lambda, encoding dynamic media assets with Elastic Transcoder (or Elemental), moderating content with Amazon Rekognition, and storing metadata with Amazon DynamoDB. We also provide a demo to test a production-ready serverless encoding architecture.
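The Lambda step in this pipeline essentially maps an S3 "object created" event to a transcode job. A sketch of that mapping with a hypothetical job-spec shape (the field names are illustrative, not the Elastic Transcoder API):

```python
# Hypothetical shape of the Lambda handler step: turn an S3 object-created
# event into a transcode-job specification. The job-spec field names are
# illustrative, not the Elastic Transcoder API.

def build_job(s3_event):
    """Produce one output rendition per target height for the uploaded asset."""
    record = s3_event["Records"][0]["s3"]
    key = record["object"]["key"]
    base = key.rsplit(".", 1)[0]
    return {"input": key,
            "outputs": [{"key": f"{base}-{h}p.mp4", "height": h}
                        for h in (480, 720, 1080)]}

event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "trailer.mov"}}}]}
print(build_job(event))
```

In the architecture described above, a real handler would submit this spec to the encoding service and, once frames are available, pass them to Amazon Rekognition for moderation.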
Learn how Netflix efficiently manages costs associated with 150K instances spread across multiple regions and heterogeneous workloads. By leveraging internal Netflix tools, the Netflix capacity team is able to provide deep insights into how to optimize our end users' workload placements based on financial and business requirements. In this session, we discuss the efficiency strategies and practices we picked up operating at scale using AWS since 2011, along with best practices used at Netflix. Because many of our strategies revolve around Reserved Instances, we focus on the evolution of our Reserved Instance strategy and the recent changes after the launch of regional reservations. Regional Reserved Instances provide tremendous financial flexibility by being agnostic to instance size and Availability Zone. However, it's anything but simple to adopt regional Reserved Instances in an environment with over 1,000 services that have varying degrees of criticality combined with a global failover strategy.
Blockchain has become a hot topic for enterprises, start-ups, entrepreneurs, and regulatory bodies. Born from bitcoin in 2008, blockchain's promise of a distributed ledger has far greater implications than cryptocurrency. Companies are now beginning to understand its disruptive potential and are experimenting with its most promising applications. But, few companies have asked the more fundamental question: Are we ready to adopt a shared public database for financial transactions? In this session, we cover the concepts of blockchain and use cases in the enterprise. We also demonstrate blockchain in use and show how to implement it using AWS services.
Bots are leading the next disruptive wave of how people and companies communicate. Companies can use bots for internal communications, such as facilities management or support, or for external communications, such as selling products, helping customers with searches, and acting as a trusted advisor in other ways. In this session, we show how easy it is to deploy a bot and how it improves customer interactions. Further, most bot solutions operate with a single language. We show how to build a language-agnostic bot solution using AWS Lambda and other AWS services.
Fed up with stop and go in your data center? Shift into overdrive and pull into the fast lane! Learn how AutoScout24, the largest online car marketplace Europe-wide, is building its Autobahn in the Cloud. The secret ingredient? Culture! Because "cloud" is only half of the digital transformation story. The other half is how your organization deals with cultural change as you transition from the old world of IT into building microservices on AWS, with agile DevOps teams in a true "you build it, you run it" fashion. Listen to stories from the trenches, powered by Amazon Kinesis, Amazon DynamoDB, AWS Lambda, Amazon ECS, Amazon API Gateway and much more, backed by AWS Partners, AWS Professional Services, and AWS Enterprise Support. Learn how to become cloud native, evolve your architecture, drive cultural change across teams, and manage your company's transformation for the future.
In this session, go on a journey from traditional, on-premises applications and architecture to pure cloud-native environments. This transformative approach highlights the steps required to incrementally move to AWS technologies while increasing resiliency and efficiency and reducing operational overhead. We challenge traditional understanding and show you how different types of workloads can be migrated using real-world examples. Additionally, we demonstrate how you can assemble and use the AWS building blocks available today to bolster your success and position yourself to inherit the power of our managed services, such as Amazon API Gateway, AWS Lambda, Amazon Cognito, Amazon S3, Amazon Simple Queue Service (SQS), Amazon SNS and our AWS CodeStar suite. You leave this session armed with the knowledge you need to begin your own voyage towards serverless architecture.
Cloud is the new normal, and organizations are deploying different types of workloads on AWS. Understanding the performance efficiency and overall application performance is critical to ensuring that you can scale your workload to meet the demands of your customers. Understanding how well your application performs over time helps you to continuously improve and innovate your software to get the most out of the AWS platform. If you aren't measuring custom application metrics, you are operating your software blindly and cannot pinpoint areas of improvement. Learn how to use Amazon CloudWatch custom metrics, alarms, dashboards, and AWS X-Ray to architect an application monitoring service that provides insight into your workload's performance.
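As a starting point, a custom metric is just a named value with dimensions and a unit. The sketch below shapes one datum the way CloudWatch's `put_metric_data` expects, without sending anything; the metric name and "Service" dimension are hypothetical examples.

```python
# Sketch of shaping one custom application metric in the structure that
# CloudWatch's put_metric_data accepts. Nothing is sent here; the metric
# name and dimension value are hypothetical.

from datetime import datetime, timezone

def metric_datum(name, value, service, unit="Milliseconds"):
    """Build a single MetricData entry for a latency-style metric."""
    return {"MetricName": name,
            "Dimensions": [{"Name": "Service", "Value": service}],
            "Timestamp": datetime.now(timezone.utc),
            "Value": value,
            "Unit": unit}

datum = metric_datum("CheckoutLatency", 182.0, "orders")
print(datum["MetricName"], datum["Value"], datum["Unit"])
# → CheckoutLatency 182.0 Milliseconds
```

A boto3 client would publish a batch of such datums under a namespace of your choosing, and an alarm on the metric closes the monitoring loop.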
In this session, we first look at common approaches to refactoring legacy .NET applications to microservices and AWS serverless architectures. We also look at modern approaches to .NET-based architectures on AWS. We then elaborate on running .NET Core microservices in Docker containers natively on Linux in AWS while examining the use of the AWS SDK and the .NET Core platform. We also look at the use of various AWS services, such as Amazon SNS, Amazon SQS, Amazon Kinesis, and Amazon DynamoDB, which provide the backbone of the platform. For example, Experian Consumer Services runs a large ecommerce platform that is now cloud based on AWS. We look at how they went from a monolithic platform to microservices, primarily in .NET Core. With a heavy push to move to Java and open source, we look at the development process, which started in the beta days of .NET Core, and how the direction Microsoft was going allowed them to use existing C# skills while pushing themselves to innovate in AWS. The large, single team of Windows-based developers was broken down into several small teams to allow for rapid development in an all-Linux environment.
Many customers want a disaster recovery environment, and they want to use this environment daily and know that it's in sync with and can support a production workload. This leads them to an active-active architecture. In other cases, users like Netflix and Lyft are distributed over large geographies. In these cases, multi-region active-active deployments are not optional. Designing these architectures is more complicated than it appears, as data being generated at one end needs to be synced with data at the other end. There are also consistency issues to consider. One needs to make trade-off decisions on cost, performance, and consistency. Further complicating matters, the variety of data stores used in the architecture results in a variety of replication methods. In this session, we explore how to design an active-active multi-region architecture using AWS services, including Amazon Route 53, Amazon RDS multi-region replication, AWS DMS, and Amazon DynamoDB Streams. We discuss the challenges, trade-offs, and solutions.
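One of the consistency trade-offs above can be made concrete with a toy last-writer-wins merge: each region tracks (value, timestamp) pairs, and conflicting writes resolve to the newest timestamp. This is only a sketch; real replication (for example, driven by DynamoDB Streams) also needs tie-breaking and delete tombstones.

```python
# Toy last-writer-wins reconciliation between two regional replicas.
# Each replica maps key -> (value, timestamp); conflicts resolve to the
# newest write. The data below is invented for illustration.

def merge(region_a, region_b):
    """Merge two replicas, keeping the latest write for each key."""
    merged = dict(region_a)
    for key, (value, ts) in region_b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

us = {"cart:42": ("2 items", 100), "user:7": ("active", 90)}
eu = {"cart:42": ("3 items", 105)}
print(merge(us, eu))  # cart:42 resolves to the newer EU write
```

The cost of this simplicity is that a concurrent older write is silently discarded, which is exactly the kind of trade-off the session weighs against stronger (and slower) consistency schemes.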
Reinforcement learning (RL) can be used to solve real-world problems in robotics and conversational engines without supervision. AI algorithms that observe their surroundings and learn are considered to be the ultimate forms of AI. RL shines in multi-agent scenarios where each agent reacts in real time to the changing situation. In this session, we explain RL, the theory, and the algorithms used. We show an MXNet-based demo that automatically learns to play a game. We use a game and show how an agent powered by MXNet takes actions to win. Initially, the agent makes very little progress, but after a few dozen iterations, it can play the game better than any human being. You can generalize this to real-world problems. RL is used today in robotics, gaming, autonomous vehicle control, spoken language systems, and many more areas. In this talk, I use Amazon EC2 P2 instances, the AWS Deep Learning AMI, the MXNet deep learning framework, Amazon EBS, and Amazon S3.
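The learning loop behind such a demo can be sketched framework-free (plain Python rather than the session's MXNet setup, so it runs anywhere): tabular Q-learning on a five-state corridor where reward comes only from reaching the right end. The environment and hyperparameters are illustrative.

```python
# Framework-free sketch of an RL training loop: tabular Q-learning on a
# five-state corridor with a reward only at the rightmost state. This is a
# plain-Python stand-in for the session's MXNet game demo.

import random

N_STATES, LEFT, RIGHT = 5, 0, 1
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-values per (state, action)

def greedy(s):
    """Action with the highest Q-value in state s (LEFT breaks ties)."""
    return RIGHT if q[s][RIGHT] > q[s][LEFT] else LEFT

random.seed(0)
for _ in range(2000):                                   # training episodes
    s = 0
    while s < N_STATES - 1:                             # until terminal state
        a = random.randrange(2) if random.random() < EPSILON else greedy(s)
        s2 = s + 1 if a == RIGHT else max(0, s - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0          # reward only at the end
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

# After training, the greedy policy heads right from every non-terminal state.
print([greedy(s) for s in range(N_STATES - 1)])
```

The agent makes little progress early on (it wanders via exploration), then rapidly improves as reward propagates backward through the Q-table, which mirrors the learning curve described above.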
When engineering teams take on a new project, they often optimize for performance, availability, or fault tolerance. More experienced teams can optimize for these variables simultaneously. Netflix adds an additional variable: feature velocity. Most companies try to optimize for feature velocity through process improvements and engineering hierarchy, but Netflix optimizes for feature velocity through explicit architectural decisions. Mental models of approaching availability help us understand the tension between these engineering variables. For example, understanding the distinction between accidental complexity and essential complexity can help you decide whether to invest engineering effort into simplifying your stack or expanding the surface area of functional output. The Chaos team and the Traffic team interact with other teams at Netflix under an assumption of Essential Complexity. Incident remediation, approaches to automation, and diversity of engineering can all be understood through the perspective of these mental models. With insight and diligence, these models can be applied to improve availability over time and drift into success.
Every day, systems architects and cloud architects have to size cloud workloads for performance and efficiency. Do you choose T2, C3, C4, M3, or something else for your Amazon Elastic Compute Cloud (Amazon EC2) instance type? Do you need more CPUs, memory, or both? What about distributed applications across regions and Availability Zones? How do IT teams determine the right instance family and size for AWS workloads? Turbonomic solves these challenges with you. Their real-time hybrid cloud management platform can ensure that your workloads get the right resources in real time to assure performance across the compute, storage, network, application, and database layers of AWS, and across your hybrid cloud infrastructure. Get a crash course in understanding workload performance characteristics, and how Turbonomic matches workloads to AWS resources to assure real-time, efficient performance for your AWS environment, with the ability to fully automate these processes. Whether you're new to the platform or a regular user of Amazon EC2, learn to take the guesswork out of what makes each Amazon EC2 instance family unique and appropriate for your business and technical requirements. Session sponsored by Turbonomic, Inc.
The BBC iPlayer is the biggest audio and video-on-demand service in the UK. Over one-third of the country uses it, submitting 10 million video playback requests every day, and the service publishes over 10,000 hours of media every week. Moving iPlayer to the cloud has enabled the BBC to shorten the time-to-market of content from 10 hours to 15 minutes. In this session, the BBC's lead architect describes the approach behind the iPlayer architecture, which uses Amazon SQS and Amazon SNS in several ways to improve elasticity, reliability, and maintainability. You see how the BBC uses AWS messaging to choreograph the 200 microservices in the iPlayer pipeline, maintain data consistency as media traverses the pipeline, and refresh caches to ensure timely delivery of media to users. This is a rare opportunity to see the internal workings and best practices of one of the largest on-demand content delivery systems operating today.
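As background for the messaging pattern above: when an SNS topic fans out to an SQS queue (with raw message delivery disabled), each queue message carries a JSON envelope with the original payload in its `Message` field. This is a minimal sketch of the unwrapping step a pipeline worker performs before acting on an event; the envelope field names follow the documented SNS format, while the `mediaId`/`step` payload keys and the topic ARN are invented for illustration, not the BBC's actual schema.

```python
import json

def parse_sns_envelope(sqs_body: str) -> dict:
    """Unwrap an SNS notification delivered to an SQS queue."""
    envelope = json.loads(sqs_body)
    if envelope.get("Type") != "Notification":
        raise ValueError("not an SNS notification")
    # The original publisher's payload is a JSON string inside "Message".
    return json.loads(envelope["Message"])

# Simulated SQS message body, as SNS would deliver it (placeholder values):
body = json.dumps({
    "Type": "Notification",
    "TopicArn": "arn:aws:sns:eu-west-1:123456789012:media-events",
    "Message": json.dumps({"mediaId": "b0abc123", "step": "transcode-complete"}),
})
event = parse_sns_envelope(body)
```

In a real worker, `body` would come from an SQS `ReceiveMessage` call, and the parsed event would drive the next stage of the pipeline.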
This talk includes a story and a recipe. The story is about a nerd who bought his first motorbike, got a license for it, and started hacking to make it interact and talk, all in two months. The recipe is a technical one that explains how to use Amazon Lex and AWS Lambda to quickly prototype and deploy a serverless chatbot connected with an embedded device in order to realize an Internet of Things (IoT) application. We discuss how you can integrate your IoT application with Amazon Lex using AWS Lambda and Amazon API Gateway, how to exchange session data to have a contextual conversation, and how to provide a successful bot experience. Expect to leave this session knowing how to build, deploy, and publish a bot, and how to attach it to an IoT device—with the potential to bring to life any object that surrounds you.
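To make the Lex-plus-Lambda glue concrete, here is a minimal sketch of a fulfillment Lambda handler in the Amazon Lex (V1) response format. The `Close` dialog-action shape follows the documented Lex contract; the `CheckBike` intent and `metric` slot are hypothetical names for this motorbike scenario, and the device lookup is stubbed out.

```python
# Minimal AWS Lambda fulfillment handler for an Amazon Lex (V1) bot.
# Intent and slot names below are invented for illustration.

def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"].get("slots") or {}
    if intent == "CheckBike":
        metric = slots.get("metric") or "status"
        # A real bot would query the device (e.g., an AWS IoT device shadow) here.
        content = f"Your bike's {metric} looks fine."
    else:
        content = "Sorry, I don't know that one."
    # "Close" ends the conversation turn with a fulfilled message.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": content},
        }
    }

resp = lambda_handler(
    {"currentIntent": {"name": "CheckBike", "slots": {"metric": "battery"}}}, None
)
```

Session attributes (for the contextual conversation the abstract mentions) would travel in the event's `sessionAttributes` field and be echoed back in the response.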
As serverless architectures become more popular, customers need a framework of patterns to help them identify how they can leverage AWS to deploy their workloads without managing servers or operating systems. This session describes reusable serverless patterns while considering costs. For each pattern, we provide operational and security best practices and discuss potential pitfalls and nuances. We also discuss the considerations for moving an existing server-based workload to a serverless architecture. The patterns use services like AWS Lambda, Amazon API Gateway, Amazon Kinesis Streams, Amazon Kinesis Analytics, Amazon DynamoDB, Amazon S3, AWS Step Functions, AWS Config, AWS X-Ray, and Amazon Athena. This session can help you recognize candidates for serverless architectures in your own organizations and understand areas of potential savings and increased agility. What's new in 2017: using X-Ray in Lambda for tracing and operational insight; a pattern on high performance computing (HPC) using Lambda at scale; how a query can be achieved using Athena; Step Functions as a way to handle orchestration for both the Automation and Batch patterns; a pattern for Security Automation using AWS Config rules to detect and automatically remediate violations of security standards; how to validate API parameters in API Gateway to protect your API back-ends; and a solid focus on CI/CD development pipelines for serverless, which includes testing, deploying, and versioning (SAM tools).
The recent launch of VMware Cloud on AWS gives customers new options for addressing several use cases, including cloud migration, hybrid deployments, and disaster recovery. We introduce and describe design patterns for incorporating VMware Cloud on AWS into existing architecture and detail how the service's capabilities can influence future architectural plans. We explore design considerations and nuances for integrating VMware Cloud on AWS Software Defined Data Centers (SDDCs) with native AWS services, enabling you to use each platform's benefits. Architects, system operators, and anyone looking to understand VMware Cloud on AWS will walk away with examples and options for solving challenging use cases with this new, exciting service.
4K video has resulted in a huge uptick in resource requirements, which is difficult to scale in a traditional environment. The cloud is perfect to handle problems of this scale. However, many unanswered questions remain around best practices and suitable architectures for dealing with massive, high-quality assets. We define problem cases and discuss practical architectural patterns to handle these challenges by using AWS services such as Amazon EC2 (graphical instances), Amazon EMR, Amazon S3, Amazon S3 Transfer Acceleration, Amazon Glacier, AWS Snowball, and magnetic Amazon EBS volumes. The best practices we discuss can also help architects and engineers dealing with non-video data. Also, Amazon Studios presents how, powered by AWS, they solved many of these problems and can create, manage, and distribute Emmy and Oscar Award-winning content.
AWS Metering provides customers with detailed usage information (down to a specific Amazon EC2 instance or Amazon S3 bucket used in a single hour), enabling them to gain deep insights into their utilization of cloud resources. However, this level of transparency is not available across most customers' traditional IT infrastructure, making it difficult to understand what resources are being used, when, and by whom. Join us in this session to learn how to meter, measure, and understand usage across your IT infrastructure: AWS, on-premises data centers, containers, serverless compute, and even other clouds. We show you how to meter your non-AWS resources to make smarter decisions about your business and investment in the cloud.
WebGL has made great improvements over the past years. However, it still can't provide photorealistic experiences alone. In order to provide products with the best look and feel, we decided to use server-side 3D rendering. In this session, we show you how we built our real-time 3D configurator stack using Amazon EC2 Elastic GPUs, RESTful microservices, Lambda@Edge, Amazon CloudFront and other services.
When customers across the globe place orders on Amazon.com, those orders are processed through many different backend systems, including Herd, a workflow-orchestration engine developed by the Amazon eCommerce Foundation team. A mission-critical system used by more than 300 Amazon engineering teams, Herd executes over four billion workflows every day. Beginning in 2013, Herd's workflow traffic doubled year over year, and scaling its dozens of horizontally partitioned Oracle databases became an ever-growing operational nightmare. To support Herd's increasing scaling needs, and to provide a better customer experience, the Herd team had to re-architect its storage system and move its primary data storage from Oracle to Amazon DynamoDB. In this session, we discuss how we moved from Oracle to Amazon DynamoDB, walk through the biggest challenges we faced and how we overcame them, and share the lessons we learned along the way.
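A key idea behind a move like this is that a high-cardinality partition key spreads writes across DynamoDB partitions automatically, replacing the manual shard management of horizontally partitioned relational databases. The sketch below shows one plausible item shape for a workflow-step record; the attribute names are illustrative only and are not Herd's actual schema.

```python
# Hypothetical DynamoDB item shape for a workflow-orchestration record.
# Partition key: workflowId (high cardinality -> even write distribution).
# Sort key: stepNumber (keeps a workflow's steps together and ordered).

def workflow_item(workflow_id: str, step: int, payload: str) -> dict:
    return {
        "workflowId": workflow_id,   # partition key
        "stepNumber": step,          # sort key
        "payload": payload,
    }

# With boto3, this dict would go straight to table.put_item(Item=item),
# and a whole workflow could be read back with a single Query on workflowId.
item = workflow_item("wf-0001", 3, "encode-start")
```

With this layout, "give me every step of workflow wf-0001" is one `Query` call, and there is no shard-routing logic in the application at all.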
SaaS presents developers with a unique blend of architectural challenges. While the concepts of multi-tenancy are straightforward, the reality of making all the moving parts work together can be daunting. In this session, we move beyond the conceptual bits of SaaS and look under the hood of a SaaS application. Our goal is to examine the fundamentals of identity, data partitioning, and tenant isolation through the lens of a working solution and to highlight the challenges and strategies associated with building a next-generation SaaS application on AWS. We look at the full lifecycle of registering new tenants, applying security policies to prevent cross-tenant access, and leveraging tenant profiles to effectively distribute and partition tenant data. We intend to connect many of the conceptual dots of a SaaS implementation, highlighting the tradeoffs and considerations that can shape your approach to SaaS architecture.
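One concrete way to enforce the cross-tenant isolation discussed here is to scope a tenant's temporary credentials with the real `dynamodb:LeadingKeys` IAM condition key, so items are only visible when their partition key matches the tenant's id. The sketch below builds such a policy document; the table ARN and tenant id are placeholders, and in practice the document would be passed as a session policy when assuming a role.

```python
import json

def tenant_scoped_policy(tenant_id: str, table_arn: str) -> str:
    """Build an IAM policy restricting DynamoDB access to one tenant's items."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
            "Resource": table_arn,
            "Condition": {
                # Only items whose leading (partition) key equals tenant_id
                # are reachable with credentials carrying this policy.
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": [tenant_id]
                }
            },
        }],
    }
    return json.dumps(policy)

# Placeholder identifiers; typically fed to sts.assume_role(Policy=doc, ...).
doc = tenant_scoped_policy(
    "tenant-42", "arn:aws:dynamodb:us-east-1:123456789012:table/Orders"
)
```

Because the restriction lives in the credentials rather than in application code, a bug in a query path cannot leak another tenant's rows.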
Real-time bidding applications are designed for very high scale and performance. A typical RTB deployment needs to handle at least a million queries per second with TP99 query-processing latency of 25 ms. In this session, we feature Bidder-as-a-Service™ by Beeswax and discover how AWS enables their core technology. We begin by examining the end-to-end architecture of a real-time bidder application on AWS. Next, we talk about the challenges and best practices for implementing a durable and high-performing system. Finally, we conclude with some recommendations on minimizing infrastructure cost while operating an RTB platform at very large scale.
In this session, you'll learn how AdTech companies use AWS services like AWS Glue, Amazon Athena, Amazon QuickSight, and Amazon EMR to analyze Google DoubleClick Campaign Manager data at scale without the burden of infrastructure or worries about server maintenance. We'll live-process a click stream so you can see how machine learning can help maximize your revenue by finding the optimal path of a campaign, and we'll look at a real-world demo from A9's Advertising Science Team of how they use the data to build look-alike models in their projects.
From CloudFront to ElastiCache to DynamoDB Accelerator (DAX), this is your one-stop shop for learning how to apply caching methods to your AdTech workload: What data to cache and why? What are common side effects and pitfalls when caching? What is negative caching and how can it help you maximize your cache hit rate? How to use DynamoDB Accelerator in practice? How can you ensure that data always stays current in your cache? These and many more topics will be discussed in depth during this talk and we'll share lessons learned from Team Internet, the leading provider in domain monetization.
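Negative caching, one of the questions posed above, means caching the *absence* of a key too, so repeated lookups for nonexistent items don't hammer the backing store. Here is a toy in-process sketch of the mechanism; ElastiCache or DAX would apply the same idea as a shared cache tier, and the TTL values and `db` contents below are invented for illustration.

```python
import time

_MISS = object()  # sentinel marking a key as "known absent"

class NegativeCache:
    """Tiny TTL cache that also remembers misses (negative caching)."""

    def __init__(self, loader, ttl=30.0, negative_ttl=5.0):
        self.loader = loader                  # function hitting the real store
        self.ttl, self.negative_ttl = ttl, negative_ttl
        self._store = {}                      # key -> (value_or_MISS, expires_at)
        self.backend_calls = 0

    def get(self, key):
        hit = self._store.get(key)
        if hit and hit[1] > time.monotonic():
            value = hit[0]
            return None if value is _MISS else value
        self.backend_calls += 1
        value = self.loader(key)              # may return None (absent)
        # Absent keys get a shorter TTL so they reappear quickly once created.
        ttl = self.ttl if value is not None else self.negative_ttl
        self._store[key] = (value if value is not None else _MISS,
                            time.monotonic() + ttl)
        return value

db = {"domain-1": "parked"}                   # stand-in for the real database
cache = NegativeCache(db.get)
cache.get("domain-1")                         # backend call 1 (hit in db)
cache.get("missing")                          # backend call 2 (miss, cached)
cache.get("missing")                          # served from the negative entry
```

The shorter `negative_ttl` is the usual trade-off: it bounds how stale a "does not exist" answer can be once the key is finally created.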
Interested in learning how to integrate the Internet of Things into your advertising platform and combine it with AWS Greengrass, AWS Lambda, Amazon DynamoDB, and Amazon API Gateway to send context-aware advertisements to users at the point of purchase? In this session, Mobiquity, the leader in digital engagements servicing the world's top brands, and their innovation partner Flomio discuss how they've been able to use AWS to create compelling digital experiences for their clients. We deep-dive on the technology behind Mobiquity's innovative shopping system that uses RFID, Bluetooth, captive Wi-Fi, and a mobile app to provide real-time context for understanding how and where your customers interact with your products and services, allowing you to better tailor your ads to their particular preferences.
You want to build something innovative. You want to deliver applications in a flexible and agile environment. Most of all, you want to embrace the performance, efficiency, and cost benefits of cloud services. Sounds amazing, but many still struggle with the challenges of getting there. KAR Auction Services, together with its subsidiaries, has embraced a cloud-native approach to providing services in a quick, innovative, and simplified way. Their latest greenfield project? Build an end-to-end vehicle auction website on the AWS Cloud. Join Capgemini, AWS, and Gary Watkins, chief information officer for KAR Auction Services' IT Shared Services department, to hear real-life examples of how to get started, how to overcome the struggles, and how to take advantage of the cloud for added benefits. Session sponsored by Capgemini.
Join us for an overview and demonstration of Amazon Connect, a self-service, cloud-based contact center based on the same technology used by Amazon customer service associates worldwide to power millions of conversations. The self-service graphical interface in Amazon Connect makes it easy to design contact flows for self and assisted call-handling experiences, manage agents, and track performance metrics – no specialized skills required. In this session, you will hear from Capital One and T-Mobile on how they are using Amazon Connect to provide their customers with dynamic, natural, and personalized experiences. See how quickly you can get started with Amazon Connect and build your contact center.
The rate at which employees collaborate and create content continues to grow. With this, organizations are challenged to make collaboration easy, keep file management simple, and maintain a secure and compliant environment. Amazon WorkDocs is a fully managed, secure collaboration and file management service with rich feedback capabilities, strong administrative controls, and an extensible API. In this session, we demonstrate how you can use Amazon WorkDocs as a full-fledged collaboration tool for users and easily secure and manage files across your organization.
Amazon is a global company with over 300,000 employees worldwide. Easy and efficient communication is critical, so earlier this year, we made Amazon Chime available company-wide. Amazon Chime is a modern communications service that runs securely on AWS. It simplifies online meetings, video conferencing, and chats in one straightforward application. In this session, we provide an overview of Amazon Chime and follow with a discussion on how Amazon is rolling out this service.
BAP206: NEW LAUNCH! Bring Alexa to Work! Voice-enable Your Organization with Alexa for Business
In this session, we'll introduce you to the voice-enabled workplace, and show you how Alexa can help employees work smarter by acting as their personal digital assistant. We'll also show you how Alexa transforms your conference rooms, and provides a better telephony experience. And we'll talk through how custom voice skills can be used by employees and customers alike. Finally, we'll explain how Alexa for Business allows you to do all this in a scalable and secure way.
Amazon Connect is a cloud-based contact center service that allows you to create dynamic contact flows and personalized caller experiences by using callers' history and responses to anticipate their needs. Learn how Amazon Lex, an AI service for building intelligent conversational chatbots, can turn your contact flows into natural conversations using the same technology that powers Amazon Alexa. Routine tasks such as password resets, order status, and balance inquiries can be automated without an agent. In this session, you will hear from Asurion about their Amazon Connect contact center environment and how they enhanced the customer and agent experience with Amazon Lex.
You've successfully moved your desktops to AWS using Amazon WorkSpaces. Now, you'd like to start automating your operations. In this session, we show you how to use the Amazon WorkSpaces APIs to automate common tasks, such as provisioning and deprovisioning WorkSpaces, building self-service portals to allow your users to perform basic support tasks themselves, and integrating WorkSpace operations into your existing workflow and helpdesk tools.
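For a flavor of the provisioning automation described here: the WorkSpaces `CreateWorkspaces` API takes a list of workspace requests, each naming a directory, a user, and a bundle. The sketch below builds that request list as plain dicts; the directory id, bundle id, and tag convention are placeholders, and with boto3 the list would be passed to `client.create_workspaces(Workspaces=requests)`.

```python
# Build CreateWorkspaces request entries for a self-service portal.
# Directory and bundle ids below are placeholders, not real resources.

def workspace_request(username: str, directory_id: str, bundle_id: str) -> dict:
    return {
        "DirectoryId": directory_id,
        "UserName": username,
        "BundleId": bundle_id,
        # Tagging by requester makes later cost reporting and cleanup easier.
        "Tags": [{"Key": "requested-by", "Value": username}],
    }

requests = [
    workspace_request(user, "d-906734325d", "wsb-abcd1234")
    for user in ("alice", "bob")
]
```

A deprovisioning path would be the mirror image: look up workspace ids by tag, then call `terminate_workspaces`.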
Are you tired of maintaining and upgrading the PC infrastructure for your organization? Do you want to provide your users with a fast, fluid desktop that is accessible from anywhere, on any device? With Amazon WorkSpaces, you can do both simultaneously by running your desktops on AWS. In this session, we demonstrate the flexibility of Amazon WorkSpaces and show you how easy it is to get started. We also cover more advanced topics, including using Microsoft Active Directory for end-user management and authentication, and using Amazon WorkSpaces to implement a bring-your-own-device policy.
Learn how to use Amazon Connect with AWS IoT and AWS Lambda to proactively resolve customer issues before they occur. In this session, we show you how to configure an AWS IoT device to proactively place a phone call to a customer using the Amazon Connect API when an impending problem is detected. From there, we demonstrate how Amazon Connect contact flows make the customer interaction personal, more satisfying, and less costly.
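The proactive call described above maps to the real Amazon Connect `StartOutboundVoiceContact` API. The sketch below shows the parameters an IoT-triggered Lambda might assemble; the contact flow id, instance id, phone number, and attribute keys are all placeholders, and with boto3 the dict would be unpacked into `connect_client.start_outbound_voice_contact(**params)`.

```python
# Build StartOutboundVoiceContact parameters for a device-fault call.
# All ids and the phone number are placeholders for illustration.

def outbound_call_params(phone: str, device_id: str, fault: str) -> dict:
    return {
        "DestinationPhoneNumber": phone,
        "ContactFlowId": "11111111-2222-3333-4444-555555555555",
        "InstanceId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        # Attributes are readable inside the contact flow, which is what
        # makes the call personal ("your thermostat battery is low").
        "Attributes": {"deviceId": device_id, "fault": fault},
    }

params = outbound_call_params("+15555550123", "thermostat-7", "low-battery")
```

The contact flow can then branch on the `fault` attribute to play the right prompt or route to an agent only when self-service fails.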
With Alexa for Business, your employees and customers can access a variety of different voice skills that relate to your business. Alexa for Business allows you to easily manage where and how these voice skills can be accessed, and by whom. In this session, we'll walk through how you can use Alexa for Business to deploy and manage access to the custom skills you build for your organization. We'll walk through how employees "enroll" to use Alexa at work, and how the permissions model for your voice skills works. This session will include a demo showing the deployment of a pre-built custom skill, and the enrollment process for employees.
Alexa for Business allows you to use Alexa-enabled devices to transform your conference rooms. Using simple voice skills, you can control the conference room environment, start online meetings, turn on video projectors, and more. In this session, we'll walk through the Alexa-enabled conference room, and show you how you can use Alexa for Business to specify device locations, connect to conference room calendars, and provide access to meeting-specific skills.
In this session, you'll learn how to migrate your virtualized desktop apps to the cloud using Amazon AppStream 2.0, and stream them to a desktop browser. We discuss how to assess your existing virtualized application environment, map to concepts in Amazon AppStream 2.0, and start the planning and architecture process. We demo the building blocks you use to create your AppStream 2.0 environment, and provide tips for achieving the best performance and user experience.
In this session, we explore how enterprises are rethinking their graphics workstation strategy and moving their 3D apps to the cloud using Amazon AppStream 2.0. We discuss common use cases for delivering 3D apps to users and how to implement them. You'll learn about the benefits of integration with other AWS resources for driving simulations and storing data, while lowering your costs by avoiding upfront investments and only paying for what you use. Our guest speaker from Cornell University will share his experience delivering industry-standard simulation tools such as ANSYS FLUENT in his courses. We will also demonstrate popular 3D graphics apps running on AppStream 2.0 using the new Graphics Design and Graphics Pro instances.
Auto Scaling allows cloud resources to scale automatically in reaction to the dynamic needs of customers. This session shows how Auto Scaling offers an advantage to everyone—whether it's basic fleet management to keep instances healthy as an Amazon EC2 best practice, or dynamic scaling to manage extremes. We share examples of how Auto Scaling helps customers of all sizes and industries unlock use cases and value. We also discuss how Auto Scaling is evolving to scaling different types of elastic AWS resources beyond EC2 instances. Data Scientist & Principal Investigator, Hook Hua, from NASA Jet Propulsion Laboratory (JPL) / California Institute of Technology will share how Auto Scaling is used to scale science data processing of remote sensing data from earth-observing satellite missions, and reduce response times during hazard response events such as those from earthquakes, hurricanes, floods, and volcanoes. JPL will also discuss how they are integrating their science data systems with the AWS ecosystem to expand into NASA's next two large-scale missions with remote-sensing radar-based observations. Learn how Auto Scaling is being used at a global scale – and beyond!
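The "dynamic scaling to manage extremes" mentioned above is typically expressed as a target-tracking policy: pick a metric and a target, and Auto Scaling adds or removes instances to hold it. The sketch below builds the real `PutScalingPolicy` parameter shape for EC2 Auto Scaling; the group name is a placeholder, and with boto3 the dict would be unpacked into `autoscaling_client.put_scaling_policy(**policy)`.

```python
# Build a target-tracking scaling policy: keep average CPU near a target.
# The Auto Scaling group name below is a placeholder.

def cpu_target_policy(group_name: str, target_cpu: float) -> dict:
    return {
        "AutoScalingGroupName": group_name,
        "PolicyName": f"{group_name}-cpu-target",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            # Predefined metric: average CPU across the group's instances.
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_cpu,
        },
    }

policy = cpu_target_policy("sar-processing-fleet", 50.0)
```

For a bursty workload like hazard-response data processing, a lower target leaves more headroom for sudden spikes at the cost of running more instances in steady state.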
What if I told you that you could improve your EC2 performance and availability and save money… Interested? Want to learn how to use all the latest functionality including [NEW] EC2 features launched at re:Invent to optimize your spend… How about now? In this session, you'll learn how to seamlessly combine On-Demand, Spot and Reserved Instances, and how to use the best practices deployed by customers all over the world for the most common applications and workloads. After just one hour you'll leave armed with multiple ways to grow your compute capacity and to enable new types of cloud computing applications - without it costing you an arm and a leg.
Amazon EC2 provides resizable compute capacity in the cloud and makes web-scale computing easier for customers. It offers a wide variety of compute instances well suited to every imaginable use case, from static websites to on-demand high performance supercomputing, all available via highly flexible pricing options. This session covers the latest EC2 features and capabilities, including new instance families available in Amazon EC2, the differences among their hardware types and capabilities, and their optimal use cases. We will also cover best practices for optimizing your spend on EC2 to make the most of your instances, saving time and money.
High-performance computing (HPC) in the cloud enables high scale compute- and graphics-intensive workloads across a range of industries—from aerospace, automotive, and manufacturing to life sciences, financial services, and energy. AWS provides application developers and end users with unprecedented computational power for massively parallel applications in areas such as large-scale fluid and materials simulations, 3-D content rendering, financial computing, and deep learning. In this session, we provide an overview of HPC capabilities on AWS. We describe the newest generation of accelerated computing instances, and we highlight customer and partner use cases across industries. Attendees learn best practices for running HPC workflows in the cloud, including graphical pre- and post-processing, workflow automation, and optimization. Attendees also learn about new and emerging HPC use cases, in particular, deep learning training and inference, large-scale simulations, and high-performance data analytics.
AWS provides unprecedented processing power for graphics-intensive applications in areas such as design, engineering simulations, and 3D content rendering. With Amazon EC2 Elastic GPUs, you can easily attach low-cost graphics acceleration to a wide range of EC2 instances over the network, without the constraints of fixed instance types. In this session, learn more about Elastic GPUs architecture, and how you can build powerful graphics-intensive solutions with great flexibility, high quality, and low cost. Our guest speakers discuss their experience building application streaming and SaaS products on top of Elastic GPUs. You also hear from our ISV partners about certifying Elastic GPUs for their 3D applications.
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers. In this session, you learn the benefits of containers, learn about the Amazon EC2 Container Service, and understand how to use Amazon ECS to run containerized applications at scale in production.
Serverless architectures let you build and deploy applications and services with infrastructure resources that require zero administration. In the past, you had to provision and scale servers to run your application code, install and operate distributed databases, and build and run custom software to handle API requests. Now, AWS provides a stack of scalable, fully-managed services that eliminates these operational complexities. In this session, you will learn about serverless architectures, their benefits, and the basics of the AWS's serverless stack (e.g., AWS Lambda, Amazon API Gateway, and AWS Step Functions). We will discuss how to use serverless architectures for a variety of use cases including data processing, website backends, serverless applications, and “operational glue.” You will also get practical tips and tricks, best practices, and architecture patterns that you can take back and implement immediately.
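The smallest building block of the stack described above is a Lambda function behind an API Gateway proxy integration. The event and response shapes below follow the standard proxy-integration contract; the `/hello` route and its payload are invented for illustration.

```python
import json

# Minimal AWS Lambda handler for an API Gateway proxy integration.
# The route and response payload are made up for this sketch.

def lambda_handler(event, context):
    if event.get("httpMethod") == "GET" and event.get("path") == "/hello":
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        # Proxy integrations expect statusCode + string body in the response.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

# Simulated API Gateway event, as the service would pass it to Lambda:
resp = lambda_handler(
    {"httpMethod": "GET", "path": "/hello",
     "queryStringParameters": {"name": "re:Invent"}}, None
)
```

From here, the same handler style scales out to the data-processing and "operational glue" use cases: only the event source (S3, Kinesis, CloudWatch Events) changes the event shape.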
Whether you are launching a simple website or a scaled application, time to go live is a key consideration for your business. Amazon Lightsail is the easiest way to get started on AWS, letting you build and scale your infrastructure faster. In this session, we will walk you through how to use Lightsail to launch your application with a few clicks and scale it as needed for redundancy, traffic spikes, or intergalactic attack. With in-browser SSH and RDP access, easy server management, and in-console guidance, Lightsail provides all the tools needed for builders of all levels – no prior AWS experience required.
GPUs have broad application in Media and Entertainment workloads, from backend video processing and creation workloads such as VFX/rendering, transcoding, and broadcast playout to high-end creative and video editing workloads. Backed by NVIDIA Tesla M60 GPUs, G3 instances offer unparalleled power and flexibility for complex modeling, 3D visualization, computer-aided design, seismic visualization, and video encoding. G3 instances are the first Amazon EC2 instances to support NVIDIA GRID Virtual Workstation capabilities, with streaming support for four monitors, each with up to 4K resolution, and hardware encoding to support up to 10 High Efficiency Video Coding (HEVC) H.265 1080p30 streams or up to 18 H.264 1080p30 streams per GPU for faster video frame processing and improved image fidelity. In this session, we highlight two critical media workloads: video editing via remote application streaming, and broadcast playout origination from the AWS Cloud. Pop Media will discuss their cloud-based remote video editing, which enables secure, real-time editorial and image-processing sessions. Evertz will then present the broadcast playout application currently running several live Discovery channels.
More customers are moving their Microsoft applications to AWS to become more agile, improve their security posture, and dramatically lower costs. Attend this session to learn how to architect highly available and scalable Microsoft environments on AWS. Find out how Microsoft solutions can leverage various AWS services to achieve more resiliency, reduce complexity, improve security, and increase scalability. We discuss how you can leverage AWS services to meet compliance and governance requirements for your Microsoft applications. We introduce DevOps concepts that you can deploy to help implement automation and repeatability. Learn how to plan authentication and authorization for hybrid cloud scenarios between your AWS and on-premises environments. Learn about common architecture patterns for network design, Active Directory, and business productivity solutions such as Dynamics AX, CRM, and SharePoint, and common scenarios for custom .NET and .NET Core with SQL deployments.
Amazon EC2 P3 instances offer up to eight of the latest NVIDIA Tesla V100 GPUs, with up to 13X the speed of previous-generation GPU instances. In this session, learn how Airbnb uses machine learning to make their services smarter and more engaging for their customers, and how they use P3 instances to dramatically lower the training time of their machine learning models while optimizing costs.
Amazon EC2 X1 and X1e instances are designed for demanding, memory-optimized enterprise workloads, including production installations of SAP HANA, Microsoft SQL Server, Apache Spark, and Presto. The recently released x1e.32xlarge is our largest cloud instance yet, offering 4 TB of DDR4 memory per instance. Join this session for a detailed look at this new instance, and learn how enterprise customers are using these instances to run mission-critical workloads, such as SAP HANA, to realize greater speed and agility.
Matt Garman, Vice President of AWS Compute Services, will introduce the latest innovations in the Compute space. At this session, we will be announcing new Compute capabilities, as well as insights into some of the underlying thinking of what makes the AWS Compute business unique. This session will cover new announcements around capabilities for EC2 instances, EC2 networking, EC2 Spot Instances, Amazon Lightsail, Containers and Serverless. Matt will also be joined by executives from our customers and partners, including GE CTO Chris Drumgoole, Heroku CEO Adam Gross, and Autodesk Chief of Product and Cloud Security Reeny Sondhi, who will share valuable success stories of how Amazon EC2 has helped their journey to digital transformation.
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Salesforce and AWS are leading the next revolution in building engaging customer experiences. Behind every great experience is an app, and the power of the Salesforce platform lets you build apps fast. Adding to that experience, the AWS platform extends your ability to rapidly deliver apps by offering developers a collection of purpose-built, ready-to-consume services. Salesforce Heroku—built entirely on AWS—enables developers to focus on building business apps fast instead of spending cycles on monotonous heavy lifting. In this session, you learn how Salesforce Heroku and AWS accelerate developer productivity and lower operational complexity to deliver solutions around Salesforce data integration, media delivery, web security, big data analytics, and warehousing. Session sponsored by Salesforce.
With Amazon EBS, you can easily make a simple point-in-time backup for your Amazon EC2 instances. In this deep dive session, you learn how to use Amazon EBS snapshots to back up your Amazon EC2 environment. We review the basics of how snapshots work as well as how to tag snapshots, track costs, and automate snapshots using AWS Lambda. We describe best practices and share tips for success throughout.
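The pattern described above—tagging snapshots for cost tracking and automating them with AWS Lambda—can be sketched as follows. This is a minimal, hypothetical example: the volume ID, tag names, and retention value are illustrative assumptions, and in a real Lambda handler you would pass the result to boto3's `create_snapshot`.

```python
# Hypothetical sketch of an automated, tagged EBS snapshot request,
# e.g. built inside a scheduled AWS Lambda function.
from datetime import datetime, timezone

def snapshot_request(volume_id, retention_days=7, now=None):
    """Return kwargs for ec2.create_snapshot, with tags that let you
    track costs and identify snapshots created by automation."""
    now = now or datetime.now(timezone.utc)
    return {
        "VolumeId": volume_id,
        "Description": f"Automated backup of {volume_id}",
        "TagSpecifications": [{
            "ResourceType": "snapshot",
            "Tags": [
                {"Key": "CreatedBy", "Value": "backup-lambda"},      # automation marker
                {"Key": "CreatedAt", "Value": now.strftime("%Y-%m-%dT%H:%M:%SZ")},
                {"Key": "RetentionDays", "Value": str(retention_days)},
            ],
        }],
    }

# In a real handler (assumed wiring, not shown here):
#   boto3.client("ec2").create_snapshot(**snapshot_request("vol-0abc123"))
```

A cleanup function scheduled alongside this one could then list snapshots by the `RetentionDays` tag and delete any older than their retention window.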
Researchers and IT professionals who use high-performance computing (HPC) and high-throughput computing (HTC) need a large scale infrastructure to move their research forward. This session provides reference architectures for running your workloads on AWS, which enable you to achieve scale on-demand and reduce your time to science. We debunk myths about HPC in the cloud and demonstrate techniques for running common on-premises workloads in the cloud.
In this session, you learn how to effectively harness Spot Instances for production workloads. Amazon EC2 Spot Instances enable you to use spare EC2 computing capacity, often at discounts of up to 90% compared to On-Demand prices. We explore application requirements for using Spot Instances, best practices learned from thousands of customers, and the services that make it easy. Finally, we run through practical examples of how to use Spot for the most common production workloads, the common pitfalls customers run into, and how to avoid them.
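One of the best practices alluded to above is diversifying a Spot Fleet across instance types and Availability Zones, so a price spike in one pool does not take down the whole workload. A minimal sketch, assuming placeholder AMI, subnet, and IAM role identifiers (none of these are real resources):

```python
# Hedged sketch: a diversified Spot Fleet request configuration.
# The AMI ID and IAM fleet role ARN below are placeholder assumptions.
def spot_fleet_config(target_capacity, max_price, subnets, instance_types):
    """Build a SpotFleetRequestConfig dict for ec2.request_spot_fleet.
    One launch specification per (instance type, subnet) pair lets the
    'diversified' strategy spread capacity across many Spot pools."""
    return {
        "TargetCapacity": target_capacity,
        "SpotPrice": str(max_price),              # ceiling, not a bid you always pay
        "AllocationStrategy": "diversified",
        "IamFleetRole": "arn:aws:iam::123456789012:role/fleet-role",  # placeholder
        "LaunchSpecifications": [
            {"ImageId": "ami-0example", "InstanceType": t, "SubnetId": s}
            for t in instance_types
            for s in subnets
        ],
    }
```

With two instance types and two subnets, the fleet draws from four distinct Spot pools, which substantially reduces the blast radius of any single interruption.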
Amazon EC2 F1 instances with field programmable gate arrays (FPGAs), combined with improved cloud-based FPGA programming tools, provide researchers, application developers, and startups with a well-tested, standardized, and accessible platform for hardware-accelerated computing. This session introduces you to Amazon EC2 F1 instances with FPGAs, walks you through a typical development and deployment process, and highlights a number of use cases in different domains, including genomics, video processing, text search, and financial computing.
The Netflix encoding team is responsible for transcoding different types of media sources to a large number of media formats to support all Netflix devices. Transcoding these media sources has compute needs ranging from running compute-intensive video encodes to low-latency, high-volume image and text processing. The encoding service may require hundreds of thousands of compute hours to be distributed at a moment's notice where they are needed most. In this session, we explore the various strategies employed by the encoding service to automate management of a heterogeneous collection of Amazon EC2 Reserved Instances, resolve compute contention, and distribute them based on priority and workload.
In this popular session, discover how Amazon EBS can take your application deployments on Amazon EC2 to the next level. Learn about Amazon EBS features and benefits, how to identify applications that are appropriate for use with Amazon EBS, best practices, and details about its performance and volume types. The target audience is storage administrators, application developers, applications owners, and anyone who wants to understand how to optimize performance for Amazon EC2 using the power of Amazon EBS.
Auto Scaling allows cloud resources to scale automatically in reaction to the dynamic needs of customers, which helps to improve application availability and reduce costs. New target tracking scaling policies for Auto Scaling make it easy to set up dynamic scaling for your application in just a few steps. With target tracking, you simply select a load metric for your application, set the target value, and Auto Scaling adjusts resources as needed to maintain that target. In this session, you will learn how you can use target tracking to set up sound scaling policies “without the fuss”, and improve availability under fluctuating demand. Netflix is spending $6 billion on original content this year, with shows like The Crown, Narcos, and Stranger Things, and plenty more in the future. Hear how they're using target tracking scaling policies to improve performance, reliability, and availability around the world at prime times, without over-provisioning - and without guesswork. They will share best-practice examples of how target tracking allows their infrastructure to automatically adapt to changing traffic patterns in order to keep their audience entertained and their costs on target.
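The "select a metric, set a target" workflow described above reduces to a very small configuration object. A minimal sketch, assuming the common case of targeting average CPU utilization on an Auto Scaling group (the 50% target is an illustrative value, not a recommendation):

```python
# Sketch of a target tracking scaling policy configuration, as passed in
# the TargetTrackingConfiguration parameter of put_scaling_policy.
def target_tracking_policy(target_cpu_pct=50.0):
    """Auto Scaling adds or removes capacity to keep the group's average
    CPU utilization near target_cpu_pct -- no step thresholds to tune."""
    return {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": target_cpu_pct,
    }
```

Compared with step scaling, there are no alarm thresholds or adjustment sizes to maintain: the single target value is the whole policy.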
Join us as we explore how SigOpt is using AWS to optimize machine learning and AI pipelines. SigOpt is an Optimization-as-a-Service platform that seamlessly tunes model configuration parameters via an ensemble of optimization algorithms behind a simple API. This results in captured performance that may otherwise be left on the table by conventional techniques, while also reducing the time and cost of developing and optimizing new models. In this session, you will learn how SigOpt has optimized ML and AI pipelines on Amazon EC2 instances for various algorithms (including the latest generation of GPU-optimized instances), as well as how they internally leverage AWS to build their flexible and scalable platform and evaluation framework.
Many customers are using Amazon EC2 instances to run applications with high performance networking requirements. In this session, we provide an overview of Amazon EC2 network performance features—such as enhanced networking, ENA, and placement groups—and discuss how we are innovating on behalf of our customers to improve networking performance in a scalable and cost-effective manner. We share best practices and performance tips for getting the best networking performance out of your Amazon EC2 instances.
Do you have daily, weekly, or monthly tasks that you would like to automate? Looking to link two or more AWS Lambda functions in long-running processes? Are you building applications using microservices or containers? AWS Step Functions makes it easy to coordinate the components of distributed applications using visual workflows. In this session, we will share how AWS customers like Yelp are using Step Functions to reliably build and scale multi-step applications such as order processing, report generation, and data transformation. You will learn how to reduce the time to deploy and change microservices and serverless applications, and automate IT infrastructure for improved resilience and security.
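Coordinating multiple Lambda functions, as described above, is done by writing an Amazon States Language definition. A hedged sketch of a two-step order-processing workflow—the function ARNs are placeholders, and the state names are illustrative assumptions:

```python
import json

# Minimal Amazon States Language definition chaining two Lambda tasks.
# The Lambda ARNs below are placeholders, not real functions.
def order_pipeline():
    """Return a state machine definition: Validate runs first, and its
    output becomes the input of Charge, which ends the execution."""
    return json.dumps({
        "Comment": "Order processing: validate, then charge",
        "StartAt": "Validate",
        "States": {
            "Validate": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
                "Next": "Charge",
            },
            "Charge": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
                "End": True,
            },
        },
    })
```

This JSON string is what you would pass as the `definition` parameter when creating the state machine; Step Functions then renders it as the visual workflow and handles retries and state between the two functions.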
AWS Batch is a fully managed service that enables developers to easily and efficiently run batch computing workloads of any scale on AWS. AWS Batch automatically provisions the right quantity and type of compute resources needed to run your jobs. With AWS Batch, you don't need to install or manage batch computing software, so you can focus on analyzing results and solving problems. In this session, the principal product manager for AWS Batch, Jamie Kinney, describes the core concepts behind AWS Batch and details of how the service functions. The presenter then demonstrates the latest features of AWS Batch with relevant use cases and sample code before describing some of the upcoming features for the service. Finally, hear from AWS Batch customers as they describe why and how they are using AWS Batch. This portion of the talk is delivered by representatives from the University of Utah, Autodesk, and AdRoll.
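Submitting work to AWS Batch, as described above, comes down to naming a job queue and a job definition and optionally overriding resources per job. A minimal sketch—the queue and definition names are placeholder assumptions:

```python
# Hedged sketch: kwargs for batch.submit_job. The containerOverrides
# block shows per-job sizing without editing the job definition itself.
def batch_job(name, queue, job_def, vcpus=2, memory_mib=2048):
    """AWS Batch provisions compute to match these requirements; you
    never install or manage the batch scheduling software yourself."""
    return {
        "jobName": name,
        "jobQueue": queue,            # e.g. a placeholder "analysis-queue"
        "jobDefinition": job_def,     # e.g. a placeholder "genome-align:3"
        "containerOverrides": {
            "vcpus": vcpus,
            "memory": memory_mib,
        },
    }
```

Because resources are declared per job, a single queue can mix small smoke-test jobs with large production runs and let Batch right-size the underlying compute.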
CMP325: How Netflix Tunes Amazon EC2 Instances for Performance
At Netflix, we make the best use of Amazon EC2 instance types and features to create a high-performance cloud, achieving near bare-metal speed for our workloads. This session summarizes the configuration, tuning, and activities for delivering the fastest possible EC2 instances, and helps you improve performance, reduce latency outliers, and make better use of EC2 features. We show how to choose EC2 instance types, how to choose between Xen modes (HVM, PV, or PVHVM), and the importance of EC2 features such as SR-IOV for bare-metal performance. We also cover basic and advanced kernel tuning and monitoring, including the use of Java and Node.js flame graphs and performance counters.
When Amazon EC2 launched in 2006, there was a single instance size: m1.small. Over the past eleven years, EC2 has evolved to provide an extensive selection of compute resources to customers, including specialized resources such as NVMe SSDs, GPUs, and FPGAs. Under the hood, the servers used to host EC2 instances have transformed from off-the-shelf designs running virtualization software on the host CPUs to purpose-built servers with AWS network and storage components implemented in hardware. Now we are happy to announce a new category of EC2 instances: Amazon EC2 Bare Metal Instances. These instances provide customers access to the physical compute resources of the host processors along with the security, scale, and services of EC2. This session will provide an overview of Bare Metal instances, how VMware used EC2 Bare Metal instances to build VMware Cloud on AWS, and other customer use cases for this new EC2 capability.
Over the last 11 years, the Amazon EC2 virtualization platform has quietly evolved to take advantage of unique hardware and silicon, an accelerated network and storage architecture, and, with the launch of C5 instances, a bespoke hypervisor to deliver the maximum amount of resources and performance to instances. Come to this deep dive to get a behind-the-scenes look at how our virtualization stack has evolved, including a peek at how our latest generation platform works under the covers.
HPC cloud services built on the latest Intel architecture, the Skylake Xeon processor, now power the compute-intensive C5 instances at AWS and can serve as your next-generation HPC platform. Hear how customers are starting to consider hybrid strategies to increase productivity and lower their capital expenditure and maintenance costs. Also learn how to adapt this model to meet the increasing HPC and data analytics needs for your applications with the new technologies incorporated into the platform. Also find out how high performance computing via Rescale's cloud platform using Intel's latest technology seamlessly brings these advantages to HPC management and users. From team management to throughput, everyone can benefit from cloud platform adoption as demand increases. Learn how customers are already benefiting. Richard Childress Racing (RCR) discusses its use of AWS C5 via the Rescale platform and how the combination is giving it an edge in the highly competitive field of motorsport. Session sponsored by Intel
Just over four years after the first public release of Docker, and three years to the day after the launch of Amazon EC2 Container Service, the use of containers has surged to run a significant percentage of production workloads at startups and enterprise organizations. Join Deepak Singh, General Manager of Amazon Container Services, as we cover the state of containerized application development and deployment trends, new container capabilities on AWS that are available now, options for running containerized applications on AWS, and how AWS customers successfully run container workloads in production.
By packaging software into standardized units, Docker gives code everything it needs to run, ensuring consistency from your laptop all the way into production. But once you have your code ready to ship, how do you run and scale it in the cloud? In this session, you become comfortable running containerized services in production using Amazon EC2 Container Service. We cover container deployment, cluster management, service auto-scaling, service discovery, secrets management, logging, monitoring, security, and other core concepts. We also cover integrated AWS services and supplementary services that you can take advantage of to run and scale container-based services in the cloud.
Containers allow you to easily package an application's code, configurations, and dependencies into easy-to-use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. But how can developers leverage containers to drive innovation for their applications, their team, and organization? In this session, Asif Khan, Technical Business Manager for AWS, will discuss how containers are becoming a new cloud native compute primitive, and how your organization can use containers as a building block to accelerate innovation. WeWork's Christopher Tava, Joshua Davis, and OpsLine's Radek Wierzbicki will show how they adopted containers as a discipline in code development, and how they refactored their production architecture into containers running on Amazon ECS in under 8 months.
Cloud native architectures take advantage of on-demand delivery, global deployment, elasticity, and higher-level services to enable developer productivity and business agility. Open source is a core part of making cloud native possible for everyone. In this session, we welcome thought leaders from the CNCF, Docker, and AWS to discuss the cloud's direction for growth and enablement of the open source community. We also discuss how AWS is integrating open source code into its container services and its contributions to open source projects.
CON206: Docker on AWS
In this session, Docker Technical Staff Member Patrick Chanezon will discuss how Finnish Rail, the national train system for Finland, is using Docker on Amazon Web Services to modernize their customer facing applications, from ticket sales to reservations. Patrick will also share the state of Docker development and adoption on AWS, including explaining the opportunities and implications of efforts such as Project Moby, Docker EE, and how developers can use and contribute to Docker projects.
Red Hat OpenShift Container Platform uses Kubernetes, Docker, Amazon EC2, Elastic Load Balancing, and persistent storage to provide a high performance, optimized, and scalable Linux container infrastructure leveraging Red Hat Enterprise Linux. In this session, we discuss best practices on successfully designing, implementing, and managing distributed microservices applications at any scale. We also explore other innovative strategies, including integrated analytic techniques to anticipate and predict scaling operations to allow infrastructures to scale elastically on-demand. Session sponsored by Red Hat
Increasingly, organizations are turning to microservices to help them empower autonomous teams, letting them innovate and ship software faster than ever before. But implementing a microservices architecture comes with a number of new challenges that need to be dealt with. Chief among these is finding an appropriate platform to help manage a growing number of independently deployable services. In this session, Sam Newman, author of Building Microservices and a renowned expert in microservices strategy, will discuss strategies for building scalable and robust microservices architectures, how to choose the right platform for building microservices, and common challenges and mistakes organizations make when they move to microservices architectures.
AWS Fargate is a technology for Amazon ECS and EKS* that allows you to run containers without having to manage servers or clusters. Join us to learn more about how Fargate works, why we built it, and how you can get started using it to run containers today.
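"Run containers without managing servers or clusters" translates to launching a task with the Fargate launch type and a VPC network configuration. A minimal sketch, with placeholder cluster, task definition, and subnet names:

```python
# Hedged sketch: kwargs for ecs.run_task using the FARGATE launch type.
# Cluster, task definition, and subnet identifiers are placeholders.
def fargate_run_task(cluster, task_def, subnets):
    """With launchType FARGATE there is no EC2 instance to pick or patch;
    the task itself is the unit you run, sized by its task definition."""
    return {
        "cluster": cluster,
        "launchType": "FARGATE",
        "taskDefinition": task_def,
        "networkConfiguration": {
            # Fargate tasks use awsvpc networking, so each task gets its
            # own elastic network interface in these subnets.
            "awsvpcConfiguration": {"subnets": subnets},
        },
    }
```

The same task definition can typically be launched on EC2-backed capacity instead by changing `launchType`, which keeps migration between the two models low-friction.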
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a new managed service for running Kubernetes on AWS. This session will provide an overview of Amazon EKS, why we built it, and how it works.
Containers can make it easier to scale applications in the cloud, but how do you set up your CI/CD workflow to automatically test and deploy code to containerized apps? In this session, we explore how developers can build effective CI/CD workflows to manage their containerized code deployments on AWS. Ajit Zadgaonkar, director of engineering and operations at Edmunds, walks through best practices for CI/CD architectures used by his team to deploy containers. We also deep dive into topics such as how to create an accessible CI/CD platform and architect for safe Blue-Green deployments.
Batch processing is useful to analyze large amounts of data. But configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. In this talk, we'll show how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We will also discuss AWS Batch, our fully managed batch-processing service. You'll also hear from GoPro and Here about how they use AWS to run batch processing jobs at scale including best practices for ensuring efficient scheduling, fine-grained monitoring, compute resource automatic scaling, and security for your batch jobs.
Sick of getting paged at 2am and wondering "where did all my disk space go?" New Docker users often start with a stock image in order to get up and running quickly, but this can cause problems as your application matures and scales. Creating efficient container images is important to maximize resources and deliver critical security benefits. In this session, AWS Sr. Technical Evangelist Abby Fuller will cover how to create effective images to run containers in production. This includes an in-depth discussion of how Docker image layers work, things you should think about when creating your images, working with Amazon EC2 Container Registry, and mise en place for installing dependencies. Prakash Janakiraman, Co-Founder and Chief Architect at Nextdoor, will discuss high-level and language-specific best practices for building images and how Nextdoor uses these practices to successfully scale their containerized services with a small team.
A lot of progress has been made on how to bootstrap a cluster since Kubernetes' first commit, and it is now only a matter of minutes to go from zero to a running cluster on Amazon Web Services. However, evolving a simple Kubernetes architecture to be ready for production in a large enterprise can quickly become overwhelming with options for configuration and customization. In this session, Arun Gupta, Open Source Strategist for AWS, and Raffaele Di Fazio, software engineer at leading European fashion platform Zalando, will show common practices for running Kubernetes on AWS and share insights from experience in operating tens of Kubernetes clusters in production on AWS. We will cover options and recommendations on how to install and manage clusters, configure high availability, perform rolling upgrades and handle disaster recovery, as well as continuous integration and deployment of applications, logging, and security.
Image recognition is a field of deep learning that uses neural networks to recognize the subject and traits for a given image. In Japan, Cookpad uses Amazon ECS to run an image recognition platform on clusters of GPU-enabled EC2 instances. In this session, hear from Cookpad about the challenges they faced building and scaling this advanced, user-friendly service to ensure high-availability and low-latency for tens of millions of users.
If you've ever considered moving part of your application stack to containers, don't miss this session. Amazon ECS Software Engineer Uttara Sridhar will cover best practices for containerizing your code, implementing automated service scaling and monitoring, and setting up automated CI/CD pipelines with fail-safe deployments. Manjeeva Silva and Thilina Gunasinghe will show how McDonalds implemented their home delivery platform in four months using Docker containers and Amazon ECS to serve tens of thousands of customers.
As containers become more embedded in the platform tools, debug tools, traces, and logs become increasingly important. Nare Hayrapetyan, Senior Software Engineer, and Calvin French-Owen, Senior Technical Officer for Segment, will discuss the principles of monitoring and debugging containers and the tools Segment has implemented and built for logging, alerting, metric collection, and debugging of containerized services running on Amazon ECS.
AWS Fargate makes running containerized workloads on AWS easier than ever before. This session will provide a technical background for using Fargate with your existing containerized services, including best practices for building images, configuring task definitions, task networking, secrets management, and monitoring.
If you ask 10 teams why they migrated to containers, you will likely get answers like ‘developer productivity', ‘cost reduction', and ‘faster scaling'. But teams often find there are several other ‘hidden' benefits to using containers for their services. In this talk, Franziska Schmidt, Platform Engineer at Mapbox and Yaniv Donenfeld from AWS will discuss the obvious, and not so obvious benefits of moving to containerized architecture. These include using Docker and ECS to achieve shared libraries for dev teams, separating private infrastructure from shareable code, and making it easier for non-ops engineers to run services.
Deep dive into how Amazon ECS can enable secure, natively addressable, and highly performant network interfaces for containers using the recently launched awsvpc task networking mode. In this session, we focus on how CNI plugins were integrated with the Amazon ECS container agent and discuss the backend changes necessary to enable elastic network interface provisioning for tasks. Shakeel Sorathia, VP of engineering at FOX Digital, discusses best practices for working with Amazon ECS to enable such use cases as network isolation and IP-based routing for service discovery.
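From the user's side, opting into awsvpc mode is a one-line change in the task definition: the task then receives its own elastic network interface and private IP. A minimal sketch with placeholder family and image names:

```python
# Hedged sketch: kwargs for ecs.register_task_definition in awsvpc mode.
# Family name, image, and sizing values are illustrative placeholders.
def awsvpc_task_definition(family, image):
    """In awsvpc mode each task has a dedicated ENI, so containers bind
    directly to containerPort -- no hostPort remapping on the instance."""
    return {
        "family": family,
        "networkMode": "awsvpc",
        "containerDefinitions": [{
            "name": family,
            "image": image,
            "memory": 512,
            "portMappings": [{"containerPort": 8080}],
        }],
    }
```

Because every task is natively addressable on the VPC, security groups and IP-based routing can be applied per task rather than per instance, which is what enables the isolation use cases mentioned above.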
Scaling a microservice-based infrastructure can be challenging in terms of both technical implementation and developer workflow. In this talk, AWS Solutions Architect Pierre Steckmeyer will be joined by Will McCutchen, Architect at BuzzFeed, to discuss Amazon ECS as a platform for building a robust infrastructure for microservices. We will look at the key attributes of microservice architectures and how Amazon ECS supports these requirements in production, from configuration to sophisticated workload scheduling to networking capabilities to resource optimization. We will also examine what it takes to build an end-to-end platform on top of the wider AWS ecosystem, and what it's like to migrate a large engineering organization from a monolithic approach to microservices.
CON403: Introducing Service Discovery for Amazon ECS
Starting January 2018, Amazon ECS will have a native Service Discovery experience for container-based applications. This feature enables developers to look up service dependencies using a friendly and predictable DNS name. In this session, we'll deep dive into ECS Service Discovery, how it will work, and why we built it.
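The "friendly and predictable DNS name" idea above can be illustrated with a tiny helper. Note this is a hypothetical sketch of the naming pattern for illustration only; the exact scheme is an assumption, not the announced API:

```python
# Hypothetical illustration of predictable service DNS names: clients
# resolve a dependency by name instead of hardcoding task IP addresses.
def service_dns(service, namespace):
    """Compose the DNS name a client would look up for a service
    registered in a private namespace (assumed naming pattern)."""
    return f"{service}.{namespace}"

# e.g. an orders service could reach its inventory dependency at the
# name service_dns("inventory", "prod.local"), i.e. "inventory.prod.local",
# and DNS would return the healthy tasks currently backing that service.
```

The key property is that the name stays stable while the set of tasks behind it changes, so deployments and scaling events require no client-side reconfiguration.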
As your application's infrastructure grows and scales, well-managed container scheduling is critical to ensuring high availability and resource optimization. In this session, we will deep dive into the challenges and opportunities around container scheduling, as well as the different tools available within Amazon ECS and AWS to carry out efficient container scheduling. We will discuss patterns for container scheduling available with Amazon ECS and the Blox scheduling framework.
While organizations gain agility and scalability when they migrate to containers and microservices, they also benefit from compliance and security, advantages that are often overlooked. In this session, Kelvin Zhu, lead software engineer at Okta, joins Mitch Beaumont, enterprise solutions architect at AWS, to discuss security best practices for containerized infrastructure. Learn how Okta built their development workflow with an emphasis on security through testing and automation. Dive deep into how containers enable automated security and compliance checks throughout the development lifecycle. Also understand best practices for implementing AWS security and secrets management services for any containerized service architecture.
CON409: Deep Dive into Amazon EKS
Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a new managed service for running Kubernetes on AWS. Get a sneak peek into how Amazon EKS works, from provisioning nodes, launching pods, and integrations with AWS services such as Elastic Load Balancing and Auto Scaling.
End users expect to be able to view static, dynamic, and streaming content anytime, anywhere, and on any device. Amazon CloudFront is a web service that accelerates delivery of your websites, APIs, video content, or other web assets to end users around the globe with low latency, high data transfer speeds, and no commitments. In this session, learn what a content delivery network (CDN) such as Amazon CloudFront is and how it works, the benefits it provides, common challenges and needs, performance, recently released features, and examples of how customers are using CloudFront. You will also learn about customizing content delivery through AWS Lambda@Edge - a serverless compute service that lets you execute functions to customize the content delivered through CloudFront.
AWS provides the building blocks for modern broadcast and OTT video workflows. In this session, we show how the broad array of AWS services can be used to build world class video workflows that are resilient, cost effective, and easy to manage. Both live and file-based video workflows are highlighted, and advanced monetization techniques are discussed.
CTD203: NEW LAUNCH! Hear how OwnZones is using AWS Elemental MediaConvert to help media customers deliver world class VOD experiences
Explore serverless transcoding workflows using the latest AWS services. Learn how to implement a wide array of use cases and how to combine AWS and third-party services to create a complete end-to-end file-based transcoding solution.
CTD204: NEW LAUNCH! Hear how the Pac-12 is using AWS Elemental MediaStore and explore video workflows with MediaLive and MediaPackage
A shift is taking place in how live events are broadcast to audiences. Although there will always be a place for on-premises video infrastructure, cloud-based services for live broadcast now match the capabilities of on-premises hardware, are easy to use, and save time and money. This session highlights AWS services for live broadcast workflows and the components required for successful delivery of live events to video consumers. Topics include:
• Video delivery
• Workflow resiliency in the cloud
• Customer applications and use cases
• Live broadcast of periodic events and 24/7 channels
CTD206: NEW LAUNCH! Learn how Fubo is monetizing their content with server side ad insertion using AWS Elemental MediaTailor
In this session, we will introduce server-side ad insertion, also known as ad stitching. Server-side ad insertion helps you deliver ads that are more relevant to your customers and, at the same time, helps bypass ad blockers and lower latency.
Join NASA behind the scenes of the advanced video and cloud workflows that power deep space research, create the foundation for space commerce, and ignite our imagination about the future. Gain firsthand knowledge of the lessons learned from the first-ever 4K live stream from space.
In this series of technical flash talks, learn directly from Amazon CloudFront engineers about best practices on security, caching, measuring performance using Real User Monitoring (RUM), and customizing content delivery with Lambda@Edge.
In this session, learn how Hulu launched a new Live TV service and Cloud DVR platform using Amazon CloudFront, Amazon S3, Amazon Aurora, and Amazon EC2 to enable the necessary scale for live video ingest, storage, and delivery of content streams. Learn best practices for geographic redundancy, index and naming challenges, hitting performance targets, optimizing for cost, and measuring the user experience to ensure smooth playback for when you build your own video streaming solution.
POOQ - Content Alliance Platform is the primary OTT service provider in Korea. In this session, we discuss how Content Alliance Platform migrated its workloads to AWS this year using AWS Elemental Cloud, Amazon CloudFront, and other AWS services. After their initial migration, Content Alliance Platform has continuously optimized video transcoding for better quality and cost-effective delivery using CloudFront. To achieve this, they developed content-specific video profiles using AWS Elemental Live, used various CloudFront features, such as geo-blocking, signed cookies and URLs, and HTTPS with ACM, and adopted various new media technologies, such as H.265, UHD, VBR, and HFR. Finally, we discuss how Content Alliance Platform is developing its next generation of OTT services using microservice architecture with Docker.
Dow Jones, which produces the Wall Street Journal, engaged AWS Enterprise Support to plan for peak website usage during the United States presidential election in 2016. This preparation ensured that the Wall Street Journal website could scale to meet peak demands as election returns came in. They have since expanded their use of AWS services, including Lambda@Edge, AWS WAF, and AWS Shield.
Learn how Amazon.com continuously improves the availability and performance of its website with AWS. Gavin Jewell, Director of Amazon's Consumer Cloud Enablement group, will go in depth on how Amazon CloudFront helps them accelerate their website globally, and how it gives flexibility to apply various security measures at the edge. He will also explain how they are using services such as AWS Shield, AWS WAF, and Route 53. Lastly, we will explore Amazon.com's continuous and incremental re-architecture program that ensures their infrastructure is constantly updated to use AWS natively.
Get a deep-dive planning and implementation analysis of Asurion's “All in AWS Edge” migration. Jabez Abraham, Cloud Architect at Asurion, discusses their AWS edge location strategy, including Amazon CloudFront, AWS WAF, AWS Shield Advanced, and AWS Lambda@Edge. Jabez shares premigration strategy, architectural reviews, A/B testing requirements, caching and shielding of endpoints within the VPC, and partner engagements.
AWS Lambda enables you to run code without provisioning or managing servers in one AWS Region. Today, Lambda@Edge provides the same benefits closer to your end users, enabling you to assemble and deliver content on-demand to create low-latency web experiences. Come and join us for examples of how customers can move significant workloads they previously ran on managed server fleets into Lambda for truly serverless website backends.
Your application is exposed to a variety of threats, from common distributed attacks to sophisticated zero-day vectors. Learn how to architect beyond the region, take advantage of the AWS Edge Network, and upgrade your security posture with easy-to-deploy solutions that scale. In this session, you will learn how to ensure your application withstands malicious threats and DDoS attacks, what role architecture plays in your security posture, and how professional services and partners like Flux7 can help.
Since last year's ‘Taking DevOps to the Edge', and with the introduction of AWS Lambda@Edge, the tools available to apply DevOps practices to your application edge have broadened. In this updated session, we dive deep into how you can integrate Amazon CloudFront and related services into your application, be agile in developing and adapting the application, and follow best practices when configuring the services to improve security and performance, all while reducing costs. Attend this session and learn how to determine the best location (origin, edge, or client) to execute your code, avoid needless forwarding of headers and cookies, test your application when making changes, version your configuration changes, monitor usage and automate security, create templates for new distributions, configure SSL/TLS certificates, and more.
Reaching a large podcast audience can present some significant infrastructure scaling challenges. In this session, startup company Whooshkaa walks you through the podcasting landscape. During this session, you will learn about the new audiences you can reach through podcasts. We will explore technical solutions such as Amazon Lightsail, S3 and CloudFront which can facilitate experimentation and help you reach a global audience at low cost. We will dive into Whooshkaa's podcasting platform and explore advanced architectures, leveraging AWS services, allowing you to curate and customize content for each listener. We will also explore tools and solutions for measuring engagement and connecting with your audience through podcasting.
Delivering video to all consumers on all devices has traditionally been complex, time consuming, and expensive. Now, it is fast and easy to implement video-on-demand workflows on AWS and distribute video content to a global audience. Companies large and small, across industries, can deliver streaming video without complex professional video tools. In this session, learn how to build complex video workflows entirely in code using AWS services.
Join this session for an in-depth look into how the Amazon CloudFront team measures the internet in real time to give our customers the best possible experience using AWS technologies, such as Amazon Kinesis and Amazon EMR. AWS customers should expect to leave this whiteboarding session with sample design patterns that they can use when they build their own distributed applications that need a feedback control system.
Nowadays, it's common for a web server to be fronted by a global content delivery service, such as Amazon CloudFront, to accelerate delivery of websites, APIs, media content, and other web assets. Website administrators and developers want to generate insights in order to improve website availability through bot detection and mitigation, by optimizing web content based on the devices and browser used, by reducing perceived latency by caching a popular object closer to its viewer, and so on. In this session, we dive deep into building an end-to-end serverless analytics solution to analyze Amazon CloudFront access logs, both at rest and in transit, using Amazon Athena and Amazon Kinesis Analytics, respectively, and we generate visualization insights using Amazon QuickSight. Join a discussion with AWS solution architects to learn more about the various ways to generate insights to improve the overall perceived experience for your website users.
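As a concrete illustration of the log-analysis step described above, CloudFront access logs are tab-delimited records preceded by a #Fields: header naming each column. The following minimal parsing sketch reads that header to build records; the sample log content and field subset are invented for illustration:

```python
import io

def parse_cloudfront_log(stream):
    """Parse a CloudFront-style access log: tab-delimited records
    preceded by a '#Fields:' header that names each column."""
    fields = []
    for line in stream:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line.startswith("#") or not line:
            continue  # skip the #Version: header and blank lines
        else:
            yield dict(zip(fields, line.split("\t")))

# Invented sample log content for illustration only.
sample = io.StringIO(
    "#Version: 1.0\n"
    "#Fields: date time x-edge-location sc-bytes c-ip cs-method cs-uri-stem sc-status\n"
    "2017-11-27\t01:13:11\tFRA2\t182\t192.0.2.10\tGET\t/index.html\t200\n"
)
records = list(parse_cloudfront_log(sample))
```

From dict records like these, it is a short step to aggregating status codes or popular objects before loading results into an analytics store.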
In this session, we discuss the evolution of database and analytics services in AWS, the new database and analytics services and features we launched this year, and our vision for continued innovation in this space. We are witnessing unprecedented growth in the amount of data collected, in many different forms. Storage, management, and analysis of this data require database services that scale and perform in ways not possible before. AWS offers a collection of database and other data services—including Amazon Aurora, Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon ElastiCache, Amazon Kinesis, and Amazon EMR—to process, store, manage, and analyze data. We provide an overview of these services and discuss how customers are using them today.
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database engine that delivers the speed, reliability, and availability of high-end commercial databases at one-tenth the cost. This session introduces you to Amazon Aurora, explores its capabilities and features, explains common use cases, and helps you get started with Aurora.
MySQL is the world's most popular open source relational database and is used by dozens of popular open source applications. AWS provides several ways to run a MySQL application in the cloud, including Amazon EC2, Amazon RDS for MySQL, Amazon RDS for MariaDB, and Amazon Aurora. This session presents the different options for running MySQL in the AWS Cloud, discusses the different ways to migrate your MySQL database to AWS, and provides tips and tricks for optimizing your MySQL workloads in AWS. Also, a customer presents lessons learned while migrating their MySQL databases to AWS.
In this session, Shawn Bice, VP of NoSQL and QuickSight, covers what's new in AWS non-relational data services, such as Amazon DynamoDB, Amazon ElastiCache, and Amazon Elasticsearch Service. We discuss how developers might select different data services to solve different aspects of an application, and we demonstrate which application use cases lend themselves well to which data services. If you're a developer building massively scaled applications that require flexibility and consistent millisecond performance, and you're trying to understand which non-relational data service to use, this is a great introductory session.
The Amazon Aurora MySQL-compatible Edition is a fully managed relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It is purpose-built for the cloud using a new architectural model and distributed systems techniques. It provides far higher performance, availability, and durability than previously possible using conventional monolithic database architectures. Amazon Aurora packs a lot of innovations in the engine and storage layers. In this session, we do a deep-dive into some key innovations behind Amazon Aurora MySQL-compatible edition. We explore new improvements to the service and discuss best practices and optimal configurations.
Amazon RDS enables customers to launch an optimally configured, secure, and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS gives you six database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB. In this session, we take a closer look at the capabilities of the RDS service and review the latest features available. We dive deep into how RDS works and best practices for achieving optimal performance, flexibility, and cost savings for your databases.
This is the general session for Amazon DynamoDB. It covers newly announced features and provides an end-to-end view of recent innovations. We also share some of our successful customer stories and use cases. Come to this session to learn all about what's new for DynamoDB!
In this session, we provide a peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns with our Redis and Memcached offerings and how customers have used them for in-memory operations to reduce latency and improve application throughput. During this session, we review ElastiCache best practices, design patterns, and anti-patterns.
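One of the most common in-memory design patterns covered in sessions like this is cache-aside: check the cache first, and on a miss read through to the database and populate the cache with a time-to-live. A minimal sketch, with a plain Python dict standing in for ElastiCache (Redis/Memcached) and a loader function standing in for the database; all names are illustrative:

```python
import time

class CacheAside:
    """Cache-aside sketch: a dict with expiry stands in for ElastiCache;
    `loader` stands in for a database read."""
    def __init__(self, loader, ttl_seconds=300):
        self.loader = loader
        self.ttl = ttl_seconds
        self.store = {}          # key -> (value, expires_at)
        self.hits = self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            self.hits += 1
            return entry[0]      # cache hit: skip the database entirely
        self.misses += 1
        value = self.loader(key)  # cache miss: read through to the database
        self.store[key] = (value, time.time() + self.ttl)
        return value

cache = CacheAside(loader=lambda k: f"row-for-{k}")
first = cache.get("user:42")   # miss: loads from the "database"
second = cache.get("user:42")  # hit: served from memory
```

The pattern's anti-pattern counterpart, also discussed in such sessions, is an unbounded TTL that lets stale data linger indefinitely.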
We're still in the midst of a data explosion. With data, we want to do things smarter, quicker, and more accurately, and working with a company like Snowflake enables us to do so. In this session, Kelly Mungary, Lionsgate director of enterprise data and analytics, details how Lionsgate achieved the “Holy Grail” of data analytics – easily storing, blending, and analyzing structured and semi-structured data for agile reporting and data science. With modern data warehousing built for the cloud, Lionsgate achieves faster time to market and continues to migrate investment away from traditional, fading platforms. Lionsgate is a leading global entertainment company that generates vast stores of varied data types from sales, research, social media, exit polls, box office activity, and many other channels to better serve its business customers and its fans. Session sponsored by Snowflake.
Netflix runs hundreds of multivariate A/B tests a year, many of which help personalize the experience in the UI. This causes an exponential growth in the number of user experiences served to members, with each unique experience resulting in a unique JS/CSS bundle. Pre-publishing millions of permutations to the CDN for each build of each UI simply does not work at Netflix scale. In this session, we discuss how we built, designed, and scaled a brand new Node.js service, Codex. Its sole responsibility is to build personalized JS/CSS bundles on the fly for members as they move through the Netflix user experience. We've learned a ton about building a horizontally scalable Node.js microservice using core AWS services. Codex depends on Amazon S3 and Amazon DynamoDB to meet the streaming needs of our 100 million customers.
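The combinatorial growth described above comes from the cross-product of A/B test allocations: each unique combination yields a unique bundle. A hypothetical sketch of deriving a stable cache key for an on-the-fly bundle from a member's allocations (illustrative only, not Netflix's actual scheme; all names invented):

```python
import hashlib

def bundle_cache_key(ui_version: str, allocations: dict) -> str:
    """Derive a stable cache key from a member's A/B test allocations.
    Sorting the allocation pairs makes the key independent of dict order,
    so two members in the same test cells share one cached bundle."""
    parts = sorted(f"{test}={cell}" for test, cell in allocations.items())
    digest = hashlib.sha256("|".join([ui_version] + parts).encode()).hexdigest()
    return f"bundle/{ui_version}/{digest[:16]}"

# Same allocations in a different order produce the same key.
k1 = bundle_cache_key("ui-42", {"testA": "cell1", "testB": "cell2"})
k2 = bundle_cache_key("ui-42", {"testB": "cell2", "testA": "cell1"})
```

Keying caches this way means only combinations actually served get built, sidestepping the pre-publish explosion the abstract describes.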
You can significantly reduce database licensing and operational costs by migrating from commercial database engines to Amazon RDS. In addition, you can gain flexibility and operational efficiency by avoiding the frustrating usage constraints that accompany commercial database licenses. Amazon RDS is a fully managed database service, so you no longer need to worry about complex database management tasks. Launch a single database instance or thousands of them in just a few minutes, and pay only for what you use. Learn how AWS Database Migration Service and AWS Schema Conversion Tool help you migrate commercial databases like Oracle and Microsoft SQL Server to Amazon RDS and Aurora easily and securely with minimal downtime.
We have recently seen some convergence of different database technologies. Many customers are evaluating heterogeneous migrations as their database needs have evolved or changed. Evaluating the best database to use for a job isn't as clear as it was ten years ago. In this session, we discuss the ideal use cases for relational and nonrelational data services, including Amazon ElastiCache for Redis, Amazon DynamoDB, Amazon Aurora, and Amazon Redshift. This session digs into how to evaluate a new workload for the best managed database option.
In this talk, Anurag Gupta, VP for AWS Analytic and Transactional Database Services, discusses some of the key trends we see in data processing and how they shape the services we offer at AWS. Specific trends include the rise of machine-generated logs as the dominant source of data; the move toward serverless, API-centric computing; and the growing need for local access to data from users around the world.
Learn what it takes to migrate an on-premises database to Amazon Relational Database Service (Amazon RDS) for SQL Server, and how you can take advantage of the features and options available in the fully managed Amazon RDS platform. This session walks through best practices for system sizing and configuration, various database migration strategies, and how to leverage your existing authentication system using Amazon RDS.
Amazon Relational Database Service (Amazon RDS) simplifies setup, operation, and management of databases in the cloud. In this session, we will explore Amazon RDS features and best practices that offer graceful migration, high performance, elastic scaling, and high availability for Oracle databases. You will also learn from the Chief Architect for Intuit's Small Business Division how the QuickBooks Online team is using Amazon RDS for Oracle to scale the world's largest online accounting platform.
PostgreSQL is an open source database growing in popularity because of its rich features, vibrant community, and compatibility with commercial databases. Learn about ways to run PostgreSQL on AWS including self-managed, and the managed database services from AWS: Amazon Relational Database Service (Amazon RDS) and the Amazon Aurora PostgreSQL-compatible Edition. This talk covers key Amazon RDS for PostgreSQL functionality, availability, and management. We also review general guidelines for common user operations and activities such as migration, tuning, and monitoring for their RDS for PostgreSQL instances.
Aurora is a cloud-optimized relational database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. In this session, we discuss various options for migrating to Aurora with MySQL compatibility, the pros and cons of each, and when each method is preferred. Migrating to Aurora is just the first step. We'll share common use cases and how you can run optimally on Aurora.
Tatsuo Ishii from SRA OSS has done extensive testing to compare the Aurora PostgreSQL-compatible Edition with standard PostgreSQL. In this session, he will present his performance testing results, and his work on Pgpool-II with Aurora; Pgpool-II is an open source tool which provides load balancing, connection pooling, and connection management for PostgreSQL.
In this introductory session, we look at how to convert and migrate your commercial databases and data warehouses to the cloud and gain your database freedom. AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) have been used to migrate tens of thousands of databases. These include Oracle and SQL Server to Amazon Aurora, Teradata and Netezza to Amazon Redshift, MongoDB to Amazon DynamoDB, and many other data source and target combinations. Learn how to easily and securely migrate your data and procedural code, enjoy flexibility and cost savings, and gain new opportunities.
Amazon Neptune is a fully managed graph database service that has been built from the ground up for handling rich, highly connected data. Graph databases have diverse use cases across multiple industries; examples include recommendation engines, knowledge graphs, fraud detection, social networks, network management, and life sciences. Amazon Neptune is open and flexible, with support for Apache TinkerPop and RDF/SPARQL standards. Under the hood, Neptune uses the same foundational building blocks as Amazon Aurora, which gives it high performance, availability, and durability. In this session, we do a deep dive into the capabilities, performance, and key innovations in Amazon Neptune.
In this session, we provide an overview of Amazon Neptune, AWS's newest database service. Amazon Neptune is a fast, reliable graph database that makes it easy to build applications over highly connected data. We then explore how Siemens is building a knowledge graph using Amazon Neptune.
In this session, we introduce you to the best practices for migrating databases, such as traditional RDBMS or other NoSQL databases to Amazon DynamoDB. We discuss DynamoDB key concepts, evaluation criteria, data modeling in DynamoDB, how to move data into DynamoDB, and data migration key considerations. We share a case study of Samsung Electronics, which migrated their Cassandra cluster to DynamoDB for their Samsung Cloud workload.
The AWS architecture for Careem, a fast-growing car-booking service in the broader Middle East, has quickly evolved to support over six million users in eleven countries. Careem also operates in areas with weak GPS signals and unique traffic patterns, which had resulted in a poor user experience: long driver match times and rider wait times. Careem was storing driver location data in MySQL, but their high volume of concurrent calls and the lack of geospatial support in MySQL 5.6 resulted in continuous deadlocks and performance issues. Amazon ElastiCache for Redis met their need for an in-memory storage service with advanced data structures. ElastiCache for Redis accelerated their car-booking application and reduced ride-matching times from several minutes to milliseconds. Learn how their big bottleneck of insert and update operations in MySQL became a quick lookup in ElastiCache for Redis by using Redis sorted sets, geohashes, and timestamps.
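The sorted-set technique mentioned above relies on a property of geohashes: nearby points share a string prefix, so a lexicographic range scan over geohash members finds nearby drivers. A sketch of the standard geohash encoding (the well-known public algorithm, not Careem's actual code):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

def geohash(lat, lon, precision=11):
    """Standard geohash: interleave longitude/latitude bisection bits
    (longitude first), mapping each 5-bit group to a base32 character."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, bit_count, even, out = 0, 0, True, []
    while len(out) < precision:
        if even:  # even bit positions refine longitude
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits, lon_lo = (bits << 1) | 1, mid
            else:
                bits, lon_hi = bits << 1, mid
        else:     # odd bit positions refine latitude
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits, lat_lo = (bits << 1) | 1, mid
            else:
                bits, lat_hi = bits << 1, mid
        even = not even
        bit_count += 1
        if bit_count == 5:  # 5 bits -> one base32 character
            out.append(BASE32[bits])
            bits = bit_count = 0
    return "".join(out)

# Nearby drivers share a geohash prefix, so storing geohashes in a
# Redis sorted set turns proximity search into a cheap range query.
h = geohash(57.64911, 10.40744)
```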
The BBC's website and apps are used around the world by an audience of millions who read, watch, and interact with a range of content. The BBC handles this scale with an innovative website platform, built on Amazon ElastiCache and Amazon EC2 and based on nanoservices. The BBC has over a thousand nanoservices, powering many of its biggest webpages. Explore its nanoservices platform and use of ElastiCache. Learn how Redis's ultra-fast queues and pub/sub allow thousands of nanoservices to interact efficiently with low latency. Discover intelligent caching strategies to optimize rendering costs and ensure lightning fast performance. Together, ElastiCache and nanoservices can make real-time systems that can handle thousands of requests per second.
Coffee Meets Bagel is a top-tier dating app that focuses on delivering high-quality matches via our recommendation systems. We use Amazon ElastiCache as part of our recommendation pipeline to identify nearby users with geohashing, store feature vectors for on-demand user similarity calculations, and perform set intersections to find mutual friends between candidate matches. Coffee Meets Bagel also employs Redis for other novel use cases, such as a fault-tolerant priority queue mechanism for its asynchronous worker processes, and storing per-user recommendations in sorted sets. Join our top data scientist and CTO as we walk you through our use cases and architecture and highlight ways to take advantage of ElastiCache and Redis.
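The mutual-friends lookup described above maps directly onto Redis set intersection (the SINTER command). A minimal sketch using Python sets as stand-ins for Redis sets; the data is invented:

```python
# Friend lists keyed by user id; in Redis each would be a SET and the
# lookup below a single SINTER call.
friends = {
    "alice": {"bob", "carol", "dave"},
    "frank": {"carol", "dave", "erin"},
}

def mutual_friends(a: str, b: str) -> set:
    """Set intersection finds the friends shared by two candidate matches."""
    return friends[a] & friends[b]

common = mutual_friends("alice", "frank")
```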
Building rich, high-performance streaming data systems requires fast, on-demand access to reference data sets in order to implement complex business logic. In this talk, Expedia discusses the architectural challenges the company faced and how DAX and DynamoDB fit into the overall architecture and met their design requirements. Additionally, you will hear how DAX enabled Expedia to add caching to their existing applications in hours, where previously it took much longer. Session attendees will walk away with three key outputs: 1) Expedia's overall architectural patterns for streaming data; 2) how they uniquely leverage DynamoDB, DAX, Apache Spark, and Apache Kafka to solve these problems; 3) the value that DAX provides and how it enabled them to improve performance and throughput and reduce costs, all without having to write any new code.
The backend for the Snapchat Stories feature includes Snapchat's largest storage write workload. Learn how we rebuilt this workload for Amazon DynamoDB and executed the migration. Safely moving such a critical and high-scale piece of the Stories infrastructure to a new system, right before yearly peak usage, led to interesting challenges. In this session, we cover data model changes to leverage DynamoDB strengths and improve both performance and cost. We also cover challenges and risks in making remote calls across cloud providers, dealing with issues of scale, forecasting capacity requirements, and how to mitigate the risks of taking an unproven system through the dramatic traffic spikes that occur on New Year's Eve.
Sales on Prime Day 2017 surpassed Black Friday and Cyber Monday, making it the biggest day ever in Amazon history. An event of this scale requires infrastructure that can easily scale to match the surge in traffic. In this session, learn how AWS and Amazon DynamoDB powered Prime Day 2017. DynamoDB requests from Amazon Alexa, the Amazon.com sites, and the Amazon fulfillment centers peaked at 12.9 million per second, a total of 3.34 trillion requests. Learn how the extreme scale, consistent performance, and high availability of DynamoDB let Amazon.com meet the needs of Prime Day without breaking a sweat.
Database capacity planning is critical to running your business, but it's also hard. In this session, we compare how scaling is usually performed for relational databases and NoSQL databases. We look behind the scenes at how DynamoDB shards your data across multiple partitions and servers. Finally, we talk about some of the recent enhancements to DynamoDB that make scaling even simpler, particularly a new feature called adaptive capacity that eliminates many of the throttling issues you may have experienced.
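The sharding behavior described above can be sketched in miniature: items are routed to partitions by hashing the partition key, and provisioned throughput is divided across partitions, which is why a hot key can throttle one partition while others sit idle. This toy model uses Python's hashlib; DynamoDB's internal hash and partition-split logic are not public:

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a partition key to one of N partitions via a stable hash.
    (Illustrative only: not DynamoDB's actual internal hash.)"""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# A well-distributed key space spreads items evenly over partitions,
# so each partition carries roughly 1/N of the traffic.
keys = [f"user#{i}" for i in range(1000)]
counts = [0] * 4
for k in keys:
    counts[partition_for(k, 4)] += 1
```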
Are you considering a massive data migration? Do you worry about downtime during a migration? Dr. JunYoung Kwak, Tinder's Lead Engineering Manager, will share his insights on how Tinder successfully migrated critical user data to DynamoDB with zero downtime. Join us to learn how Tinder leverages DynamoDB performance and scalability to meet the needs of their growing global user base.
DAT329: Case Study: Ola Cabs Uses Amazon EBS and Elastic Volumes to Maximize MySQL Deployment
Ola Cabs is India's leading taxi aggregator, providing point-to-point transportation for a million people daily in more than 110 cities. Ola Cabs chose AWS from the start because it offered the flexibility, scalability, and agility the company needed to establish its competitive edge. In this session, you hear about Ola Cabs' journey to the cloud and learn how they take advantage of the flexibility and elasticity of Amazon EBS storage to optimize performance, maximize availability, and save money compared with instance storage. We describe best practices and share tips for success throughout.
Sprinklr delivers a complete social media management system for the enterprise. It helps the world's largest brands do marketing, advertising, care, sales, research, and commerce on Facebook, Twitter, LinkedIn, and 21 other channels on a global level, all on a single integrated platform. In this session, you learn about Sprinklr's journey to the cloud and discover how to optimize your NoSQL database on AWS for cost, efficiency, and scale. We also dive deep into best practices and architectural considerations for designing and managing NoSQL databases, such as Apache Cassandra, MongoDB, Apache CouchDB, and Aerospike, on Amazon EC2 and Amazon EBS. We share best practices for instance and volume selection, provide performance tuning hints, and describe cost optimization techniques throughout.
Airbnb has served over 200,000,000 customers across 191 countries and is one of the largest database consumers on AWS. They have heavily adopted MySQL and have recently completed a migration to Amazon Aurora. In this session, Airbnb shares their story, including design considerations for operating at Airbnb scale, tips, tricks, and advice for other startups, and thoughts on why they decided to run on Aurora.
Learn how Verizon is adopting the Amazon Aurora PostgreSQL-compatible edition for their mission-critical applications. Verizon has a history of adopting best of breed database technologies as they continue to serve their 140M+ customers. As Verizon moves its enterprise applications to the cloud, database performance and reliability are the key considerations. With heavy dependence on commercial databases, learn how a large enterprise like Verizon evaluated performance, reliability and operational characteristics of Amazon Aurora, and was able to create internal momentum behind adoption of open source technologies by showcasing early wins. This session also highlights best practices for using Amazon Aurora and the newly-announced RDS Performance Insights.
The IARPA Machine Intelligence from Cortical Networks (MICrONS) program is a research endeavor created to improve neurally-plausible machine-learning algorithms by understanding data representations and learning rules used by the brain through structurally and functionally interrogating a cubic millimeter of mammalian neocortex. This effort requires efficiently storing, visualizing, and processing petabytes of neuroimaging data. The Johns Hopkins University Applied Physics Laboratory (APL) has developed an open-source, highly available service to manage these data, called the Boss. The Boss uses AWS to provide a cloud-native spatial database with an innovative storage hierarchy and auto-scaling capability to balance cost and performance. This system extensively uses serverless components to meet both scalability and cost requirements. In this session, we provide an overview of the Boss, and we focus on how the APL used Amazon DynamoDB, AWS Lambda, and AWS Step Functions for several high-throughput components of the system. We discuss both the challenges and successes with serverless technologies.
Amazon Aurora is a fully-managed relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. The initial launch of Amazon Aurora delivered these benefits for MySQL. We have now added PostgreSQL compatibility to Amazon Aurora. In this session, Amazon Aurora experts discuss best practices to maximize the benefits of the Amazon Aurora PostgreSQL-compatible edition in your environment.
In this session, we go deep into advanced design patterns for DynamoDB. This session is intended for those who already have some familiarity with DynamoDB and are interested in applying the design patterns covered in the DynamoDB deep-dive session and hands-on labs. The patterns and data models discussed in this presentation summarize a collection of implementations and best practices leveraged by the Amazon CDO to deliver highly scalable solutions for a wide variety of business problems. We discuss strategies for GSI sharding and index overloading, scalable graph processing with materialized queries, relational modeling with composite keys, executing transactional workflows on DynamoDB, and much, much more.
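One of the patterns named above, GSI write sharding, spreads a hot logical key across N synthetic partition-key values by appending a shard suffix on writes, then fanning reads out over all suffixes and merging. A hypothetical sketch (the key format and shard count are invented for illustration):

```python
import random

NUM_SHARDS = 8  # illustrative; chosen to match expected write throughput

def sharded_key(base_key: str) -> str:
    """Write path: append a random shard suffix so writes to one logical
    key spread over NUM_SHARDS distinct partition-key values."""
    return f"{base_key}#{random.randrange(NUM_SHARDS)}"

def all_shard_keys(base_key: str):
    """Read path: enumerate every shard key; the application queries
    each one and merges the results."""
    return [f"{base_key}#{n}" for n in range(NUM_SHARDS)]

# 100 writes to the hot logical key "2017-11-27" land on 8 shard keys.
writes = {sharded_key("2017-11-27") for _ in range(100)}
reads = set(all_shard_keys("2017-11-27"))
```

The trade-off is classic: writes avoid a single hot partition at the cost of an N-way scatter-gather on reads.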
Today, small software teams have the ability to disrupt big markets as more and more businesses deliver their products as a service. The ability for teams to respond to customers and innovate quickly is their key differentiator. In this session, we cover how you can begin your DevOps journey by sharing best practices used by the "two-pizza" engineering teams at Amazon. We showcase how you can accelerate developer productivity by implementing continuous integration and delivery workflows using AWS developer tools, including AWS CodeStar, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy. Finally, we demonstrate how to build an end-to-end CI/CD pipeline with CodeStar in minutes.
Analyzing and debugging production distributed applications built using service-oriented, microservices, or serverless architectures is a challenging task. In this session, we introduce AWS X-Ray, an AWS service that makes it easier to identify performance bottlenecks and errors, pinpoint issues to specific services in your application, identify the impact of issues on application users, and visualize the service call graph and request timelines for your applications. We also showcase how Chick-fil-A has adopted AWS X-Ray throughout the microservice lifecycle to ensure quality, transparency, and operational visibility for their services on AWS.
The AWS SDK for Java (version 1.x) has been connecting JVM-based applications to AWS services since 2010. However, the JVM ecosystem has changed a lot in the last 7 years. Based on extensive customer feedback, we recently launched a developer preview of version 2.0 of the AWS SDK for Java, which has been completely rewritten from the core HTTP layer to the service clients. In this session, we get under the covers of the codebase to see how we achieved over 100,000 TPS from a single client instance during initial testing. We also go over some of the many new features and highlight major differences from 1.x, including pluggable HTTP, non-blocking I/O, enhanced pagination, immutability, and more.
Come join us as we take a deeper look at Amazon's approach to releasing mission critical software. In this session, we will take a journey through the release process of an AWS Tier 1 service on its way to production. We'll follow a single code change throughout the entire process from idea to release, and focus on how Amazon updates critical software quickly and safely for its global customers. Throughout the talk we'll demonstrate how our internal software release processes map to AWS Developer tools, highlighting how you can leverage AWS's CI/CD services to create your own robust release process.
Interested in new ways to extend and deploy Ruby applications on AWS? There are a number of tools and services to help streamline your application management and deployments in AWS. In this session we will take an example Ruby application and demonstrate how you can deploy and extend it using AWS tools.
Static applications living on long-running servers and assumptions about monitoring are becoming history on the cloud. Now, we routinely deploy a range of services, from automatic scaling to decoupled message queues to serverless applications. These dynamic services, with microservices architectures, can break or coexist with traditional monitoring and instrumentation approaches. Whether you're building new apps, were told to migrate yesterday, are currently migrating, or are already scaling your apps on AWS, this session dives into the how, where, and when to monitor your applications and infrastructure, no matter where your apps run. Also hear best practices we've learned from our customers, and from running our own service (1.5 billion+ metrics per minute). Join us for a little bit of history and a whole lot of now as we show you how and what you need to scale and prove your success on the AWS Cloud. Session sponsored by New Relic
Over the years, Atlassian's engineering teams have developed a set of proven and dependable DevOps practices that have allowed us to increase velocity and ship more reliably. Like many of you, Atlassian is grappling with complex, distributed teams; ever-increasing demand on our products and services; and a greater need than ever for a fast, stable release cadence and reliable uptime. This year, we're going to be sharing 10 of our dev tested, ops approved practices with you. In this session, we discuss: how Atlassian tools integrate with AWS to break down silos, increase development speed, and minimize system outages; how to scale the DevOps basics, from building a culture of collaboration to quadrupling your release cadence; and how to track the business value of what you're building, using, deploying, and repairing. Session Sponsored by Atlassian.
HubSpot relies on the performance of their business-critical cloud applications; any emerging issues must be addressed immediately. Running a microservices environment made up of thousands of instances to support 45 different teams means that unexpected changes can have a major impact. Manual intervention is often not fast enough to catch emerging anomalies and maintain the level of availability required. In this session, we discuss how we've built our own tooling around our CD pipeline and how we've used automation to exponentially improve our MTTR. Learn how HubSpot manages a microservices environment to enable hundreds of deployments a day; how SignalFx detectors help proactively surface emerging issues; how automatically invoking AWS services reduces usage-related failures; and how visibility across an entire AWS environment improves MTTR. Session sponsored by SignalFx.
AWS Elastic Beanstalk provides an easy way for you to quickly deploy, manage, and scale applications in the AWS Cloud. Through interactive demos and code samples, this session teaches you how to deploy your code using Elastic Beanstalk, provision and use other AWS services (Amazon SNS, Amazon SQS, Amazon DynamoDB, and AWS CodeCommit), use your application's health metrics to tune performance, scale your application to handle millions of requests, perform zero-downtime deployments with traffic routing, and keep the underlying application platform up to date with managed updates.
Managing large-scale production environments can be complex – things will go wrong, and learning to operate and manage these environments is critical. From routine tasks such as building AMIs to managing the lifecycle of your instances, investing in automation and tooling can help you detect problems earlier, minimize downtime, and reduce manual work. In this session, you will learn how to use Amazon EC2 Systems Manager to troubleshoot common issues, detect and remediate configuration drift, and automate common actions. You will learn how to author common actions and about community-driven features of Systems Manager. You can use the same tools across Linux and Windows, in AWS and in hybrid environments. You will also hear from Ancestry, a Systems Manager customer, about how they are using EC2 Systems Manager to manage and operate their infrastructure in an agile manner.
Are you using the AWS CLI to manage your AWS services and want to do more? In this session, learn how to use the latest features of the AWS CLI to improve your current workflows for interacting with and managing your AWS resources from the command line. Familiarity with the AWS CLI is recommended, as this talk focuses on its newer, more advanced features. Come hear from the core AWS CLI development team on how to leverage these features in 2017 and beyond!
As organizations move their workloads to the cloud, companies must take steps to protect and audit their private and confidential information. This session focuses on Amazon S3 best practices and using AWS CloudTrail Data Events to help better protect data residing within Amazon S3. The session includes a demonstration to show how CloudTrail, in combination with other AWS services, can help with Amazon S3 governance and compliance requirements.
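Turning on object-level logging for a bucket, as the session describes, means configuring data events on a trail. A sketch of the payload involved: the field names follow CloudTrail's PutEventSelectors API, but the helper is an illustration rather than an SDK function.

```python
def s3_data_event_selector(bucket_names, read_write_type="All"):
    """Build the EventSelectors payload for CloudTrail's PutEventSelectors
    API so object-level (data) events are logged for the given S3 buckets.
    Illustrative helper; field names follow the CloudTrail API."""
    if read_write_type not in ("ReadOnly", "WriteOnly", "All"):
        raise ValueError("ReadWriteType must be ReadOnly, WriteOnly, or All")
    # A trailing "/" scopes logging to every object in the bucket.
    arns = ["arn:aws:s3:::%s/" % b for b in bucket_names]
    return [{
        "ReadWriteType": read_write_type,
        "IncludeManagementEvents": True,
        "DataResources": [{"Type": "AWS::S3::Object", "Values": arns}],
    }]
```

With boto3 this would be passed as `cloudtrail.put_event_selectors(TrailName=..., EventSelectors=s3_data_event_selector([...]))`.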
“Infrastructure as Code” has changed not only how we think about configuring infrastructure, but about the infrastructure itself. AWS has been at the core of this movement, enabling your infrastructure teams to benefit from software engineering best practices such as CI/CD, automated testing, and repeatable deployments. Now that you have mastered the art of managing your infrastructure as code, it's time to leverage these same lessons for monitoring and metrics. In this session, we dive into how you can leverage tooling such as AWS, Terraform, and Datadog to programmatically define your monitoring so that you can scale your organizational observability along with your infrastructure and attain consistency from local development all the way through production. Session sponsored by Datadog, Inc.
AWS Lambda has emerged as a powerful and cost-effective way for enterprises to quickly deploy services without the need to provision and manage virtual servers. This session includes a hands-on demo of how to use GitHub as the core of a DevOps toolchain. Learn how to leverage AWS integrations with Jenkins, the AWS CLI, and open source software to build, test, and deploy a service to AWS Lambda. We also explore key product updates to GitHub and GitHub Enterprise that are designed to make serverless development easier and more efficient. Session sponsored by GitHub, Inc.
AWS CloudFormation enables software and DevOps engineers to harness the power of infrastructure as code. As organizations automate the modeling and provisioning of applications and workloads with CloudFormation, repeatable processes and reliable deployments become more critical. This session guides you through various techniques to improve your infrastructure automation, including protecting your AWS resources and stacks with safety guardrails while monitoring infrastructure changes. In addition, we cover efficient ways to provision resources across accounts and regions, and show you how to test and improve the reliability of your deployments.
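One concrete guardrail of the kind this session covers is a stack policy that blocks destructive updates to critical resources. A minimal sketch, assuming the hypothetical logical ID `ProdDatabase`; the policy grammar is CloudFormation's, while the helper function is our own illustration.

```python
import json

def protect_resources_policy(logical_ids):
    """Return a CloudFormation stack policy (as JSON) that allows all
    updates except replacement or deletion of the named logical resources.
    The policy document format is CloudFormation's; this builder is a
    sketch, not an AWS SDK call."""
    statements = [{"Effect": "Allow", "Action": "Update:*",
                   "Principal": "*", "Resource": "*"}]
    for lid in logical_ids:
        statements.append({
            "Effect": "Deny",
            "Action": ["Update:Replace", "Update:Delete"],
            "Principal": "*",
            # Stack policies address resources by logical ID.
            "Resource": "LogicalResourceId/%s" % lid,
        })
    return json.dumps({"Statement": statements}, indent=2)
```

The output could be applied with `aws cloudformation set-stack-policy --stack-policy-body "$(...)"`.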
Managing Infrastructure as Code (IaC) successfully within an organization is a challenge. Regardless of team size, it can turn into a patchwork of solutions causing difficulties collaborating among individuals and teams. Intuit has faced and learned from these challenges, while coordinating among different teams running workloads that provide solutions for different business units. We developed a system that improved our development process for IaC using AWS CloudFormation. In this session, we demonstrate how to move away from an inconsistent development of infrastructure by complementing common development practices with a solution using the serverless technologies from AWS. We walk through our journey and help you discover an approach to assemble a similar solution for your organization.
Over the last decade, AWS has launched more than 90 services, and even today we continue to innovate at a rapid pace, adding new features and services. We see backwards compatibility not as a goal to strive for, but as a necessity to maintain our most important asset: customer trust. It's not just the service API that needs to be backwards compatible; client-side libraries need to be able to handle service changes as well. Over the years, we've learned how to design APIs in a way that preserves backwards compatibility while continuing to evolve. In this session, you will learn what backwards compatibility means and what forms it may take; what impact breaking changes may have on consumers of an API or library; and how to design to prevent breaking changes while allowing for future enhancements. You will also pick up concrete design patterns that you can use, and anti-patterns that you can recognize, so that your service API or library can continue to grow without breaking the world. Using the AWS client-side SDKs as a case study, we'll look at actual code examples of specific strategies that you can employ to keep your library backwards compatible without painting yourself into a corner.
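Two of the patterns described above can be sketched briefly: a response model that tolerates unknown fields (so a service can add members without breaking old clients), and an API that evolves by adding keyword-only parameters whose defaults preserve the original behavior. The `Widget` service and its fields are made-up examples, not an AWS API.

```python
class DescribeWidgetResult:
    """Client-side model that tolerates unknown response fields, so the
    service can add members without breaking older clients. 'Widget' is a
    hypothetical example service."""
    _KNOWN = ("widget_id", "status")

    def __init__(self, payload):
        self.widget_id = payload.get("widget_id")
        self.status = payload.get("status")
        # Preserve, rather than reject, fields this client predates.
        self.extras = {k: v for k, v in payload.items()
                       if k not in self._KNOWN}


def list_widgets(fetch, *, include_archived=False):
    """Evolved API: include_archived was added later as a keyword-only
    argument whose default preserves the original behavior, so existing
    call sites keep working unchanged."""
    items = fetch()
    if not include_archived:
        items = [i for i in items if i.get("status") != "archived"]
    return items
```

The anti-pattern is the mirror image: rejecting unrecognized fields, or adding a positional parameter, both of which break existing consumers.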
Today, more teams are adopting continuous integration (CI) techniques to enable collaboration, increase agility, and deliver a high-quality product faster. Cloud-based development tools such as AWS CodeCommit and AWS CodeBuild can enable teams to easily adopt CI practices without the need to manage infrastructure. In this session, we showcase a Crawl, Walk, and Run approach to CI. In Crawl, we showcase how to use AWS CodeBuild with your master code branch for running a basic CI workflow. In Walk, we add team collaboration capabilities to the previously developed CI workflow and showcase feature branches and pull requests. In Run, we showcase how to optimize the CI workflow for speed and quality with caching, code analysis, and integration testing.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. In this session, we introduce the AWS CLI and how to use it to automate common administrative tasks in AWS. We cover several features and usage patterns, including Amazon EBS snapshot management and Amazon S3 backups. We show how to combine AWS CLI features to create powerful tools for automation, then develop, debug, and deploy these tools in several live, end-to-end demonstrations.
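The snapshot-management automation this session describes usually boils down to a retention policy. A minimal sketch of one such policy, assuming snapshots shaped like the `describe-snapshots` output (`SnapshotId`, `StartTime`); the policy itself (delete past a cutoff, always keep the newest few) is our own illustration, not an AWS feature.

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, retain_days, keep_latest=1, now=None):
    """Pick EBS snapshots to prune: delete anything older than retain_days,
    but always keep the newest keep_latest regardless of age. Each snapshot
    is a dict with 'SnapshotId' and 'StartTime', matching the shape of
    describe-snapshots output. Illustrative policy, not an AWS API."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retain_days)
    newest_first = sorted(snapshots, key=lambda s: s["StartTime"], reverse=True)
    candidates = newest_first[keep_latest:]  # never touch the newest ones
    return [s["SnapshotId"] for s in candidates if s["StartTime"] < cutoff]
```

The resulting IDs would then feed `aws ec2 delete-snapshot --snapshot-id ...`, one call per ID.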
Continuous delivery (CD) enables teams to be more agile and quickens the pace of innovation. Too often, however, teams adopt CD without putting the right safety mechanisms in place. In this talk, we discuss opportunities for you to transform your software release process into a safer one. We explore various DevOps best practices, showcasing sample applications and code. We discuss how to set up delivery pipelines with nonproduction testing stages, failure cases, rollbacks, machine and Availability Zone redundancy, canary testing and deployments, and monitoring. We'll use AWS Lambda, AWS CloudFormation, AWS CodePipeline, AWS CodeDeploy, and both Amazon CloudWatch alarms and events.
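The canary deployments mentioned above typically shift traffic to the new version in fixed increments. A sketch of a linear schedule generator in the spirit of CodeDeploy's LinearXPercentEveryYMinutes deployment configurations; the function and its output format are illustrative assumptions, not a CodeDeploy API.

```python
def linear_shift_schedule(step_percent, interval_minutes):
    """Generate a linear traffic-shifting schedule: a list of
    (minute, percent-on-new-version) steps ending at 100%. Modeled on
    CodeDeploy's linear deployment configurations; illustrative only."""
    if not 0 < step_percent <= 100:
        raise ValueError("step_percent must be in (0, 100]")
    schedule, shifted, minute = [], 0, 0
    while shifted < 100:
        shifted = min(100, shifted + step_percent)
        schedule.append((minute, shifted))
        minute += interval_minutes
    return schedule
```

In a real pipeline, each step would be gated on a CloudWatch alarm staying green, with a rollback to 0% if it fires.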
We've seen companies from fast-growing startups to large enterprises adopt and evolve strategies to optimize their application deployment to Amazon EC2. Some AWS customers perform in-place updates across their servers. Some perform blue-green deployments to newly provisioned servers. In this session, we share the advantages of each approach and talk about the scenarios in which you should choose one over the other. We also demonstrate how to perform auto-scaling and auto-rollback for deployments.
As Coursera has grown, both in traffic and engineering team size, we've put much more emphasis on site performance, reliability, and developer productivity. Solving these problems is more complicated than just scaling up the number of EC2 instances, though; it has required rethinking approaches from the ground up and using better tools for the job. In this session, Coursera's frontend infrastructure team walks you through how we leveraged AWS services, including ECS, CodeBuild, and ALBs, to improve site performance by 30% and reduce build times by 80%, all while cutting costs in the process.
Cisco's video solutions were historically designed for on-premises dedicated hardware deployments. Typically, major releases occurred annually or bi-annually. The release process lacked the ability to absorb frequent changes and adapt to rapid market trends. This session looks into how Cisco's IVP Solution team evolved a production system from its monolithic design into a microservices platform, leveraging cloud services, automated deployments, and delivery pipelines. Through this transition the team adopted a biweekly deployment cadence. This ultimately enabled a fast-paced migration to an AWS environment, using AWS services such as Amazon EC2, Amazon RDS, and Amazon Elasticsearch Service.
This example-based session educates you on how to develop cross-platform .NET Core applications on AWS. Through demos, we walk through how to deploy .NET Core applications using various AWS infrastructure services, including Amazon EC2 and AWS Elastic Beanstalk. Additionally, we showcase how to accelerate the release of your applications with AWS's CI/CD toolchain, using services such as AWS CodeCommit and AWS CodeBuild.
Using the DevOps model to treat your infrastructure environments as code enables you to automate and scale your development and production environments. Companies such as Puppet and Chef have built popular infrastructure automation solutions and have a thriving community interested in helping others succeed. AWS OpsWorks helps you succeed in using Puppet and Chef on AWS by removing the undifferentiated heavy lifting. In this session, discover how OpsWorks helps you focus on the core task of configuration management using Puppet and Chef, by setting up and maintaining your environment in just a few clicks.
There is a constant tension between empowering teams to be agile through autonomy and enforcing governance policies to maintain regulatory compliance. Hear from Nathan Scott, Senior Consultant at AWS, and James Martin, Automation Engineering Manager at 3M, on how they have achieved both autonomy and governance through self-service automation tools on AWS. Learn how to avoid pitfalls with building the CI/CD team and with right-sizing. This session also features a demo from Casey Lee, Chief Architect at Stelligent, on the tools used to accomplish this for 3M, including AWS Service Catalog, AWS CloudFormation, AWS CodePipeline, and Cloud Custodian, an open-source tool for managing AWS accounts.
Learn how Mapbox leveled up their Amazon ECS monitoring by using Amazon CloudWatch Events and custom metrics. We cover the events that kick off data collection, which enables our team to track the trillions of compute seconds happening every day on Mapbox's ECS clusters. The result of the data collection includes custom metrics and alarms used to inform stakeholders across Mapbox about detailed ECS usage, so development teams and finance alike can easily put a price tag on each container.
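The event-driven collection described above usually starts with a small handler that turns an ECS state-change event into a custom metric. A sketch: the event shape follows the "ECS Task State Change" CloudWatch Events format, while the metric name and dimensions are our own convention, not Mapbox's actual implementation.

```python
def task_stop_metric(event):
    """Turn an 'ECS Task State Change' CloudWatch Event into one MetricData
    entry suitable for cloudwatch.put_metric_data. The event fields follow
    the ECS event format; the metric naming is an illustrative convention."""
    detail = event.get("detail", {})
    if event.get("detail-type") != "ECS Task State Change" \
            or detail.get("lastStatus") != "STOPPED":
        return None  # only count tasks reaching STOPPED
    # Cluster ARNs end in .../cluster/<name>; keep just the name.
    cluster = detail.get("clusterArn", "unknown").split("/")[-1]
    return {
        "MetricName": "StoppedTasks",
        "Dimensions": [{"Name": "Cluster", "Value": cluster}],
        "Value": 1,
        "Unit": "Count",
    }
```

Deployed as a Lambda target of a CloudWatch Events rule, the returned entry would be batched into `put_metric_data(Namespace=..., MetricData=[...])`.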
Chaos Engineering is described as “the discipline of experimenting on a distributed system in order to build confidence in the system's capability to withstand turbulent conditions in production.” Going beyond Chaos Monkey, this session covers the specifics of designing a Chaos Engineering solution, how to increment your solution technically and culturally, the socialization and evangelism pieces that tend to get overlooked in the process, and how to get developers excited about purposefully injected failure. This session provides examples of getting started with Chaos Engineering at startups, performing chaos at Netflix scale, integrating your tools with AWS, and the road to cultural acceptance within your company. There are several different “levels” of chaos you can introduce before unleashing a full-blown chaos solution. We provide a focus on each of these levels, so you can leave this session with a game plan you can culturally and technically introduce.
Managing AWS and hybrid environments securely and safely while having actionable insights is an operational priority and business driver for all customers. Using SSH or RDP sessions could lead to unintended or malicious outcomes with no traceability. Learn to use Amazon EC2 Systems Manager to improve your security posture, automate at scale, and minimize application downtime for both Windows and Linux workloads. Easily author configurations to automate your infrastructure without SSH access, and control the blast radius of configuration changes. Get a cross-account and cross-region view of what's installed and running on your servers or instances. Learn to use Systems Manager to securely store, manage, and retrieve secrets. You can also run patch compliance checks on the fleet to react to malware and vulnerabilities within minutes, while still providing granular control to users with different privilege levels and full auditability. You will hear from FINRA, the Financial Industry Regulatory Authority, on how they use Systems Manager to safely manage their Enterprise environment.
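Running commands without SSH while controlling the blast radius, as described above, maps to Systems Manager's SendCommand with concurrency and error limits. A minimal sketch of the request; the parameter names follow the SSM SendCommand API, the tag name and values are hypothetical, and the builder itself is illustrative.

```python
def run_command_request(commands, target_tag, tag_values,
                        max_concurrency="10%", max_errors="1"):
    """Build the keyword arguments for ssm.send_command (boto3), targeting
    instances by tag and limiting blast radius via MaxConcurrency and
    MaxErrors. Parameter names follow the SSM SendCommand API; this
    builder is a sketch."""
    return {
        "DocumentName": "AWS-RunShellScript",
        "Parameters": {"commands": list(commands)},
        "Targets": [{"Key": "tag:%s" % target_tag,
                     "Values": list(tag_values)}],
        "MaxConcurrency": max_concurrency,  # at most this share at once
        "MaxErrors": max_errors,            # stop after this many failures
    }
```

With boto3 this would run as `ssm.send_command(**run_command_request(["uptime"], "Role", ["web"]))`, with every invocation audited through CloudTrail rather than an untraceable SSH session.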
Do you know how your applications will behave when things go wrong, either naturally or artificially? See how Expedia uses Amazon EC2 Systems Manager to perform automatic resilience tests as part of CI/CD pipelines, giving application owners confidence they are prepared for the worst.
In this session, you learn how to enable governance, compliance, operational, and risk auditing of your AWS account. Approaches discussed include a combination of continuous monitoring and assessing, auditing, and evaluating your AWS resources. With AWS management tools, you can view a history of AWS API calls for your various accounts, review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, determine your overall compliance against the configurations specified in your internal guidelines, and give developers and systems administrators a secure and compliant means to create and manage AWS resources.
Amazon.com enables all of its developers to be productive on AWS by operating across tens of thousands of team-owned AWS accounts, all while raising the bar on security, visibility, and operational control. Amazon has been able to achieve these seemingly conflicting ideals by automating the setup and management of these accounts at scale using AWS Management Tools such as CloudFormation, Config, CloudTrail, CloudWatch, and EC2 Systems Manager. In this session, discover more about how Amazon.com built ASAP using AWS Management Tools, and understand some of the decisions they made as their usage of AWS evolved over time. You will learn about the design, architecture, and implementation that Amazon.com went through as part of this effort.
DevOps is everywhere, but too often, people think they can buy “DevOps in a box” and just sprinkle some tools and automation over your broken or slow (or even super-fast AWS) stack. But we all know that software delivery is still hard. So what is this crazy DevOps thing, and why and how does it make things better? In this session, Jez and Nicole talk about what they've found working with dozens of organizations and conducting the largest DevOps research studies to date, covering over 23,000 data points across 2,000 organizations around the world. We start with the outcomes that companies care about: organizational performance, software delivery performance, and software quality. We then define what DevOps is, how you measure it, and how the best, most innovative teams and organizations are using it to drive improvements in performance and quality.
DEV346: NEW LAUNCH! Gain Operational Insights and Take Action on AWS Resources with AWS Systems Manager
Learn how to gain visibility and control of your infrastructure on AWS. AWS Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale.
Are you using AWS X-Ray to gather insights into your distributed applications and want to learn how to do more? Or are you still learning about the service and want to understand its full power and potential? This session is a deep dive into AWS X-Ray and how its APIs can be used to derive new and interesting insights. The session walks through the creation of a custom analytical application and shows code samples that will be available to all attendees. A short overview of the service is provided at the start, but those unfamiliar with AWS X-Ray are encouraged to attend an introductory session first.
In this session, Verizon and Stelligent demonstrate techniques and approaches on how to validate your security infrastructure during the development process through Continuous Security, and keep it that way through AWS Lambda auto-remediation. Verizon and Stelligent present a hands-on demo of these techniques, and a deep dive into the code that enables these technologies.
Deep Learning Summit
DLS01: Deep Learning Summit
Deep learning is having a profound impact on AI applications. With the future of neural network-inspired computing in mind, re:Invent is hosting the first-ever Deep Learning Summit. Designed for developers to learn about the latest in deep learning research and emerging trends, attendees will hear from industry thought leaders from the academic and venture capital communities, who will share their perspectives in 30-minute Lightning Talks. The Summit will be held on Thursday, November 30th, at the Venetian from 1-5pm. Talks include: The Deep Learning Revolution (Terrence Sejnowski, The Salk Institute for Biological Studies); Eye, Robot: Computer Vision and Autonomous Robotics (Aaron Ames & Pietro Perona, California Institute of Technology); Exploiting the Power of Language (Alexander Smola, Amazon Web Services); Reducing Supervision: Making More with Less (Martial Hebert, Carnegie Mellon University); Learning Where to Look in Video (Kristen Grauman, University of Texas); Look, Listen, Learn: The Intersection of Vision and Sound (Antonio Torralba, MIT); and Investing in the Deep Learning Future (Matt Ocko, Data Collective Venture Capital).
Did you know that there are over 300 AWS User Groups worldwide? Join this panel discussion featuring AWS community leaders from around the world, and learn the value of attending community-led AWS Meetups in your region. Community leaders share their experiences, talk through how local communities help developers solve problems and achieve their goals, and discuss the benefits of participating in peer-to-peer AWS knowledge sharing and networking activities. This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities.
The Open Guide to AWS is an open source writing project, which over the past year has become one of the most popular AWS resources on the web. It's both a written resource on GitHub, with over 100 contributors, and a large Slack group. Each has become a forum for trading practical knowledge not covered in standard documentation. In this session, we talk about the Guide and how it started, share lessons on seeding initial content, the editorial process, and how to foster a healthy extended community and encourage social engagement. This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities.
Interest in serverless technologies and architectures has been on an incredible upward swing since the introduction of AWS Lambda in 2014, and there has been an extraordinary amount of work done within the community to understand, explain, and evolve serverless architectures. This talk covers the evolution of serverless from the perspectives of the community. We discuss notable serverless trends developers ought to know about to be effective, share important advice and tips, and give examples from our experience building A Cloud Guru—a completely serverless learning management system. We also present a short case study on how CityWallet, a startup, went from 500 to 5,000 users by migrating to serverless architectures, and at the same time made their software faster, more secure, and more resilient. This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities.
When is the last time you explored the nooks and crannies of Amazon EC2? While you weren't looking, AWS leveled it up with more features and capabilities than you can shake a shell at. This talk explores the newest, shiniest Amazon EC2 features, including the Amazon EC2 Systems Manager and the Application Load Balancer, served with a fine selection of pro tips sourced from experts throughout the AWS community. This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities.
Developers and management can seem at cross purposes when one group looks at technologies and the other looks at organizational issues. Both groups are looking for ways to deliver value faster, leaner, and at less cost. There are technological avenues for accomplishing these goals, including DevOps and serverless architectures. However, these approaches also have organizational implications, as they change the nature and content of communication between teams. In this session, we cover the technology benefits and organizational transformations involved in DevOps and serverless architectures. This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities.
Compliance is necessary and a good thing. However, many compliant companies are still getting breached. In this talk, we discuss the importance of using a risk model to figure out the biggest threat to your business and mitigation and monitoring tactics to guard against these high-risk threats. We also dive into a real-world example of achieving Payment Card Industry Data Security Standard (PCI-DSS) compliance in under a year; we share architecture and design patterns; and we discuss what worked and what didn't. Leave this session knowing what the top cloud attack vectors are and how to protect yourself by using AWS services to build a fully automated, highly flexible and secure environment. This session is part of the re:Invent Developer Community Day, six community-led sessions where AWS enthusiasts share technical insights on trending topics based on first-hand experiences and knowledge shared within local AWS communities.
Favorable economics are the starting point for a compelling business case to move to the cloud, but it is only part of the total picture. The cloud can provide benefits in additional areas such as technology optimization, cost of change, and business value. In this session, you will learn a framework and the tools available to create a compelling business case for a large-scale migration to AWS.
With VMware Cloud on AWS, not only can you consume VMware products on AWS, but you can also leverage AWS native services from virtual machines running within VMware Cloud on AWS. In this session, learn about the integrations we are preparing and how you can leverage the best of both VMware and AWS for your environment. Session sponsored by VMware
Many organizations are awash in different types of data, yet they struggle to rapidly use these assets to improve operations and benefit customers. In this session, we explore best practices from deployments at leading global organizations that have unlocked value with C3 IoT and AWS. We also address how the C3 IoT platform has pre-integrated over 40 AWS services, enabling developers and data scientists to build and deploy enterprise scale big data, AI, and IoT applications in one-tenth of the time and cost. Session sponsored by C3 IoT, Inc.
Cloud is the new normal: it continues to deliver amazing new technologies and drive us to innovate our operational models. Cloud also gives us new capabilities to see the tremendous financial impact to our businesses. As you journey into the cloud, your teams will need to wield the cloud's power responsibly. They will need more visibility into the financial impact of their resource usage in order to make the best decisions for today and for the longer term. Join Cloudability and HERE Technologies as we guide you through the journey to financial agility in the cloud as we have seen it play out in hundreds of customers. You'll walk away with the steps you need to take and the mindset your teams will need to adopt in order to achieve financial agility. Topics include: discovering and validating your goals (needs, requirements, responsibilities); analyzing and closing the gaps between your goals and your reality; developing a new operational model that provides financial agility and predictability; and a HERE Technologies case study. Session sponsored by: Cloudability
Docker makes it easy to package and launch code onto a virtual machine. But once you scale your container across multiple machines, or even multiple AWS Regions, how do you efficiently manage container traffic, resource utilization, security, and code changes? In this session, we feature best practices and real-world examples of customers who deployed containerized apps at scale. We include strategies for maximizing cost efficiency across various traffic patterns and implementing a granular access control mechanism for your container infrastructure.
For many organizations, a perceived lack of cloud skills in their staff can limit their move to the cloud. Proper training of your engineers and developers can speed the pace of adoption, cloud migration, and delivery of business benefits by effectively operating the AWS Cloud. In this session, we discuss field-proven, prescriptive steps for reskilling and scaling your technical teams so that you can use the AWS Cloud securely, efficiently, and effectively.
Migrating to the cloud provides an opportunity to reinvent your organization's operations and the management of your IT landscape. In this session, we discuss how to evaluate your organizational readiness for the cloud and how to develop foundational capabilities before the migration. We also review key considerations developed by AWS Professional Services to help organizations prepare for a migration at scale through the Migration Readiness Assessment (MRA) and Migration Readiness and Planning (MRP) programs.
We've partnered with hundreds of customers on their large-scale migrations to AWS. This session outlines some of the common challenges that our customers face and how they've overcome these challenges. The session also describes the patterns we've observed that make legacy migrations successful, and the mechanisms we've created to help customers migrate faster.
In this session, Encirca Services by DuPont Pioneer discusses how they performed a lift-and-shift migration from their on-premises data center to AWS in less than six months. First, they cover how they aligned organizational stakeholders to prepare for the migration. Then, they discuss strategies used to increase the pace of their mass migration. Finally, they talk about actions taken after the migration to measure success and solicit feedback from customers.
Hess Corporation is a leading global independent energy company engaged in exploration for and production of crude oil and natural gas. Early in Hess's journey to the cloud, they operated the AWS platform in a manner similar to how they operated their on-premises data centers, creating a number of challenges. In this session, Hess Corporation discusses how they worked to further optimize their use of the AWS Cloud following their data center migration. They also cover technical strategies implemented to improve security, governance, and financial reporting and examine changes to their corporate culture that encourage innovation while improving cost controls.
C.H. Robinson, Intuit, and Scopely hold a fireside chat with Redis Labs' CMO to discuss the challenges of deploying large-scale applications that need to support personalized user experiences based on real-time insights. The architects from these companies share how Redis Enterprise operates as the primary datastore and tackles the issues of maintaining consistency in geo-distributed deployments, ingests massive amounts of data while executing hybrid transactions-analytics functions, and balances workloads between RAM and SSDs while performing hundreds of thousands of operations per second with sub-millisecond latency. In this business-technical session, you learn about diverse use cases and capabilities of Redis, a highly popular in-memory NoSQL database, including job and queue management, machine learning, streaming, search, geo-spatial indexing, fast data ingest, and high speed transactions. Session sponsored by Redis Labs
Leading Edge Forums (LEF) has labelled the synergistic combination of cloud computing and machine intelligence (MI) as ‘the Matrix': the combination of cloud services such as IaaS, IoT, MI, and edge computing. For companies to thrive, they need to know the answers to the following questions: How are successful companies harnessing the power of the Matrix? How do they structure their organizations? What makes them so agile? How do they attract and retain skilled employees? LEF studies successful businesses and learns what makes them great. Our 6-month research program has dived deep with multiple AWS customers to understand not only their use of the technology, but also the business transformation program that allowed them to maximize the value that AWS provides. Attend this session to learn more about the research that has been done, client examples, observations that the LEF has made and how this can be used to help drive your transformation program. Session sponsored by DXC Technology
Compass Group, better known for its brands (Bon Appetit, Eurest, Canteen, and Wolfgang Puck, among others), is one of the world's largest foodservice companies, serving millions of people daily and employing over 500,000 associates across all continents. Compass Group relies on SAP to run their business, from supply chain and financials to payroll. Compass Group started deploying their SAP Landscape on AWS a few years ago, first in India and LATAM, where they deployed SAP Business Suite on HANA. Subsequently, they moved their Japan SAP Landscape on Microsoft SQL to AWS. Last year, they migrated both their EMEA SAP Business Suite on Oracle as well as their UK SAP Business Suite on DB2 to AWS. In this session, Compass shares the details of this journey, the benefits they gained running SAP on AWS, and the lessons they learned. They also discuss how they integrated their different SAP Landscapes with other legacy systems, how they architected for resiliency, and what's next in their journey.
As a China-based global technology company that is helping some of the world's largest energy providers transition into renewable energy, Envision Energy is leading a digital disruption of the traditional energy system. In this session, Envision discusses how they used the AWS Cloud to create a technology infrastructure that connects and orchestrates millions of smart energy devices around the globe for their Energy IoT platform. They also review how AWS is used to host Envision's core systems, including SAP and Citrix.
Amazon MQ is a new managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Amazon MQ manages the work involved in setting up an ActiveMQ message broker, from provisioning the infrastructure to installing the software and performing ongoing maintenance. It supports industry-standard messaging APIs and protocols, so you can switch from any message broker to Amazon MQ without rewriting the supported applications. This session provides an overview of how Amazon MQ makes enterprise messaging and migration more manageable. You'll learn how you can use Amazon MQ to launch a production-ready message broker in minutes. Guest speakers from Volvo WirelessCar and GE will share how they're using messaging in their own applications and systems, and the advantages enabled by a managed ActiveMQ service on AWS.
This time, artificial intelligence is here to stay. For the enterprise, AI materializes into solutions that improve customer experiences by optimizing, automating, and personalizing high-volume tasks while lowering cost and time to market, thereby accelerating innovation. In this session, we cover the AWS AI products and services that enable innovation in the enterprise while maintaining compliance with regimes such as HIPAA and PCI. Finally, we discuss enterprise architectures on AWS for machine learning and deep learning workloads.
The cloud offers a first-in-a-career opportunity to constantly optimize your costs as you grow and stay on the bleeding edge of innovation. By developing a cost-conscious culture and assigning the responsibility for efficiency to the appropriate business owners, you can deliver innovation efficiently and cost effectively. This session reviews a wide range of cost planning, monitoring, and optimization strategies, featuring real-world experience from AWS customers.
VMware Cloud on AWS allows your teams to migrate existing assets to the AWS Cloud quickly by using tools you are already familiar with. VMware Cloud on AWS brings VMware's enterprise-class Software-Defined Data Center software to Amazon's public cloud, delivered as an on-demand, elastically scalable service that is sold, operated, and supported by VMware for any application and optimized for next-generation, elastic, bare-metal AWS infrastructure. This solution enables customers to use a common set of software and tools to manage both their AWS-based and on-premises vSphere resources consistently. This session uses practical, real-world customer deployment examples to dive deep on hybrid cloud network connectivity, data protection best practices, and AWS native service integrations. Attendees will walk away with practical guidance and tips on getting the best of both worlds with the VMware and AWS hybrid cloud solution.
VMware Cloud on AWS brings VMware's enterprise-class Software-Defined Data Center software to the AWS Cloud, and enables customers to run production applications across vSphere-based private, public, and hybrid cloud environments. Delivered, sold, and supported by VMware as an on-demand service, it also lets customers leverage AWS services including storage, databases, analytics, and more. With the same architecture and operational experience on-premises and in the cloud, IT teams can now derive instant business value from the AWS and VMware hybrid cloud experience. Session sponsored by VMware
When migrating lots of applications to the AWS Cloud, it's important to architect cloud environments that are efficient, secure, and compliant. Landing zones are a prescriptive set of instructions for deploying an AWS-recommended foundation of interrelated AWS accounts, networks, and core services for your initial AWS application environments. In this session, we will review the benefits and best practices for developing landing zones as well as how to incorporate them into your migration process.
With cloud maturity come operational efficiencies and endless potential for innovation and business growth. It is critical to have a well-thought-out strategy for governing your cloud infrastructure. Visibility, accountability, and actionable insights are among the invaluable considerations. The AWS Cloud enables convenience and cost savings for organizations that know how to leverage its potential. Amazon EC2 Reserved Instances, in particular, present a tremendous opportunity to save significantly on capacity as you scale. But there are many considerations involved in fully reaping the benefits. CloudCheckr CTO Patrick Gartlan presents issues that every organization runs into when scaling, provides best practices for how to combat them, and helps you show your boss how Reserved Instances can help your organization save money and move faster. Session sponsored by CloudCheckr
This session is aimed at those who want to learn how a large enterprise organization adapted their teams, tooling, and methodology to successfully migrate and operate business critical functions in the cloud. You also learn how the organization used migration as a launchpad for additional innovation. Session Sponsored by Cloudreach
Governing cloud infrastructure at scale requires software that enables you to capture and drive management from internal policies, best practices, and reference architectures. A policy-driven management and governance strategy is critical to successfully operating in cloud and hybrid environments. As infrastructure grows, you might leverage knowledge that extends beyond the organization. An open-source “cloud policy framework” enables users to leverage a community that can help define and tune best-practice policies, and help SaaS vendors and ISVs capture the best way to manage an application and share it with customers. A well-defined management and governance strategy enables you to put automation in place that keeps your cloud running securely and efficiently without having to take it on as a full-time job. This session discusses the development of a “cloud policy framework” with open-source rule definitions that organizations can use to govern their cloud. Learn best-practice policies for managing all aspects of services, applications, and infrastructure across cost, availability, performance, security, and usage. Session sponsored by CloudHealth Technologies
The science of saving on AWS has undergone some major revolutions in the past year. New services, Reserved Instance options, increased flexibility, and discount programs have all created hundreds of new opportunities to optimize your usage and spending. We've also seen a significant uplift in operational maturity that is increasing the overall savings to be had while limiting operational risk. The most mature organizations are now using a particular set of metrics to drive their cost optimization activities as part of a repeatable feedback loop. Join Cloudability's J.R. Storment and Atlassian's Mike Fuller as they walk you through this changing landscape and how it impacts the decisions your team makes every day. Topics include: cost avoidance and savings metrics to drive your optimization initiatives; data points and techniques for making effective rightsizing decisions; maximizing coverage and flexibility with ISF and convertible Reserved Instances; and lessons from the front lines with Atlassian's Mike Fuller. Session Sponsored by: Cloudability, Inc.
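The coverage and savings metrics mentioned above reduce to simple arithmetic over usage data. As a hedged illustration (not material from the session, and with made-up hourly rates), they might be computed like this:

```python
# Illustrative sketch: two common Reserved Instance cost metrics.
# The rates and hour counts below are invented for the example.

def ri_coverage(ri_hours: float, total_hours: float) -> float:
    """Fraction of instance-hours covered by Reserved Instances."""
    return ri_hours / total_hours if total_hours else 0.0

def ri_savings(ri_hours: float, on_demand_rate: float, effective_ri_rate: float) -> float:
    """Dollars saved versus paying the on-demand rate for the same hours."""
    return ri_hours * (on_demand_rate - effective_ri_rate)

# Example: 7,000 of 10,000 instance-hours covered; $0.10/hr on-demand vs. $0.062/hr RI.
coverage = ri_coverage(7000, 10000)
savings = ri_savings(7000, 0.10, 0.062)
print(f"coverage={coverage:.0%} savings=${savings:.2f}")
```

Tracking these two numbers over time is the core of the "repeatable feedback loop" the abstract describes: coverage tells you how much usage is discounted, and savings quantifies the result.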
Setting a goal for your teams to move a large number of workloads to AWS in a short period of time can be a great way to motivate teams to migrate quickly. Cardinal Health created a migration factory composed of teams, tools, and processes that streamlined the movement of workloads from on-premises to AWS. In this session, hear from Cardinal Health about how they used a migration factory to successfully move thousands of applications to the AWS Cloud. In addition, learn best practices for creating an effective migration platform and process in your organization.
When migrating a large number of workloads to AWS, tracking progress across the various applications and services involved can distract your team from core migration activities. In this session, learn how AWS Migration Hub provides a single place to discover your existing servers and track the status of each application migration. It provides you with better visibility into your application portfolio and streamlines migration tracking, at no additional cost beyond the services you use.
Learn how to take advantage of AWS for disaster recovery. In this session, we examine how traditional disaster recovery concepts can be adapted to the cloud. We also explore ways to cost-effectively reinvent disaster recovery, so it can extend to applications and workloads that have never had it before. This session walks you through tiered technology approaches to apply as part of a disaster recovery strategy that aligns costs to intended business outcomes.
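The tiered approaches the session refers to are commonly described as backup and restore, pilot light, warm standby, and multi-site. As a rough sketch (the thresholds below are illustrative, not prescriptive), choosing a tier from recovery objectives might look like:

```python
# Hypothetical sketch: mapping recovery time/point objectives (RTO/RPO) to
# the four DR approaches commonly discussed for AWS. Thresholds are invented.

def choose_dr_tier(rto_minutes: float, rpo_minutes: float) -> str:
    target = max(rto_minutes, rpo_minutes)  # the stricter objective governs
    if target >= 24 * 60:
        return "backup-and-restore"  # hours to days: restore from Amazon S3/Glacier
    if target >= 60:
        return "pilot-light"         # core systems replicated, scaled up on failover
    if target >= 5:
        return "warm-standby"        # scaled-down full stack always running
    return "multi-site"              # active-active across regions

print(choose_dr_tier(rto_minutes=2880, rpo_minutes=1440))  # backup-and-restore
print(choose_dr_tier(rto_minutes=10, rpo_minutes=5))       # warm-standby
```

The design point is the cost alignment mentioned in the abstract: each step up the tiers trades higher standing cost for lower recovery time.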
In this session, learn how you can enable governance, compliance, and operational and risk auditing of your AWS account through a combination of continuous monitoring, auditing, and evaluation of your AWS resources. With AWS management tools, you can see a history of AWS API calls for your account, review changes in configurations and relationships among AWS resources, and dive into detailed resource configuration histories. You can determine your overall compliance with the configurations specified in your internal guidelines, and you can give developers and systems administrators a secure and compliant means to create and manage AWS resources.
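The continuous-evaluation idea can be sketched in a few lines: recorded resource configurations are checked against internal guidelines, in the spirit of AWS Config rules. The resource records and rule names below are invented for illustration:

```python
# Illustrative sketch (not a real AWS Config rule set): evaluating recorded
# resource configurations against hypothetical internal guidelines.

def evaluate(resource: dict, rules: list) -> list:
    """Return the names of rules the resource violates."""
    return [name for name, check in rules if not check(resource)]

RULES = [
    ("s3-bucket-not-public", lambda r: r.get("acl") != "public-read"),
    ("ebs-volume-encrypted", lambda r: r["type"] != "ebs" or r.get("encrypted", False)),
]

bucket = {"type": "s3", "acl": "public-read"}
volume = {"type": "ebs", "encrypted": True}
print(evaluate(bucket, RULES))  # ['s3-bucket-not-public']
print(evaluate(volume, RULES))  # []
```

In practice the resource records would come from a configuration history service rather than literals, but the shape of the check is the same: rules in, violations out, alerts on any non-empty result.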
In this session, we explore multi-account considerations for compliance and auditing. We include topics such as API call prefiltering, a repeatable approach to SCP and IAM policy creation, internal separation of duty and need to know, compliance scope ring-fencing, scope of impact limitation, and mandatory access control. We review approaches for log and event analytics and log record lifecycle management (including redaction where necessary) and alerting. We also discuss how you can deploy compliance assessment tools in multi-account environments and how you can interpret these tools' output so it makes sense. Finally, no set of detailed multi-account sessions is complete without discussing tools for visualization.
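A repeatable approach to SCP creation usually means generating policy documents from parameters rather than hand-writing them per account. As a hypothetical example (the region list and Sid are placeholders), a generator for a region-restriction SCP might look like:

```python
# Hypothetical sketch: emitting a service control policy (SCP) document that
# denies actions outside an approved region list. All values are placeholders.
import json

def region_restriction_scp(allowed_regions):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
            },
        }],
    }

policy = region_restriction_scp(["us-east-1", "eu-west-1"])
print(json.dumps(policy, indent=2))
```

Generating policies this way supports the scope-of-impact limitation the abstract mentions: one tested template, parameterized per organizational unit, rather than many divergent hand-edited documents.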
When you plan your data center migration to the cloud, it's critical to consider how workloads will run to ensure maximum performance and availability. With Microsoft applications making up 60% or more of most on-premises data centers, more and more customers are moving their Microsoft workloads to AWS to improve performance, increase availability, and improve their security posture. In this session, we discuss how customers are accomplishing this task. We cover the common architectural patterns for moving Microsoft applications such as SQL Server, SharePoint, and Dynamics, and we discuss how to integrate with Active Directory for a seamless transition. We also examine tools like AWS Application Discovery Service and AWS Server Migration Service that can help make your move faster and easier.
Oracle enterprise applications and middleware such as E-Business Suite, PeopleSoft, Siebel, and WebLogic are central to many IT departments. They often require complex deployments that can greatly benefit from the flexibility, scalability, and security of the cloud. In this session, we discuss architecture patterns and best practices for migrating these applications to and running these applications on AWS. We cover how to work with Oracle enterprise applications and multiple services including Amazon RDS, AWS Database Migration Service, Amazon Elastic File System, and AWS CloudFormation. As part of this, we show examples of successful customer deployments.
Databases continue to grow to be multiple terabytes in size, but migrating to the cloud doesn't have to take days or create disruption for your business. To perform data migration at petabyte scale with minimal impact to your business, you can now use the new combination of AWS Database Migration Service replication agents and AWS Snowball. In this session, we discuss how to extract large-scale data from an on-premises Oracle database and migrate it to Amazon Aurora. We then outline a step-by-step process for converting your Oracle schema to a PostgreSQL-based schema.
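One small piece of the Oracle-to-PostgreSQL conversion step is mapping column types between the two engines. This toy sketch is illustrative only; real conversions (for example, with the AWS Schema Conversion Tool) handle far more cases:

```python
# Toy sketch of schema conversion: mapping common Oracle column types to
# PostgreSQL equivalents. The mapping is deliberately incomplete.

ORACLE_TO_POSTGRES = {
    "VARCHAR2": "varchar",
    "NUMBER":   "numeric",
    "DATE":     "timestamp",
    "CLOB":     "text",
    "BLOB":     "bytea",
}

def convert_column(name, oracle_type, length=None):
    pg_type = ORACLE_TO_POSTGRES.get(oracle_type.upper(), oracle_type.lower())
    if length and pg_type == "varchar":
        pg_type = f"varchar({length})"
    return f"{name} {pg_type}"

print(convert_column("trade_id", "NUMBER"))      # trade_id numeric
print(convert_column("symbol", "VARCHAR2", 12))  # symbol varchar(12)
```

Type mapping is only the start; the harder work in a real migration is converting procedural code, sequences, and constraints, which is why tooling and a step-by-step process matter.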
Financial Industry Regulatory Authority (FINRA)'s Technology Group has changed its customers' relationship with data by creating a managed data lake that enables discovery on petabytes of capital markets data, while saving time and money over traditional analytics solutions. FINRA's managed data lake unlocks the value in its data to accelerate analytics and machine learning at scale. The data lake includes a centralized data catalog and separates storage from compute, allowing users to query petabytes of data in seconds. Learn how FINRA uses Spot Instances and services such as Amazon S3, Amazon EMR, Amazon Redshift, and AWS Lambda to provide the right tool for the right job at each step in the data processing pipeline. All of this is done while meeting FINRA's security and compliance responsibilities as a financial regulator.
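Separating storage from compute typically relies on a partitioned object layout in Amazon S3, so that query engines read only the partitions they need. As a hedged sketch (bucket, dataset, and partition columns are invented), a Hive-style key might be built like this:

```python
# Illustrative sketch: Hive-style partition paths in S3, so engines such as
# Amazon EMR can prune partitions at query time. Names are made up.
from datetime import date

def partition_key(dataset, d, market):
    return (f"s3://example-datalake/{dataset}/"
            f"year={d.year}/month={d.month:02d}/day={d.day:02d}/market={market}/")

print(partition_key("trades", date(2017, 11, 27), "NYSE"))
# s3://example-datalake/trades/year=2017/month=11/day=27/market=NYSE/
```

With this layout, a query filtered to one day and one market touches a handful of objects instead of petabytes, which is what makes second-scale queries over a huge archive plausible.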
IT organizations today need to support a modern, flexible, global workforce and ensure their users can be productive from anywhere. Moving desktops and applications to AWS offers improved security, scale, and performance, with cloud economics. In this session, we provide an overview of Amazon WorkSpaces and Amazon AppStream 2.0, and talk through best practices for moving your end-user computing to AWS. We also dive deep into Amazon AppStream 2.0, and demonstrate some of the newest capabilities, including Microsoft Active Directory integration, single sign-on with SAML 2.0, and new graphics instances.
Cox Automotive provides digital solutions that transform how the world buys, sells, and owns cars. Cox is currently engaged in a multiyear effort to migrate the bulk of its applications from physical data centers to AWS, including client-facing SaaS applications and large, consumer-facing websites. In this session, they discuss their learnings on how to effectively migrate large, service-based architectures to AWS while minimizing the impact to customers. They also share lessons learned for conducting organizational change at scale and creating a culture of self-service.
Maintaining control of sensitive data is critical in the highly regulated financial investments environment that Vanguard operates in. This need for data control complicated Vanguard's move to the cloud. They needed to expand globally to provide a great user experience while at the same time maintaining their mainframe-based backend data architecture. In this session, Vanguard discusses the creative approach they took to decouple their monolithic backend architecture to empower a microservices architecture while maintaining compliance with regulations. They also cover solutions implemented to successfully meet their requirements for security, latency, and end-state consistency.
With serverless computing, you can build and run applications without the need for provisioning or managing servers. Serverless computing means that you can build web, mobile, and IoT backends, run stream processing or big data workloads, run chatbots, and more. In this session, learn how to get started with serverless computing with AWS Lambda, which lets you run code without provisioning or managing servers. We introduce you to the basics of building with Lambda. As part of that, we show how you can benefit from features such as continuous scaling, built-in high availability, integrations with AWS and third-party apps, and subsecond metering pricing. We also introduce you to the broader portfolio of AWS services that help you build serverless applications with Lambda, including Amazon API Gateway, Amazon DynamoDB, AWS Step Functions, and more.
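The basics of building with Lambda start with a handler function that the service invokes with an event and a context. A minimal Python handler, invoked locally the way a unit test might (the API Gateway-style event shape below is a simplified assumption), looks like:

```python
# Minimal AWS Lambda-style handler in Python, invoked locally for illustration.
# In Lambda, the service calls handler(event, context); the event shape here
# is a simplified API Gateway proxy event, assumed for the example.
import json

def handler(event, context=None):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello, {name}"})}

# Local invocation, as you might do in a unit test:
resp = handler({"queryStringParameters": {"name": "re:Invent"}})
print(resp["statusCode"], resp["body"])
```

Because the handler is a plain function, it can be tested locally without any infrastructure; Lambda adds the scaling, availability, and metering described above around it.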
Prime Day is Amazon's global shopping event exclusively for Amazon Prime members, offering great deals, events, and exclusive content. Running an event of this magnitude on the AWS Cloud offers many unique challenges. The Infrastructure Readiness Team for Prime Day 2017 focused on helping Amazon teams realize operational and executional efficiencies so they could reduce costs and ensure event readiness. Join us for an overview and Q&A of lessons learned from our largest event, including event readiness, how to support infrastructure events at scale, communication planning, building escalation paths with unknown issues, and more.
From 2014 through 2017, TomTom successfully migrated its major business systems to the AWS Cloud. This migration helped TomTom's vision of real-time mapmaking become a reality, and it put TomTom significantly ahead of the competition in the domain of location technology. In this session, we explore the practical aspects of migrating to the cloud, including technological challenges as well as the necessary shifts in mindset to successfully get us through the migration journey. We discuss how the migration was done gradually due to huge risk exposure, all while maintaining 24/7 system uptime. We explore how the expected benefits of the cloud came to life, and we elaborate on the unexpected benefits, such as cost of ownership and increased awareness in the teams, as well as upper-stack benefits and the ability to fit services and hardware to demand while scaling the system—benefits that we could not have realized with on-premises infrastructure.
As GE Transportation moved their applications to the cloud, they faced operational challenges with monitoring their applications and platforms for availability, performance, and compliance. To address their exact requirements, GE developed a cost effective, scalable monitoring and alerting solution based on fully-managed AWS services. In this session, GE Transportation reviews this capability and discusses its process for implementing the solution. Attendees also learn reusable design patterns using AWS managed services that can be applied in a self-service model to scale efficiently across the enterprise.
When critical business applications move to the AWS Cloud, the business needs to be assured that applications will migrate rapidly and that performance will be as good as or better than on-premises. This session covers a proven solution to evaluate, move, and compare migrated applications and ensure that they meet user expectations. The session also covers how to monitor and intelligently remediate applications on an ongoing basis, so the user experience is consistent and applications can scale and heal accordingly. You see Cisco CloudCenter in action, along with discovery and third-party migration tools used to understand applications and move them to AWS. With AppDynamics and CloudCenter working together, you can see before-and-after examples of a business application running as well as or better than when on-premises. We also share advanced use cases of AppDynamics, providing user experience analytics and directing CloudCenter to scale applications. Session sponsored by Cisco
Energy & Utilities
With increasing competition and shrinking budgets across the industry, oil and gas companies around the world are looking to optimize their oil well production, knowing that even single-digit efficiency gains could have a financial impact in the range of hundreds of millions of dollars. To do so, complex reservoir models and other compute-intensive simulations need to be run on the fly, requiring more compute and storage resources than ever before. In this session, you will learn how customers are running complex reservoir simulations in a scalable and cost-effective way using Spot Instances and AWS HPC technologies such as CfnCluster, EnginFrame, and DCV, as well as the myHPC cluster management solution. We will share reference architectures and provide best practices and considerations for instance and storage selection to optimize performance and minimize cost for the most demanding HPC workloads.
With geoseismic datasets that are petabytes in size and growing, finding tomorrow's energy is increasingly data and compute intensive. Hess Corporation, a global energy company, needed to be able to respond quickly to changing oil market demands while minimizing costs. By migrating petabytes of data and running high performance computing (HPC) workloads on AWS, Hess reduced compute costs and shortened the time in which geologists received results. In this session, you will learn how Hess built a geoseismic data repository on AWS by leveraging Amazon S3 and Amazon EFS, and how it processes that data by building HPC clusters on demand using the GPU-enabled P2 instance family. Additionally, you will learn how the Hess subsurface computing team was able to move from running on-premises, capex-driven GPU clusters to an opex-driven, on-demand model in the AWS Cloud.
Supervisory Control and Data Acquisition (SCADA) systems are critical real-time software applications used to manage nearly any form of upstream, midstream, and downstream processes in the energy industry. Traditionally, these technologies have been deployed on premises and managed separately from core IT, to ensure security, availability and consistent performance. As energy and utility companies expand geographically, and the number and types of sensors in each location grow, disparate and growing data streams are becoming increasingly complex and challenging to manage. It is estimated that up to 95% of valuable device and sensor information is left stranded in the field, information that could prove valuable to machine learning, predictive analytics, and process optimization. In this session, energy and utility customers will learn how easy it is to implement IIoT on AWS, so they can easily extract value from additional devices and sensors, and innovate faster. We will dive into a reference architecture for accessing current mission critical SCADA data as well as previously stranded data into AWS using Kinesis and DynamoDB, ultimately enabling customers to reduce downtime, increase efficiencies, improve reliability, and gain more business insights through connected data.
What if your utilities company could fix your hot water service before you knew it was broken, or introduce novel pricing models to improve global sustainability? Centrica, a global utility company with notable brands like British Gas, is a market leader in connected home products that help customers manage their energy use. With millions of customers and thousands of device installations a week, the business was outgrowing their on-premises data center despite ongoing investments, so they needed a reliable and elastic architecture that could quickly scale to meet demand. They also needed an agile and compliant IoT platform to manage the explosion of data resulting from more customers, more devices, and more sensors. With AWS IoT, they can focus on delivering better customer experiences while generating valuable business insights to optimize energy usage, reduce costs, and enable global sustainability. In this session, participants learn how Centrica seamlessly migrated to AWS IoT, and how they are modernizing their platform to deliver the future of energy.
At AWS, security is job zero. Our infrastructure is architected for the most data-sensitive financial services companies in the world. We have worked with global enterprises to meet their respective security requirements and have learned that there are best practices and pitfalls to avoid. In this session, we provide a guided tour of governance patterns to avoid – ones that may seem logical at first, but that actually impede your ability to scale and realize business agility. We also cover best practices, such as setting up key preventative and detective controls for implementing 360 degrees of security coverage, practicing DevSecOps on a massive scale, and leveraging AWS services (such as Amazon VPC, IAM, Amazon EMR, Amazon S3, Amazon CloudWatch, and AWS Lambda) to meet the strictest and most robust enterprise security requirements.
For many securities organizations, post-trade processing is expensive, cumbersome, and time-consuming. This is in part due to the massive volumes of data required for processing a trade and the limited agility of the technology on which many organizations rely today. In order to create efficiencies and move faster, many financial services organizations are working with AWS to implement post-trade solutions built with AWS storage services (Amazon S3 and Amazon Glacier) and big data capabilities (Amazon Athena, Amazon EMR, Amazon Redshift, and Amazon QuickSight). In this session, we walk through a trade capture and regulatory reporting solution that uses the aforementioned AWS services. We also provide guidance around obtaining data-driven insights (from pixels to pictures); bolstering encryption with AWS KMS; and maintaining transparency and control with Amazon CloudWatch and AWS CloudTrail (which also helps meet SEC Rule 613, which requires the creation of comprehensive consolidated audit trails).
Financial institutions today must manage multiple data types from a wide variety of sources. Among these various data types, archive data presents a particular challenge: it is invisible to much of the organization and not easily leveraged by the lines of business for analytics, insight, and product innovation. Faced with massive volumes of archive data, financial services organizations are finding that delivering insights in a timely manner requires a data storage and analytics solution with more agility and flexibility than traditional data management systems can provide. In this session, we will discuss a design pattern that (1) brings this data into a queryable archive within AWS that is more highly available and lower-cost than what you currently have and (2) migrates that data to a data lake that the entire organization can use to extract insight and drive innovation. We will walk through a strategy that addresses the following topics: storing archive data in compressed, cost-effective, and readily available formats; creating lifecycle policies to archive older data sets and make them easily accessible; fully utilizing the features of object storage to enrich the data lake; and applying AWS analytics tools to gather business insights.
Many financial institutions want to provide greater autonomy to their developers but find themselves hemmed in by intricate legacy processes and centralized IT teams that own different parts of the infrastructure and application stack. For Barclays PLC, the solution to this impasse lay in automating the management of hundreds (and, soon, thousands) of accounts, granting developers access to AWS technology directly via AWS Console or API while conforming to bank policy. Key to Barclays' strategy is “Persephone,” a system that creates accounts, sets policy, defines which services are enabled in the account, and runs Lambda functions to ensure continuous compliance. Persephone relies on services including CloudTrail, CloudWatch Events, IAM, KMS, and others to balance control and productivity. Barclays is also working to introduce automated reasoning testing and assurance techniques made available by the Automated Reasoning Group at AWS. Attendees of this session will gain an understanding of the mindset, skillset, and toolset shifts that a diversified global financial institution must make to meet the control objectives of the present while ensuring the agility to accommodate technology change of the future.
The Bank of Nova Scotia is using deep learning to improve the way it manages payments collections for its millions of credit card customers. In this session, we will show how the Bank of Nova Scotia leveraged Amazon EC2 Container Service, Amazon EC2 Container Registry, and Docker to streamline its deployment pipeline. We will also cover how the bank used AWS IAM and Amazon S3 for asset management and security, as well as GPU-accelerated AWS instances and TensorFlow to develop a retail risk model. We will conclude the session by examining how the Bank of Nova Scotia was able to dramatically cut costs in comparison to on-premises development.
How do you get your security and compliance team to embrace the cloud? "Getting to Yes" with Vanguard's Security, Legal, and Compliance Teams was a key factor to the organization's journey to the cloud. Maintaining a high level of assurance is solvable when using an iterative, agile approach. Vanguard is taking existing on-premises controls, plus cloud frameworks such as NIST, CSA, etc., to develop the right set of cloud controls that provide maximum security without sacrificing business agility. In this session, we cover: Vanguard's approach to developing appropriate controls for its cloud deployments; key considerations and best practices when implementing controls; leveraging the AWS Cloud Adoption Framework and the four security perspectives to map controls appropriately; and the various AWS services (IAM, Amazon VPC, AWS KMS, and AWS CloudTrail) that we leveraged. We also cover the iterative and agile approach we are taking by embracing DevSecOps principles.
FINRA's analytics platform unlocks the value in capital markets data by accelerating trade analytics and providing a foundation for machine learning at scale. The platform enables FINRA's analysts to perform discovery on petabytes of trade data to identify instances of potential fraud, market manipulation, and insider trading. By centralizing all data in S3, FINRA's architecture offers improved agility, scalability, and cost effectiveness. Analytics services such as Amazon EMR and Amazon Redshift have freed FINRA's data scientists from the constraints of desktop tools, allowing them to apply machine learning techniques to develop and test new surveillance patterns. All of this is done while meeting FINRA's security and compliance responsibilities as a financial regulator. At the end of this session, you'll have an understanding of how to apply FINRA's architecture to trade analytics and other financial services use cases, including meeting regulatory requirements such as the Consolidated Audit Trail (CAT) reporting.
Many enterprises that follow regulated, process-driven workflows would like to take advantage of the innate features and benefits of AWS to become more agile, achieve operational excellence, and accelerate time-to-market while leveraging a DevOps culture and development methodology. But building a mature DevOps capability doesn't happen overnight. Creating and implementing testing, compliance, and security automation frameworks requires time and organizational and process changes. Financial institutions are addressing this challenge by using AWS Service Catalog to help bridge the gap between traditional operations and true DevOps.
For years, Riot Games deployed to their own private data centers across the globe to meet the growing demands of their game, League of Legends. The last seven years saw explosive growth in new data centers worldwide, along with a great deal of technical debt. This is Riot Games' story of how they overcame their technical debt by taking the League of Legends platform into the AWS Cloud. AWS services gave Riot Games the infrastructure agility they were lacking within their data centers and empowered them to focus more on new player features and less on legacy infrastructure. In this session, you learn why and how Riot Games migrated their existing platform to the AWS Cloud and the advantages gained by the move using services such as Amazon EC2, Elastic Load Balancing (with Application Load Balancers), Amazon EBS, and Auto Scaling. Riot Games also shares how they created new automation toolsets to enable the existing tools and replace a few legacy ones.
Learn how Gearbox integrated Amazon GameLift into "Spark", the cloud-ready infrastructure that powers all of their games. In this session, Gearbox talks about the mental shifts they had to make when moving their games online, as well as the unexpected challenges. Gearbox also dives deep into their experience of integrating the Amazon GameLift Server SDK, running multiple processes with different parameters, asynchronous build deployment, global fleet management, load balancing, and scaling to meet unexpected player demand. You'll learn Amazon GameLift best practices that can help simplify game session management, reduce engineering overhead, and optimize for player experience.
Given the complexity and scale of contemporary games, it has long been a dream of game creators to algorithmically generate game content. Nexon wanted to create a large-scale, open-world MMORPG called Durango, where algorithmic generation is desperately needed to minimize development costs and maintain game longevity. In-game objects such as trees and plants are placed based on complex rules, with the intention of mimicking a realistic ecosystem that evolves continuously. However, a large amount of computation and a careful orchestration of various computing resources are required due to the immense size of the in-game lands. Nexon achieved this goal by leveraging AWS services to take advantage of the massive parallelism supported by the infrastructure. In this talk, Nexon discusses the architecture they settled on for algorithmic generation of game content at large scale, and the AWS services involved, such as Amazon SQS and Amazon ECS with automatic scaling and Spot Instances.
Learn how Rovio replaced functionality in their analytics platform provided by a third-party vendor in seven weeks with Amazon Athena, Amazon EMR, and Amazon Redshift. In the ever-changing world of games, you must stay ahead of trends and understand your customer base's actions. We describe how we built a data lake using AWS services such as Amazon S3, Amazon Athena, and Amazon EMR. We give a general overview of our analytics stack, the business problems unique to the gaming industry we're trying to solve, our data philosophy (one truth, open data), and the challenges we faced at the beginning of this year. We also dive deep into areas most gaming companies struggle with, like how we manage schemas, convert data to an efficient columnar format, partition the data, and ultimately use it, and how we handled some of the challenges we faced using services like Athena. Learn about our quest to perfect games and the customer gaming experience!
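As a rough illustration of the partitioning approach described above, a Hive-style S3 key layout lets Athena prune partitions instead of scanning the whole data lake. This is a minimal sketch; the bucket, prefix, and event names are hypothetical:

```python
from datetime import datetime, timezone

def partition_key(prefix: str, event_name: str, ts: datetime) -> str:
    """Build a Hive-style S3 key prefix (event=/year=/month=/day=) so
    Athena queries that filter on these columns scan only the matching
    partitions."""
    return (
        f"{prefix}/event={event_name}/"
        f"year={ts.year:04d}/month={ts.month:02d}/day={ts.day:02d}/"
    )

key = partition_key("s3://game-events/parquet", "session_start",
                    datetime(2017, 11, 27, tzinfo=timezone.utc))
print(key)
```

After writing data under such prefixes (typically as Parquet), an `ALTER TABLE ... ADD PARTITION` or crawler run makes the partitions queryable.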
This session covers how the team at Ubisoft evolved For Honor's infrastructure using Amazon ECS and supporting systems (Amazon CloudFront, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon SQS, and AWS Lambda, with monitoring through DataDog) from a proof of concept to an infrastructure as code solution. The team shares war stories about supporting both internal and live environments, and the challenges of bridging cloud and on-premises systems.
Linden Lab has spent over a decade optimizing the production operations for Second Life, an online 3D virtual world created by its users. With our new social VR platform, Sansar, we wanted to take our vision of virtual experiences to a whole new level of innovation, and AWS played a vital role. We'll dive into Sansar's AWS tech stack, an infrastructure built not only for technical robustness but also for extreme scalability. We discuss the different use cases for EC2 and containers, and how Lambda worked as a positive enabler for customization. Lastly, we'll cover IAM, security groups, and VPC, which we refer to collectively as the "Great Wall of Preventing Unfortunate Design Decisions."
The pace of technology innovation is relentless, especially at AWS. Designing and building new system architectures is a balancing act between using established, production-ready technologies and maintaining the ability to evolve and take advantage of new features and innovations as they become available. In this session, learn how Amazon Game Studios built a flexible analytics pipeline on AWS for their team battle sport game, Breakaway, that provided value on day one but was built with the future in mind. We discuss the challenges we faced and the solution we built for ingesting, storing, and analyzing gameplay telemetry, and dive deep into the technical architecture using many AWS services, including Amazon Kinesis, Amazon S3, and Amazon Redshift. This session focuses on game analytics as a specific use case, but emphasizes designing for architectural flexibility that is relevant to any system.
Global Partner Summit - Business
In this session, we provide an overview of the artificial intelligence/machine learning landscape, discuss the current state of the industry, and identify new market opportunities. Partners will come away with a better understanding of the investment that AWS is making in this space, as well as our unique value proposition.
This session is especially tailored for technology and consulting partners looking to learn more about big data and analytics on AWS. As individuals and commerce move online, companies have unprecedented access to data to improve customer experience and take advantage of new market opportunities. However, organizations often struggle with turning data into actionable insights to drive their business. Learn how AWS and big data APN Partners are helping companies enable a broad range of analytic capabilities to deliver better business results and better serve their customers. We discuss key big data and analytics use cases, and programs that enable partners to get to market with these solutions.
Join this session to learn more about 2018 AWS Partner Network Program launches and how to prepare for the upcoming changes.
Join us in this session to learn more about the evolving landscape for AWS Partners capable of providing a full lifecycle experience for their customers, from plan and design to build and migrate to run, operate, and optimize. We share in-depth information about the investment, revenue, and margin opportunities for these next-gen MSPs. We also dive into AWS services and third-party tooling to help partners along this journey. Partners leave this session with a clear view of new ways to optimize their AWS business, expand their customer offerings, and improve their profitability.
In this session, learn about Amazon Connect and how your organization can benefit from its capabilities, extensibility, and scalability. We explain how it's designed to use AWS natural language understanding to provide enterprise and consumer interactions that replicate the experiences consumers have at home with their Echo products. Learn how to combine Amazon Connect with leading CRM, analytics, and workforce optimization/quality management platforms to provide a complete system of engagement and system of record for enterprises across all verticals and size ranges.
In this session, we walk through an overview of AWS database services. We discuss why customers choose to adopt AWS database services and how APN Partners can help customers by building a database practice using AWS services such as Amazon Aurora, Amazon Redshift, and Amazon DynamoDB. We share best practices for APN Partners to start building a successful database practice on AWS. We also talk about how APN Partners can use various resources offered by APN to accelerate their practice-building process.
The availability of the latest Windows upgrades is intersecting with the "Cloud as the New Normal" market trend, forcing customers to face a crossroads in how best to deliver end-user services as the digital workspace evolves. In this session, learn about the solutions for desktop and application streaming from AWS and our leading technology partners that can empower customers to focus on end-user service innovation, while improving ongoing security and operations. We highlight best practices of customers and APN Consulting Partners who turn their latest Windows upgrade cycle into an opportunity for workspace transformation.
It's not always straightforward for our customers to go through a DevOps transformation while they migrate their workloads to AWS. DevOps partners can help customers transform their workloads during their migration journey. This creates a unique opportunity for our DevOps partners to expand their business beyond DevOps into cloud migration. We discuss various opportunities DevOps consulting and technology partners can leverage to help customers and grow their business. We also share compelling business reasons for customers to consider DevOps transformation while migrating specific workloads to AWS, and explain unique situations where it is not beneficial to do both at the same time.
With recent reports that banks face a regulatory change every 12 minutes, it's no wonder firms increasingly look to automate compliance and reduce operational risk. By leveraging the latest technology advances—including cognitive computing, enhanced analytics, digital identities, big data, and the cloud—they hope to reduce their compliance burdens and free human and financial capital for more productive uses. Today's cutting-edge approaches offer advantages for agility, speed, and ease of integration. In this session, we dive deeper into the cloud-based RegTech solutions that are available on AWS.
Across the Healthcare & Life Sciences sector, we're working to support better consumer and business decisions with innovative technologies. Many factors lead to improved patient outcomes. Ease of access to health information via voice, improved connectivity between disparate health data repositories, infusion of real-world data into R&D for new medical products—these are just a few examples where customers and partners innovate with AWS. Come learn from our partners as they share both their business insights and the design patterns that make these solutions work.
The Internet of Things (IoT) keeps evolving, and there's a critical need for high-speed data processing, analytics, and reduced latency at the edge. Meeting the needs of these systems that leverage a distributed architecture to bring compute resources to the edge and the cloud is essential. A cloud-only model might not be applicable for time-sensitive operations or where network connectivity is poor. Also, connecting every device to the cloud and sending raw data over the internet can have privacy, security, and legal implications, especially for sensitive data. Learn how AWS extends AWS Greengrass to devices, so they can act locally on data and use the cloud for management, analytics, and durable storage.
The AWS migration tooling segment team has created migration tool packages that serve three key business objectives. First, technology choice: competent tools from our ecosystem and the AWS migration platform, with a tool recommender that helps customers identify the right tools to achieve their business objectives. Next, speed of procurement: a single click to procure the tools right from AWS Marketplace. Finally, cost of migration: highly discounted tools that reduce the cost of migration by 25-30%. In this session, we explain how our customers and SI partners can leverage these packages to enable the frictionless migration of thousands of workloads into AWS.
Interested in developing your AWS Partner Network (APN) partner business with government, education, and nonprofit customers? Join us to learn more about partner opportunities in the public sector. Hear best practices and success stories, and learn about available APN tools, training, and benefits.
Interested in understanding best practices for cloud procurement in the public sector? We cover how AWS partners can guide and educate public sector organizations to effectively access the full benefits of the cloud. Topics include best practices for pricing, governance, security, terms and conditions, and buying frameworks.
Partners increasingly look to a Software as a Service (SaaS) delivery model for products to respond to customer demand, improve operational efficiency, increase agility, and expand market and global reach. AWS provides a low-cost, reliable, and secure foundation to use as you build and deliver SaaS solutions to your customers. The AWS Partner Network (APN) helps you build a successful AWS-based business by providing valuable business, technical, marketing, and go-to-market (GTM) support. In this session, we discuss what a typical journey to SaaS on AWS looks like, and all of the AWS and APN resources and benefits available to you in every stage.
Security is about visibility and control. It starts with getting visibility (collecting as much data as possible about your environment), then deciding what is worth alarming versus what is a distraction. A classic case of finding needles in the haystack. AWS Partners can leverage highly scalable, machine learning (ML) services to process large amounts of log, event, flow, and other data to build AWS–specific security solutions that scale. Pass the undifferentiated heavy lifting to AWS so you can focus on your core value proposition! This session helps AWS Partners understand what services are available and applicable for building security solutions, and provides use cases to help accelerate adoption.
GPSBUS217: GPS: Evolve your Storage Business Models in AWS
Transitioning on-premises solutions to the cloud and creating SaaS solutions is no small undertaking. Providing unique, differentiated value, adjusting business models, and adroitly messaging your cloud-enabled solutions can set you up for long-term success. In this session, we discuss the opportunity ahead for storage partners with AWS: how to transform solutions to leverage the power of the AWS Cloud, whether your operational paradigms will shift as your customer deployment scenarios do, and how you can help and advise customers as they navigate their own cloud transitions. On-premises and born-in-the-cloud partners discuss how they have transformed their businesses on AWS.
Data shows that APN Partners who invest in AWS Training drive incremental revenue and accelerate customer adoption of AWS. In this session, learn how investments in AWS Training can help you drive more business, while solving customer challenges quicker. Learn about improvements we're making to AWS Partner Training, including new learning paths for technical and business roles. In addition, hear from APN Partners on how AWS Training benefits them and their customers. We provide training resources and best practices that you can take back to your partner organization.
AWS Certification builds personal and partner credibility and provides value to customer engagements. Partners earn multiple AWS certifications to differentiate themselves, which drives substantially more revenue. In this session, learn about the types of certifications offered, the path to getting certified, and the benefits you can earn after passing an exam. Hear best practices on aligning certifications with your goals. Get tips for achieving AWS Certification and get your questions answered.
Developers and architects migrating Microsoft enterprise applications to AWS can leverage new tools and services to implement DevOps best practices identified and developed by AWS solution architects and service teams. Learn about architectural best practices and AWS services such as AWS CodeBuild and AWS CodeDeploy, focusing on the .NET environment. Get examples of using the latest SQL Server release on Amazon EC2 or Amazon RDS, or on other database offerings native to AWS, like Amazon Aurora or serverless environments. Hear how an APN Partner took a global retail customer's ecommerce engine and SQL Server–based data platform from on premises to the AWS Cloud in just weeks.
Migrating mission-critical SAP workloads to AWS allows enterprises to realize business benefits quickly and securely without a significant upfront investment. Today, customers are turning capital expense into operating expense at a record pace and are accelerating business processes and efficiency for less than the cost of a week at a beach resort. Learn how other SAP customers are removing risk by testing their SAP migrations and upgrades at low cost to jumpstart their SAP projects.
Learn how SAP customers are running mission-critical workloads on AWS. An amazing number of enterprise customers are moving their entire SAP landscapes, including production environments, to AWS to increase business agility and reduce costs. British Petroleum, Kellogg's, and Lionsgate are examples of enterprise customers running their core businesses on AWS today. Learn how we guide Fortune 50 companies as they rapidly adopt emerging technologies and accelerate greater innovation with the AWS Cloud.
Join this session to learn why you should join the AWS Partner Network (APN). Hear best practices to take advantage of all that the APN has to offer.
As customers move to the cloud and become more agile, their expectations around the speed and efficiency of software procurement and fulfillment are increasing as a result. Consulting partners that deliver advisory, professional, and managed services to customers need to be able to purchase and deploy the required software solutions in days, not weeks. In this session, we explore how the software channel is evolving, including the economic and business forces that are creating change. This session benefits business leaders at Consulting Partners and ISVs, both partners new to AWS and existing partners, who will learn about AWS Marketplace's unique approach to enabling this evolution.
AWS Marketplace helps customers migrate their workloads to AWS through a variety of ISV solutions, aiding and accelerating the journey to the cloud. AWS Marketplace provides the tools and software for each step of the migration process, and post-migration to sustain a cloud operating model. In this session, we explore the various stages of migration into AWS, common challenges that exist in each stage, and how AWS Marketplace helps our customers address some of these challenges.
Companies around the world are looking at using artificial intelligence and machine learning to launch new innovative products and services and to drive efficiencies via automation in their businesses. Come to this session to understand why you should consider building an AI/ML practice in your consulting company. Learn the importance of having strong data engineering skills, including data annotation, and get some tips on building a data science team that can deliver customer projects.
How does a practice become a "best" practice? How does a pattern become an "anti" pattern? As always, experience is the best teacher. As Partner Solution Architects, we receive a lot of partner feedback on how practices and design patterns work—and occasionally fail to work—in the real world. We use this feedback to inform our recommendations and reference architectures. In this session, we explore a representative set of real-life "failures." We look at what these failures have to teach us about design and how to prioritize remediation of known issues.
For many, Blockchain has been a black box with little standardization on how to use the technology. The need to understand a multitude of protocols, consortiums, and services, along with their strengths and weaknesses, makes it difficult to select the best option for individual use cases. The lack of technical maturity leads to an uneasiness within the community that can negatively affect adoption. With our new partners, including Intel, this panel discusses the technology drivers that are pushing standards forward and accelerating the adoption of Blockchain in the AWS enterprise space. Come join us for a closer look at what Blockchain is doing for several industries and their use cases.
You've gotten up and running with containers, and now you're trying to understand how to take them to production. In this session, we deep-dive into tools that help you build and manage development and production clusters. We explore CI/CD pipelines that test your code and scan for vulnerabilities, use Docker multistage builds to efficiently use resources, monitor your network, debug issues in development, and monitor your applications as they go to production.
Financial services companies are using machine learning to reduce fraud, streamline processes, and improve their bottom line. AWS provides tools that help them easily use AI frameworks like MXNet and TensorFlow to perform predictive analytics, clustering, and more advanced data analyses. In this session, hear how IHS Markit has used machine learning on AWS to help global banking institutions manage their commodities portfolios. Learn how Amazon Machine Learning can take the hassle out of AI.
Healthcare and life sciences companies often have to adhere to specific regulatory requirements, such as GxP or HIPAA. The ability to treat your application environment as code on AWS lets you iterate faster while adhering to the appropriate regulatory frameworks. In this session, we discuss how DevOps principles can help you achieve your compliance requirements by validating your infrastructure in the same way that you do software. In particular, we discuss common compliance principles, demonstrate how to translate from policies to technical controls, and highlight how our partners are building for GxP and HIPAA.
Come see first-hand how Amazon EC2 Systems Manager can help you manage your servers at scale with the agility and security you need in today's dynamic cloud-enabled world. To be truly agile, you need a way to define and track system configurations, prevent drift, and maintain software compliance. At the same time, you need to collect software inventory, apply OS patches, automate your system image maintenance, and configure anything in the OSs of your EC2 instances and on-premises servers. Amazon EC2 Systems Manager does all of that and more for both Linux and Windows systems. In this session, learn about the seven services that make up Amazon EC2 Systems Manager and see them in action. Whether you are managing 10 or 10,000 instances, see how you can manage your systems and increase your agility and security with EC2 Systems Manager.
SaaS architects must always have their finger on the pulse of tenant consumption. Understanding the patterns for tenant consumption provides both business and technical teams the data they need to make sound decisions about product packaging, metering, and tiering. Of course, building a robust model for analyzing and attributing tenant consumption can be tricky. In this session, we look at specific strategies for capturing, aggregating, and associating consumption with tenants in a multitenant, shared resource model. We touch on common patterns and strategies that are used to instrument and publish metrics spanning compute, storage, and so on. We also look at tools and models that can be used to correlate consumption with AWS spend.
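The aggregation step described above can be sketched in a few lines: roll raw, per-event consumption records up to per-tenant totals before correlating them with cost. The metric names and amounts here are hypothetical:

```python
from collections import defaultdict

# Hypothetical raw metric events: (tenant_id, metric, amount).
events = [
    ("acme", "lambda_ms", 1200), ("acme", "s3_bytes", 5_000_000),
    ("globex", "lambda_ms", 300), ("acme", "lambda_ms", 800),
]

def aggregate_by_tenant(events):
    """Roll raw consumption events up to per-tenant totals -- the first
    step toward attributing shared-resource usage (and spend) to tenants."""
    totals = defaultdict(lambda: defaultdict(int))
    for tenant, metric, amount in events:
        totals[tenant][metric] += amount
    return {t: dict(m) for t, m in totals.items()}

print(aggregate_by_tenant(events))
```

In practice the events would be published per request (for example, to Amazon Kinesis or CloudWatch) and aggregated on a schedule rather than in memory.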
Supporting a multitenant environment requires a robust management and monitoring strategy. SaaS operations teams require tools and views of system health that enable them to analyze and diagnose both multitenant and tenant-centric issues. The goal of this session is to identify specific strategies and tools that can be combined to support the unique set of operational challenges that SaaS providers face. In this session, we look at how analytics, consumption, and application metrics can correlate tenant activity with system health to proactively identify and troubleshoot issues. We also explore techniques for monitoring and managing different SaaS tenant isolation models, such as silo, pool, and so on.
AWS Identity and Access Management (IAM) is the foundation that all AWS services require to function and perform any action. Mastering IAM is the skill set you need in your arsenal so that you can provide best-in-breed services through your application or services to your customers. This session shows you best practices for IAM, the latest service additions, and advanced automation techniques to become a certified IAM ninja.
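A core IAM best practice the session covers is least privilege: scope every statement's Action and Resource as narrowly as possible. A minimal sketch of building such a policy document, with a hypothetical bucket and prefix:

```python
import json

def least_privilege_policy(bucket: str, prefix: str) -> dict:
    """Build an IAM policy document granting read-only access to a
    single S3 prefix. Narrow Action and Resource lists are the core
    of least-privilege design."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
        }],
    }

print(json.dumps(least_privilege_policy("reports", "finance"), indent=2))
```

The resulting JSON can be attached to a role or user; broadening it later is easier than clawing back an over-permissive `"Action": "*"` grant.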
Data exfiltration—also called data extrusion, data exportation, or data theft—is the unauthorized transfer of data. It is a very serious challenge to businesses because attackers go after business-critical or highly confidential data. Data exfiltration can be done manually by a person or automated using scripts. Attack sophistication increases by the day. Signature-based techniques to defend against attacks are limited and cannot protect against zero-day attacks. To counter this, we use machine learning (ML) techniques. ML is effective at solving many problems in computer vision, robotics, and other fields, and is increasingly used in security. Learn about an ML technique called anomaly detection, and other state-of-the-art techniques to identify data exfiltration attempts.
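The simplest form of the anomaly detection mentioned above is a statistical baseline: flag observations that deviate sharply from the norm. This toy sketch flags days with unusual outbound transfer volume using z-scores; it is a minimal baseline for illustration, not a production exfiltration detector, and the traffic numbers are invented:

```python
import statistics

def egress_anomalies(daily_bytes, threshold=2.0):
    """Return indices of days whose outbound transfer volume deviates
    from the mean by more than `threshold` population standard
    deviations -- a minimal z-score anomaly-detection baseline."""
    mean = statistics.mean(daily_bytes)
    stdev = statistics.pstdev(daily_bytes)
    return [i for i, b in enumerate(daily_bytes)
            if stdev and abs(b - mean) / stdev > threshold]

traffic = [10, 12, 11, 9, 10, 11, 500]  # MB per day; last day spikes
print(egress_anomalies(traffic))
```

Real systems replace the z-score with models that handle seasonality and multiple features (for example, Random Cut Forests), but the principle—learn normal, alarm on deviation—is the same.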
AWS provides a suite of services and tools to deploy business-critical SAP HANA workloads on the AWS Cloud. In this session, we discuss how you can use AWS services, native SAP HANA high availability (HA) tools, and third-party software to achieve HA for SAP HANA systems on the AWS Cloud. We review multiple options that use different AWS features, Availability Zones, and global regions, and discuss the pros, cons, and related costs of each option.
Real-time data processing is a powerful technique that allows businesses to make agile automated decisions. This process is particularly powerful when applied to workloads like security, analyzing access logs, parsing audit logs, and monitoring API activity to detect behavior anomalies. Combined with automation, business can quickly take action to remediate security concerns, or even train a machine learning (ML) model. We explore different techniques for analyzing real-time streams on AWS using Lambda, Amazon Kinesis, Spark with Amazon EMR, and Amazon DynamoDB. We also cover best practices around short- and long-term storage and analysis of data and, briefly, the possibility of leveraging ML.
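The Kinesis-plus-Lambda pattern described above can be sketched as a handler that decodes each stream record and looks for suspicious activity. The event shape below follows what Lambda receives from a Kinesis event source; the `action` field and the notion of "suspicious" are illustrative assumptions:

```python
import base64
import json

def handler(event, context=None):
    """Minimal sketch of a Lambda consumer for a Kinesis stream:
    base64-decode each record's payload and count suspicious API
    calls (field name and rule are hypothetical)."""
    suspicious = 0
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("action") == "DeleteTrail":
            suspicious += 1
    return {"suspicious_calls": suspicious}

# Synthetic event in the shape Lambda receives from Kinesis:
def _rec(doc):
    return {"kinesis": {"data": base64.b64encode(
        json.dumps(doc).encode()).decode()}}

event = {"Records": [_rec({"action": "DeleteTrail"}),
                     _rec({"action": "DescribeInstances"})]}
print(handler(event))
```

A real pipeline would emit the result to an alerting topic or a remediation Lambda rather than just returning a count.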
Is your customer worried about scaling their monolithic application for an upcoming major event and has a tight timeline? Maybe it's time you recommend moving their application to a microservices architecture. In this session, we explore how to convert a monolithic application to a microservices model by using AWS serverless services such as AWS Lambda, Amazon API Gateway, and Amazon DynamoDB. We step through the common architectural changes in moving to a microservices structure, and we discuss how to manage your application at scale. We also demonstrate a web application built using AWS serverless services.
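One extracted microservice in the model described above is typically a Lambda function behind API Gateway's proxy integration, routing on method and path. This sketch uses that event shape; the `/orders` routes and payloads are hypothetical, and the DynamoDB read is stubbed out:

```python
import json

def handler(event, context=None):
    """Sketch of one microservice carved out of a monolith, exposed
    via API Gateway's Lambda proxy integration (routes and payloads
    are illustrative)."""
    route = (event.get("httpMethod"), event.get("path"))
    if route == ("GET", "/orders"):
        body = {"orders": []}          # a real service would query DynamoDB
    elif route == ("POST", "/orders"):
        body = {"created": True}
    else:
        return {"statusCode": 404,
                "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(body)}

print(handler({"httpMethod": "GET", "path": "/orders"}))
```

Because each route can become its own function, the monolith can be decomposed incrementally, one endpoint at a time.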
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. A proper data model is an integral part of an optimal Amazon Redshift deployment. It allows you to scale efficiently in a cost-effective manner to meet your increasing demand. In this session, I offer tips to help you optimize your Amazon Redshift deployment.
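A common starting point for the data-model tips above is choosing a distribution key on a high-cardinality join column and sort keys on frequently filtered columns. As a hedged sketch (the table and columns are invented), here is a small helper that emits such DDL:

```python
def fact_table_ddl(name, columns, distkey, sortkeys):
    """Emit CREATE TABLE DDL with DISTKEY/SORTKEY clauses. A common
    Redshift modeling starting point: DISTKEY on the main join column
    to co-locate joined rows, SORTKEY on filter columns to skip blocks."""
    cols = ",\n  ".join(f"{c} {t}" for c, t in columns)
    return (f"CREATE TABLE {name} (\n  {cols}\n)\n"
            f"DISTKEY ({distkey})\n"
            f"SORTKEY ({', '.join(sortkeys)});")

ddl = fact_table_ddl("sales",
                     [("order_id", "BIGINT"),
                      ("customer_id", "BIGINT"),
                      ("sold_at", "TIMESTAMP")],
                     distkey="customer_id", sortkeys=["sold_at"])
print(ddl)
```

The right keys depend on your query patterns, so validate choices against `EXPLAIN` plans rather than applying any rule blindly.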
Desktop and application streaming solutions provide the "last mile" between users and their applications and data. In this session, we review the elements and benefits of the layered desktop image, application dependencies, and lifecycle management of desktops. Liquidware Labs, an APN Partner, showcases their toolset that helps customers deploy on AWS with ease. This deep dive on desktop image management for Amazon WorkSpaces focuses on enterprise organizations migrating to the cloud and benefits engineers and system administrators who are new to this space, as well as experienced architects with existing deployments on AWS.
AWS Greengrass provides a wide range of opportunities from IoT gateway applications to building systems like those with microservice architectures. In this session, we first evaluate how AWS Greengrass fits into OEM, ODM, and IT service delivery models. We then wade into a gentle overview of AWS Greengrass and how it interoperates with AWS IoT and other AWS services. We walk through several key AWS Greengrass distributed architectures. Next, to help you accelerate your solution using AWS Greengrass, we discuss how AWS Greengrass fits into the AWS Cloud development and delivery model. The talk wraps up with a demonstration of AWS Greengrass facilitating communication between a closed machine to machine (M2M) network and AWS IoT.
The threat model for IoT devices is very different from the threat model for cloud applications. Customers must understand what these threats are, prioritize them effectively, and navigate the growing ecosystem of partners that give customers tools to build secure IoT solutions. We showcase how to leverage partner solutions to mitigate threats, explain how to avoid common pitfalls, and make it clear that all IoT solutions must incorporate end-to-end security from the start. We begin with the steps to take in the manufacturing process, continue with how to provision and authenticate devices in the field, and cover solutions that can help customers comply with IT requirements in the maintenance phase of the product lifecycle.
This session explains how to build reusable, maintainable AWS CloudFormation–based automation for AWS Cloud deployments. We have built over 50 Quick Start reference deployments with partners and customers, and we share this expertise with you. We explore the anatomy of a typical AWS CloudFormation template, dive deep into best practices for building Quick Start automation across Linux and Windows, and explore useful design patterns. This expert-level session is for partners interested in building Quick Starts or other AWS CloudFormation–based automation. It requires familiarity with Git, shell scripting, Windows PowerShell, and AWS services like Amazon EC2, Amazon S3, and AWS CloudFormation.
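The template anatomy mentioned above boils down to a few top-level sections. As a rough sketch, here is a minimal template built as a Python dict (the parameter and bucket resource are illustrative, not from any particular Quick Start):

```python
import json

def minimal_template():
    """Anatomy of a CloudFormation template: Parameters accept input,
    Resources declare what gets created, Outputs export values for
    other stacks or operators (the S3 bucket is a placeholder)."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {
            "Env": {"Type": "String", "AllowedValues": ["dev", "prod"]},
        },
        "Resources": {
            "ArtifactBucket": {"Type": "AWS::S3::Bucket"},
        },
        "Outputs": {
            "BucketName": {"Value": {"Ref": "ArtifactBucket"}},
        },
    }

print(json.dumps(minimal_template(), indent=2))
```

Generating templates programmatically like this (or with a library) makes it easier to keep large Quick Start templates consistent and reviewable.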
You want to go to the cloud, but you are blocked by legacy technical debt. In this session, we guide you into the cloud using trusted application platforms like OpenShift and CloudFoundry. Come learn how to unblock your migration and unwind an otherwise complicated transformation.
Do you know that customers can seamlessly migrate on-premises applications to VMware Cloud on AWS? Come learn the compute, network, and storage architecture of the VMware Cloud on AWS solution. In this session, we use practical, real-world customer use cases to dive deep on hybrid cloud network connectivity, data protection, and security best practices. Additionally, we highlight how to use native AWS services with VMware Software-Defined Data Center (SDDC) workloads. Expect to walk away with practical guidance and tips on helping customers with their VMware and AWS hybrid cloud journey.
In this session, we walk through the fundamentals of Amazon VPC. First, we cover build-out and design fundamentals for VPC, including picking your IP space, subnetting, routing, security, NAT, and much more. We then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This midlevel architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks that AWS makes available with VPC. Learn how you can connect your VPC with your offices and current data center footprint. This session adds a focus on AWS Partners and where they are relevant in AWS networking.
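The subnetting step in the VPC design above is mechanical enough to script: carve the VPC's IP space into equal blocks, for example one subnet per Availability Zone. A minimal sketch using the standard library (the CIDR values are examples):

```python
import ipaddress

def carve_subnets(vpc_cidr: str, new_prefix: int, count: int):
    """Split a VPC CIDR into equal subnets of the given prefix length,
    e.g. one per Availability Zone."""
    net = ipaddress.ip_network(vpc_cidr)
    return [str(s) for s in list(net.subnets(new_prefix=new_prefix))[:count]]

# A /16 VPC carved into three /20 subnets (4,096 addresses each):
print(carve_subnets("10.0.0.0/16", new_prefix=20, count=3))
# → ['10.0.0.0/20', '10.0.16.0/20', '10.0.32.0/20']
```

Planning the split up front matters because a VPC's primary CIDR and existing subnets are awkward to resize later, and non-overlapping ranges are required for VPN or Direct Connect links back to the data center.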
Identity is a foundational element of SaaS design, and getting it right can be challenging. You need a strategy that allows you to connect users to tenants, roles, and policies in a seamless model that doesn't handcuff developers. Fortunately, identity providers and OpenID Connect give us a model that equips SaaS providers with the tools they need to address all the moving parts of SaaS identity. In this session, we dive into the details of how you can use these solutions to build a robust identity solution—a solution that covers binding identities to tenants, supports tenant and system roles, and isolates tenant access. The goal here is to provide a concrete example of how to orchestrate all of these elements of the SaaS identity model on AWS.
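The binding of identities to tenants described above usually comes down to reading tenant claims out of the OpenID Connect ID token after it has been verified. A minimal sketch; the `custom:` claim names are hypothetical (real identity providers let you configure custom claims):

```python
def tenant_context(claims: dict) -> dict:
    """Derive a tenant-scoped security context from verified OpenID
    Connect ID token claims. Every downstream authorization decision
    can then be made against this context instead of raw claims."""
    tenant = claims.get("custom:tenant_id")
    if tenant is None:
        raise PermissionError("token is not bound to a tenant")
    return {
        "tenant_id": tenant,
        "role": claims.get("custom:tenant_role", "member"),
        "user": claims["sub"],
    }

claims = {"sub": "user-123", "custom:tenant_id": "acme",
          "custom:tenant_role": "admin"}
print(tenant_context(claims))
```

Signature verification of the token must happen before this step; rejecting tokens with no tenant binding is what keeps one tenant's users from ever touching another tenant's data.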
Unprecedented computational power for massively parallel applications creates unprecedented storage requirements. Learn about coherent storage clusters processing millions of IOPS at submillisecond latency, how to architect storage for HPC in the cloud, and how to do it all without breaking the bank. This session incorporates live demonstrations, including APN Competency Partner solutions.
Dive deep into storage solutions for enterprise applications, debunk performance and availability perceptions, and learn about anti-patterns. Focusing on consulting and technology partner use cases, this session incorporates live demonstrations, including APN Competency Partner solutions. Attendees learn about reference architectures for enterprise storage solutions and how to incorporate components into new solutions for enterprise workloads.
Advances in artificial intelligence, machine learning, and deep learning, along with the rapid deployment of Internet of Things (IoT) devices, are changing how physical products are designed and built. In this session, learn how AWS partners Siemens and Autodesk use AWS to enhance the design process and how they're incorporating AWS services into their products and smart factories. We explore how these trends impact the future of design and manufacturing.
Healthcare data is often either large and complex enough to be complete, or simple enough to be usable: choose one. In this session, you learn how Cerner tackled this problem for healthcare and other complex data sets on AWS. Cerner started with patterns to quickly evolve an AWS-based infrastructure to meet new demands, moved into data engineering techniques on Apache Spark to make otherwise unmanageable data sets simple and usable, and connected that work to popular front-end tools for analysts and data scientists, such as Jupyter. You also learn how Cerner is helping healthcare informaticians focus more on their analysis and less on undifferentiated heavy data lifting.
Healthcare organizations are rapidly adopting container technology to drive innovation. In this session, join Horizon Blue Cross Blue Shield of New Jersey and ClearDATA to learn about how to integrate Amazon ECS into your deployment pipeline while maintaining compliance for healthcare workloads, how to harden container environments for sensitive workloads, and how to leverage AWS tooling and microservices to provide new views and analysis for data stored in on-premises data centers.
Learn how to effectively use AWS automation for healthcare compliance. In this session, Verge Health (a SaaS provider for practitioner management, organizational compliance, and patient and employee safety) discusses how they moved their risk management platform, supporting over 13,000 hospitals, to AWS. Because Verge Health, as a partner to healthcare organizations, is subject to the Health Insurance Portability and Accountability Act (HIPAA), it worked with AWS Partner Network (APN) Advanced Partner Cloudticity to significantly increase their security, availability, performance, HIPAA compliance, and agility, while simultaneously reducing cost through fully automated DevSecOps on AWS. The session focuses on resilient architecture for HIPAA compliance, automated migration techniques for data and VPN connections, and the automation of daily tasks, such as deployments and patching.
This session provides an overview of how Change Healthcare invested in people, process, and an automation platform to adopt a cloud-first strategy. Starting by building a Cloud Center of Excellence team, they identified the compliance, security, and cost optimization requirements and processes needed to build a framework. They also embedded healthcare compliance, security, architecture best practices, and customer-specific rules and standards for a managed adoption of the cloud. Change Healthcare is leveraging their Cloud 2.0 framework to rapidly deploy their mission-critical applications into AWS. Come learn how Change Healthcare built a serverless architecture using Amazon ECS, AWS Lambda, AWS CodeDeploy, AWS CodeCommit, AWS CloudFormation, AWS Service Catalog, AWS OpsWorks, AWS Elastic Beanstalk, and other managed services.
In this session, hear how Cambia Health Solutions, a not-for-profit total health solutions company, created a self-service data model to convert a large-scale, on-premises batch processing model to a cloud-based, real-time pub-sub and RESTful API model. Learn how Cambia leveraged AWS services like Amazon Aurora, AWS Database Migration Service (AWS DMS), AWS Lambda, and AWS messaging services to create an architecture that provides a reasonable runway for legacy customers to convert from old mode to new mode and, at the same time, offer a fast track for onboarding new customers.
HLC309: The American Heart Association and How to Build a Secure and Collaborative Precision Medicine Platform on AWS
The American Heart Association (AHA) Precision Medicine Platform was built on AWS to enable the research community to accelerate the development of solutions for cardiovascular diseases and stroke. In this session, AWS Partner Network, REAN Cloud, and the AHA discuss how they architected the AHA Precision Medicine Platform to facilitate healthcare research collaboration and data discovery for hundreds of users around the world. They discuss how to catalog, discover, and analyze precision medicine cohorts at scale using Amazon S3, Amazon EMR, Amazon Elasticsearch Service, and Amazon AppStream 2.0. Additionally, they explain how the platform adheres to the strict security and logging requirements necessary for compliance with HIPAA and FedRAMP by using AWS Identity and Access Management, AWS CloudTrail, AWS Config, and Amazon Kinesis.
The innovation team at Methodist Le Bonheur Healthcare (MLH), an integrated health care delivery system, saw AWS as an enabler to faster ideation on breakthrough patient care products over their existing internal private cloud options. In this session, you learn how they eliminated HIPAA compliance as a barrier to their speed-to-market goals by standardizing internal DevOps and DevSecOps duties across applications, as well as taking advantage of the containerization of enterprise technology. MLH partnered with Datica, an APN Healthcare Competency Partner, to address vulnerability scanning, intrusion detection, disaster recovery, backups, encryption, audit logging, and deployment orchestration. You hear how, with this partnership, they ensure that the configuration and orchestration of all AWS HIPAA Eligible Services meet the controls set by healthcare's most stringent accreditation body, HITRUST, with every workload deployment. You also learn how MLH's adoption of a standard compliance layer led to quickly achieving stronger data integration with electronic health records.
This session can help you better understand how to leverage different AWS services to build an IoT application. Learn the value of each AWS service in the Internet of Things (IoT) category as we go through different use cases that demonstrate how the services are better together. NASA/JPL illustrates these concepts by discussing the inner workings of a demonstration they've built. They also talk about how they use IoT to overcome their technical challenges.
IOT202: Transforming Industry Verticals with AWS IoT
How do customers use IoT to transform their businesses and unlock innovation? In this session, we have several AWS customers discuss how AWS IoT has helped them transform their business. They share lessons learned about strategic opportunities created by AWS IoT, and how they approached their IoT solutions to create compelling opportunities in their respective verticals.
In this session, we present the Philips HealthSuite Digital platform (HSDP), which was purposefully built to address the complex challenges of healthcare. HSDP features deep clinical databases, patient privacy, industry standards and protocols, and personal and population data visualizations. You will learn how AWS Professional Services and Philips partnered to build a new core IoT platform for device management, data storage, and data integration leveraging AWS technologies. HSDP is built using AWS IoT and a serverless, microservices-based architecture based on AWS Lambda and Amazon API Gateway.
Panasonic and the Colorado Department of Transportation (CDOT) launched a partnership to build a connected transportation program in which real-time data is shared across vehicles, infrastructure, and people to improve safety and mobility on the road: the V2X Deployment Program. To prove its capabilities initially, Panasonic built a fully functional minimum viable product (MVP) in less than six months, using services such as AWS IoT, AWS Lambda, Amazon RDS, Amazon ECS, Amazon EC2, Amazon S3, and more. See how customers synthesize data from edge devices (roadside units) on the highways and send it to AWS IoT and analytics services for processing and actionable insights. We discuss AWS IoT capabilities, design patterns, customer architecture, challenges faced, and lessons learned. Learn best practices and common issues to address to succeed with similar smart city projects.
Autonomous cars need to identify road signs in real time, and drones need to recognize objects with or without network connectivity. In this breakout session, you learn what machine learning (ML) inference at the edge is and why it matters. We show you how to use AWS Greengrass to locate cloud-trained machine learning models, deploy them to your Greengrass devices, enable access to an on-device GPU or FPGA, and apply the models to locally generated data without a connection to the cloud.
This session covers the most recent AWS IoT announcements at re:Invent. Learn about trends and use cases for the Internet of Things (IoT). Hear about how AWS customers are using AWS IoT to connect their devices to the cloud and solve business challenges with IoT.
This session provides an overview of IoT analytics challenges and customer use cases, spanning consumer IoT to industrial IoT. It then shows how AWS IoT Analytics helps customers solve these challenges across different IoT verticals.
In this presentation, we will take a deeper look at the newly announced Amazon FreeRTOS. Amazon FreeRTOS (a:FreeRTOS) is an operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. Amazon FreeRTOS is based on the FreeRTOS kernel, a popular open source operating system for microcontrollers, and extends it with software libraries that make it easy to securely connect your small, low-power devices to AWS cloud services like AWS IoT Core or to more powerful edge devices and gateways running AWS Greengrass.
In this session, we take a closer look at the newly announced IoT offering, AWS IoT 1-Click. You hear how cloud-ready simple devices can be integrated and deployed securely, right out of the box, without writing device-specific code. AWS IoT 1-Click enables you to group, identify, and track your IoT deployments, giving you full visibility into utilization and device health. Come learn how 1-Click can help manufacturers build, and developers integrate, ready-to-use simple devices into their cloud applications.
Introducing AWS IoT Analytics, the new managed service that lets customers structure, preprocess, store, analyze, and visualize connected device data. The service team will walk through the service's features and introduce two customer use cases from the private beta.
The AWS IoT message broker is a fully managed publish/subscribe broker service that enables the sending and receiving of messages between devices and applications with high speed and reliability. In this session, learn about the common AWS IoT messaging patterns and dive deep into understanding the scaling best practices while using these patterns in applications. In addition, Amazon Music talks about how they used AWS IoT to build event notifications of soccer games in their applications for our customers.
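The wildcard semantics behind these pub/sub messaging patterns can be sketched with a small MQTT topic-filter matcher, where `+` matches exactly one topic level and `#` matches the remainder of the topic; the topic names below are illustrative:

```python
def topic_matches(filter_str, topic):
    """Return True if an MQTT topic matches a subscription filter.

    '+' matches exactly one level; '#' (valid only as the final level)
    matches the rest of the topic, however many levels remain.
    """
    f_levels = filter_str.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True
        if i >= len(t_levels):
            return False
        if f != "+" and f != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)

# One subscription fans out across a whole device fleet.
print(topic_matches("devices/+/telemetry", "devices/d42/telemetry"))  # → True
print(topic_matches("devices/#", "devices/d42/telemetry/gps"))        # → True
print(topic_matches("devices/+/telemetry", "devices/d42/status"))     # → False
```

This is why a single rule or backend subscriber can receive messages from thousands of devices without per-device configuration.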
AWS Greengrass extends AWS onto your devices, so they can act locally on the data they generate while still taking advantage of the cloud. In this session, we discuss the features and development languages of AWS Greengrass that let you build powerful edge compute applications. You'll also hear directly from Greengrass customers in multiple industries.
In this session, AWS IoT customers talk about the nuances, successes, and challenges of running large-scale IoT deployments on AWS. Hear from customers who have been operating on AWS IoT. Learn from their war stories of development and their architectural recommendations on technical best practices on IoT.
This session provides a technical overview of a new-generation core IoT platform, designed and implemented by Enel in partnership with AWS IoT. The core IoT platform provides a single architecture and a common set of services that will be adopted by existing and future IoT applications across different business units at Enel. We analyze use cases with a live showcase of platform capabilities. We also demonstrate how the core platform enables Enel to build resilient and scalable business solutions by leveraging existing and leading-edge AWS services, such as the AWS IoT Device Gateway, AWS IoT Device Shadow, and AWS Greengrass.
In this session, we present AWS IoT and Amazon Machine Learning (Amazon ML) to demonstrate how you can use these services together to build smart applications. Customer SKF presents their use case around AWS IoT and Amazon ML in their wind turbines.
This session covers best practices for using the AWS IoT rules engine when implementing routing, validation, or augmenting of messages that are sent through AWS IoT. In addition to best practices, AWS partners and customers discuss how the AWS IoT rules engine creates unique use cases to process and inspect data in real time.
If you have a large fleet of IoT devices, join us. We introduce you to a new service called AWS IoT Device Management. It makes it easy for OEMs, enterprises, and integrators to securely manage connected devices throughout their lifecycle: from initial setup through software updates to retirement. We show you how customers enroll and authenticate their devices in bulk, organize their fleets, manage permissions, remotely manage and update device software, and monitor the performance of their products. Customers already using the service show how they have used AWS IoT Device Management to create IoT solutions spanning multiple industries and use cases.
IOT336: Accelerating Engie's IoT Application Roadmap with C3 IoT and AWS IoT
This session will explore how Engie is using AWS partners like C3 IoT alongside AWS IoT to accelerate innovation and solve interesting problems faster. Engie will share implementation details for their IoT and big data applications that were designed, developed, and deployed in six months. C3 IoT will then provide a deep dive demonstration of its technology and integration with AWS IoT. C3 IoT will show how it enables application developers and data scientists to rapidly build next-generation apps at scale.
Security is an imperative for any successful IoT deployment. AWS and Intel showcase their collaboration on IoT security at the edge, based on Intel® Zero-Touch Device Onboarding. In this session, you learn how to ensure a secure connection from the edge back to the AWS Cloud, accelerate provisioning and deployment time, and scale solutions remotely for customization and management across thousands of devices and endpoints. Session sponsored by Intel
Businesses are looking to transform their processes with IoT, but without business intelligence, customer context, and an accessible platform, it can be difficult to drive value from your IoT deployments. AWS IoT and Salesforce IoT enable you to securely connect a network of devices to your CRM and automate actions based on specific events, lowering operational overhead and increasing revenue. Contextualize device data with insight into who is interacting with your devices, how they're using your products, and more. Build business logic for any device state using clicks, not code. Session sponsored by Salesforce
In this session, Distinguished Engineer, James Gosling, discusses how AWS innovates in the Internet of Things. James shares stories and experiences in deploying IoT systems, and how AWS thinks of scalability in IoT. In addition, James shares his experiences in engineering Java embedded systems in IoT.
This is a 400-level session that discusses how customers can use Amazon FreeRTOS on microcontrollers with AWS Greengrass at the edge. It walks through connecting your devices running Amazon FreeRTOS to Greengrass and shows how these two services can work together to solve customer use cases. We also cover security and authorization across Amazon FreeRTOS and Greengrass.
DREAM Challenges pose fundamental questions about systems biology and translational medicine. Designed and run by a community of researchers from a variety of organizations, the challenges invite participants to propose solutions, fostering collaboration and building communities in the process. The Sage Bionetworks Synapse platform, which powers many research consortiums including the DREAM Challenges, is starting to put into practice model cloud initiatives that not only provide impactful discoveries in the areas of neuroscience, infectious disease, and cancer, but are also revolutionizing scientific research by enabling an interactive consortium science platform. In this session, you learn how to build a "consortium model" of research in order to connect research organizations with non-profit organizations, technology companies, biotechnology, and pharmaceutical companies. You also learn how to leverage machine learning, Amazon ECS, and R for consortium-based science initiatives.
Historically, there has been an information asymmetry in pharmaceutical R&D where the biopharmaceutical companies had the deepest understanding and knowledge about their products and how they helped and interacted with patients. Now, there's new, real-world data that exists from regulators, health plans, government authorities, and patients, which is helping pharma companies to understand how their therapies and their innovations drive value and impact in patient populations. There are imperatives to leverage that data, create new partnerships in their ecosystem, and get access to that data in an ethical way to derive insights to both fuel innovation and drive discovery. In this session, you learn best practices from Deloitte and Celgene about strategy, operating models, and execution frameworks when implementing a real-world, evidence data platform.
SAP is the predominant mission-critical business software platform for Life Sciences companies. SAP often handles multiple areas of the business including finance, HR, training, manufacturing, and supply chain. In this session, learn from Amgen about how they implemented SAP to comply with Good Manufacturing Practices (GMP), how to avoid unexpected challenges with upgrades, how to structure your project, and best practice approaches when migrating your SAP environment to AWS.
With the increasing use of genomic sequencing for scientific discovery, the rate-limiting step for researchers is not in obtaining genetic code, but in having the capacity for storage and computing power to analyze it. In this session, you learn how Eagle Genomics built a cloud platform that uses an open-source workflow engine (eHive), Docker containers to process jobs, and a REST service to manage pipeline runs, all to help customers process genetic sequences up to 20 times faster without additional costs. You also learn how Eagle used Amazon EC2 Spot instances to provide low-cost compute power, and services such as Amazon S3 to power their cost-effective and scalable genetic processing platform.
Implementing stringent security and compliance controls, like GxP, across your enterprise cloud ecosystem, while ensuring the agility of the DevSecOps process requires significant expertise and a lot of time to design, build, and maintain custom operations tooling. In this session, you learn how Turbot used AWS services to simplify IT operations to provide continuous compliance to major life sciences customers. You also hear how life sciences companies like Novartis Institutes for Biomedical Research (NIBR) have become agile, ensured control, and automated best practices using automated policy controls to configure, monitor, and maintain their cloud resources. By doing this, they became more supportive of their researchers' application stack. You also learn how data scientists and core researchers can take advantage of the power of DevOps and cloud computing without compromising enterprise security or data protection requirements.
The Clinical Innovation Labs team at Eli Lilly and Company is leveraging AWS services, design thinking methodology, and co-creation to transform ideas for translating clinical research into real-world solutions. The Eli Lilly "Innovators' Platform" is a rapid prototyping environment that combines patient behavior discovery analysis, art-of-the-possible storyboarding, health device mockup creation, and simulated patient walkthrough analysis. This platform is used to demonstrate the capabilities of emerging technologies and enables participants to contribute ideas to extend the platform. This presentation describes how the team and its processes work together, what components make up the platform, and why it's making an impact on patient health. We discuss the team's use of various AWS services, including AWS Lambda, Amazon API Gateway, AWS IoT, Amazon Cognito, Amazon S3, and AWS Elastic Beanstalk. We also provide demos of what has been built using this platform methodology.
Pharmaceutical company processes tend to be slow when dealing with customer-facing applications that contain FDA-validated messages, all while maintaining infrastructure and security standards. In this session, discover how Mylan, a US-based global generic and specialty pharmaceutical company, overcame these obstacles and provided scalable solutions by leveraging AWS DevOps methods that lower time to market while maintaining robust security and release management practices. During the presentation, learn how Mylan redefined process models, such as infrastructure change management, to establish new security and process standards. Additionally, learn how Mylan used services like Amazon S3, Elastic Load Balancing (ELB), and AWS CloudFormation to define these new models.
Media & Entertainment
AWS Cloud adoption is rapidly taking hold within critical aspects of media business processes. New media and other early adopters have been using the cloud for basic transcoding and streaming for several years. But we have seen a significant increase in media workload adoption across more business-critical and complex workloads in content creation, media supply chain management, and content distribution (e.g., broadcast channel playout, publishing, and OTT). This session includes customer panelists from the visual effects, supply chain, and broadcast industries who now see AWS as the new "normal" for implementing scalable, performant, and cost-effective media workloads. Hear from top leaders in the Media & Entertainment space about their challenges and lessons learned when migrating core media workloads to the cloud.
Learn how Amazon EC2 Spot and Thinkbox Deadline can make your VFX and CG renders explode off the screen, with minimal effort and low cost. This session focuses on rendering workloads combining Deadline (Thinkbox's render farm management software), Thinkbox Marketplace usage-based licensing for flexible render licensing, and Spot for scalable, low-cost computing. Learn how to seamlessly integrate your existing production pipeline, as well as advanced asset management, synchronization, and connections to other AWS services such as AWS storage and networking for advanced workflows, including the extension of on-premises render workflows into the cloud and all-in-cloud rendering pipelines. We also highlight a few real-world examples of customers with actual Hollywood productions.
MAE302: Personalizing PBS National Channels by Shifting to Local Ads and Time Zones Automatically
As a national broadcaster, PBS must present the news at 7PM across multiple time zones while offering their local affiliates options for customizing the ad payloads. By using AWS Elemental Cloud, Amazon S3, Amazon EC2, AWS Direct Connect, and other services, PBS is able to improve the user and operator experience while monetizing video content with server-side ad insertion and time-shifting of their broadcasts. Hear how PBS uses their own solutions and AWS products and services to enhance their national content distribution capabilities.
Every evening, video streaming consumes over 70% of the internet's bandwidth, with demand only expected to increase as young households forego traditional pay TV for OTT services (whether live, on-demand, ad-supported, transactional, subscription, or a combination thereof). In this session, senior tech architects from Netflix, Hulu, and Amazon Video discuss lessons and best practices for hosting the largest-scale video distribution workloads while meeting demanding traffic and reliability requirements. You also hear from AWS experts on the latest developments in cloud-based video processing, distribution, and personalization for both live and on-demand streaming.
Turner Broadcasting is using the AWS Cloud to provide the storage and content processing required to enable mission-critical video libraries. Turner is creating a copy of CNN's 37-year news video library in AWS to take advantage of the cost and architectural benefits of cloud storage. This project has unique requirements around retrieval times, and Turner partnered with AWS to drive specific capabilities, such as the Amazon Glacier expedited and bulk retrieval options. These cloud-based archives enable Turner to efficiently use other cloud-based value-add services, such as AI/ML, search, and media supply chains. Turner's global content exploitation strategies call for extensive versioning of content assets required for distribution to different platforms, products, and regions. Today, this involves complex workflows to derive multiple downstream versions. By adopting the SMPTE Interoperable Master Format (IMF) and cloud-based object storage, Turner will dramatically simplify these workflows by enabling cloud-based automation and elastic scalability. Hear Turner's strategy and implementation around these media workloads, and lessons learned.
Security is paramount for media storage and workloads and can directly impact a studio's bottom line. As core media workloads move to the cloud, it's imperative to examine the security implications of a multi-tenant public cloud environment in light of various asset classifications, content production, and delivery scenarios, as well as content handling during production workloads. In this session, we address questions and concerns about security in the cloud in the context of tier-1 (pre-released) studio content, as well as the MPAA. We highlight a studio's journey to meet Marvel's security requirements and run a tier-1 content workflow in the cloud and discuss what it took to approve the environment on AWS.
In this session, we take a pragmatic approach to enhancing common media workflows built around ingest, media asset management, live video, and OTT on-demand streaming. We show how to extract metadata as an additional intelligence layer for video using Amazon AI services, such as Amazon Rekognition, in combination with turnkey architecture built around AWS Lambda, Amazon ECS, and Amazon EC2 Spot Instances. The capabilities offered by Amazon AI services provide a unique opportunity to eliminate the traditional undifferentiated heavy lifting associated with creating contextual, facial-recognition, and object-based media metadata: that is, who appears in the content, with what, and where. We also discuss a large studio and broadcaster just starting to use these intelligent offerings from AWS as they rethink how to best leverage the business value of their content.
Progressive Web Apps (PWAs) are the future of distributable mobile apps. Learn what they are, what tools you need, and how to build a mobile PWA. Then follow along and build your own mobile PWA. Understand what AWS offers for mobile app developers, and learn how to build a PWA and distribute it through AWS Mobile Hub.
Have you ever thought about building a mobile app, but you're daunted by the technology? Join this session to learn the basics of building native cloud-enabled mobile apps with AWS Mobile Hub. Learn about the tools you need, and then follow along to learn how to build your first mobile app. Understand what AWS offers for mobile app developers, and learn how to build a native app and distribute it through AWS Mobile Hub.
This session describes the importance of digital user engagement using multi-channel messaging to drive effective end-user messaging. Mobile push notifications, SMS text messages, and email help you engage with users and drive your desired business outcomes through improved KPIs that are important for your business. Better user engagement helps drive increased conversions and improve retention for your business.
Successful mobile applications rely on a broad spectrum of backend services that support the features and functionality of the front-end mobile application. The success of the mobile application depends on those backend services being built so that they can scale as the application's audience grows, sometimes explosively when an app takes off. They must also protect the security and privacy of the data used in the application.
Vivint Solar is a leading full-service residential solar provider in the United States with more than 100,000 solar systems to its credit. Vivint Solar uses MicroStrategy on AWS to keep track of internal processes, including operations, finance, customer support, and human resources. With MicroStrategy on AWS, Vivint Solar could deploy a fully configured MicroStrategy environment in just an hour, giving their IT team more time to focus on other high-value projects. Today, Vivint Solar uses over 25 different AWS tools and services, including AWS Lambda, AWS Step Functions, Amazon SQS, AWS CloudFormation, and AWS CodeDeploy, to track and analyze their data. Join us in this session to learn how Vivint Solar is using analytics to transform the way they do business and revolutionize the way people think about renewable energy. Session sponsored by MicroStrategy
Testing your mobile app is important! In this session, learn about UI testing and how to build UI tests, then run them on a variety of mobile devices in the cloud. Learn how you can go completely device-free by using devices in the cloud for your development. Also learn about using tools like Appium and Jenkins as part of your testing and QA process. We use PWAs and native apps in this session to show the difference.
Amazon Music is a popular music streaming service that uses Amazon Pinpoint to help drive growth of the service through effective engagement with its user base. In this session, learn the ins and outs of how the Amazon Music team uses the Amazon Pinpoint service to drive user growth and retention to achieve the desired business outcomes, achieve KPIs, and use app and user analytics for insight for further improvement projects.
Driving customer engagement is critical to the success of mobile and web applications. The three main components of customer engagement are customer analytics, demographic segmentation, and multi-channel messaging. This session provides a deep dive into how Amazon Pinpoint drives customer engagement. Learn how Amazon Pinpoint's integrated campaign management is used by developers and marketers to optimize messaging and communications with customers across channels, including mobile push notifications, SMS, and email.
Learn how to use Amazon Cognito to build user identity management workflows, including user onboarding, sign-up, and sign-in for mobile and web applications. Learn how to customize the look and feel of the UI and UX of the screens and pages, integrate with third-party social identity providers such as Facebook, Google, and Twitter, and use SAML to federate with enterprise directory services.
This session covers the current State of the Union for mobile application development on AWS, providing an overview of the services available to mobile developers from AWS. We discuss the entire lifecycle of the mobile application process from building, testing, deploying, and production, to growing your user base and business with ongoing engagement and campaigns.
With the "freemium" model as the new norm for mobile apps, developers are supplementing their revenue with ad-based monetization. To maximize ad revenue, developers are using multiple ad networks as a best practice. This has created the need to track performance across ad networks and optimize for maximum revenue and fill rate. In this session, we create an app with ads from multiple ad networks. We illustrate how ad revenue can be maximized across these networks by tracking key metrics. Finally, we provide examples of segmenting users based on ad interactions and attributes, to further optimize ad monetization.
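As a sketch of the metric tracking described above, the snippet below ranks hypothetical ad networks by effective revenue per thousand ad requests (eCPM discounted by fill rate); the network names and numbers are purely illustrative:

```python
# Hypothetical ad-network metrics: eCPM is revenue per 1,000 impressions
# actually served; fill rate is the fraction of ad requests filled.
networks = {
    "network_a": {"ecpm": 2.50, "fill_rate": 0.60},
    "network_b": {"ecpm": 1.80, "fill_rate": 0.95},
    "network_c": {"ecpm": 3.10, "fill_rate": 0.40},
}

def effective_rpm(ecpm, fill_rate):
    """Revenue per 1,000 ad *requests*, not just filled impressions."""
    return ecpm * fill_rate

def rank_networks(networks):
    """Order networks by effective revenue per request, best first."""
    return sorted(
        networks,
        key=lambda name: effective_rpm(**networks[name]),
        reverse=True,
    )

print(rank_networks(networks))  # ['network_b', 'network_a', 'network_c']
```

Note that the network with the highest raw eCPM loses here: its low fill rate leaves too many requests unmonetized, which is exactly why tracking both metrics matters.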
In this session, we will build a highly scalable mobile app, website, and serverless mobile backend architecture that demonstrates on-demand video streaming, adaptive multi-bitrate transcoding, and video content ingestion. We use AWS Lambda and Amazon Elastic Transcoder to automatically convert high resolution videos upon upload, Amazon CloudFront to stream video content to devices using network-aware adaptive multi-bitrate protocols (such as HLS), Amazon Cognito to authenticate users, and AWS Mobile Hub and AWS CloudFormation to automate setting up the required resources.
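The network-aware adaptive-bitrate idea above can be sketched in a few lines: a simple client-side heuristic (not CloudFront's or any player's actual algorithm) that picks the highest rendition fitting within measured bandwidth. The bitrate ladder is hypothetical:

```python
# Hypothetical HLS bitrate ladder (kbps), as a transcoding pipeline
# might produce for adaptive multi-bitrate streaming.
RENDITIONS_KBPS = [400, 800, 1500, 3000, 6000]

def pick_rendition(measured_kbps, headroom=0.8):
    """Choose the highest rendition that fits within the measured
    bandwidth, keeping some headroom to avoid rebuffering."""
    budget = measured_kbps * headroom
    eligible = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(eligible) if eligible else min(RENDITIONS_KBPS)

print(pick_rendition(2500))  # 1500: the 3000 rendition exceeds the budget
```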
MBL402: NEW LAUNCH! Data Driven Apps with GraphQL: AWS AppSync Deep Dive
MBL404: NEW LAUNCH! Realtime and Offline application development using GraphQL with AWS AppSync
Given the increasing popularity of natural language interfaces such as voice user interfaces and conversational artificial intelligence (AI), Ally® Bank was looking to interact with customers by enabling direct transactions through conversation or voice. They also needed to develop a capability that allows third parties to connect to the bank securely for information sharing and exchange, using OAuth, an open authorization protocol seen as the future of secure banking technology. Cognizant's Architecture team partnered with Ally Bank's Enterprise Architecture group and identified the right product for OAuth integration with Amazon Alexa and third-party technologies. In this session, we discuss how building products with conversational AI helps Ally Bank offer an innovative customer experience; increase retention through improved data-driven personalization; increase the efficiency and convenience of customer service; and gain deep insights into customer needs through data analysis and predictive analytics to offer new products and services. Session sponsored by Cognizant
In this session, we dive into design paradigms and architectures that allow you to leverage the power of AWS AI services and analytics to build intelligent AI systems. Since 2001, the Washington County jail management system has archived hundreds of thousands of mugshots, and by using Amazon Rekognition and other AWS services, the county was able to build a powerful tool for identifying suspects.
Deep Learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. In this session, we provide an overview of Deep Learning focusing on relevant application domains. We introduce popular Deep Learning frameworks such as TensorFlow and Apache MXNet, and we discuss how to select the right fit for your targeted use cases. We also walk you through other key considerations for optimizing Deep Learning training and inference, including setting up and scaling your infrastructure on AWS.
Amazon Polly is a service that turns text into lifelike speech, making it easy to develop applications that use high-quality speech to increase engagement and accessibility. Get a glimpse into successful applications that use Amazon Polly text-to-speech service to enable an app to talk to its users. Attendees will benefit from understanding real-world business use cases, and learn how to add feature-rich voice capabilities to their new or existing applications.
In this session, we cover an integration of Amazon Lex with a contact center solution. We demonstrate how an Amazon Lex chatbot can be inserted into an interactive voice response (IVR) workflow in a contact center, enabling users to interact with the chatbot using natural language. We walk through a ready-to-deploy integration that includes building the bot, setting up the IVR, and managing the call routing. We also describe the best practices for selective routing based on user intent, exchange of information between the chatbot/IVR, and handover to a human agent.
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation uses deep learning to deliver more accurate and more natural sounding translation than older statistical and rule-based translation algorithms. Amazon Translate enables translation at scale so that you can easily translate large volumes of text efficiently to handle tasks like localizing content for international users and facilitating real-time cross-lingual communication. Join this session to learn more and find out how you can get started with Amazon Translate today!
Join us to hear about our strategy for driving machine learning innovation for our customers and learn what's new from AWS in the machine learning space. Swami Sivasubramanian, VP of Amazon Machine Learning, will discuss and demonstrate the latest new services for ML on AWS: Amazon SageMaker, AWS DeepLens, Amazon Rekognition Video, Amazon Translate, Amazon Transcribe, and Amazon Comprehend. Attend this session to understand how to make the most of machine learning in the cloud.
AWS has launched Amazon Sumerian. Sumerian lets you create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily without requiring any specialized programming or 3D graphics expertise. In this session, we will introduce you to Sumerian, and how you can build highly immersive and interactive scenes for the enterprise that run on popular hardware such as Oculus Rift, HTC Vive, and iOS mobile devices.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech to text capability to their applications. The ASR service can be used across a breadth of industries. For example, customer contact centers can convert call recordings into text for further analysis of what drives positive outcomes; media content producers can automate subtitling workflows for greater reach, and marketers and advertisers can enhance content discovery and display more targeted advertising based on the extracted metadata.
This planet is a big place filled with amazing and unusual things. Understanding every object, location, and action on this pale blue dot is an enormous challenge. As the world's leading provider of high-resolution Earth imagery, data, and analysis, DigitalGlobe faces this challenge every day. They use automatic computer vision and machine learning where possible, but so far, the only true solution requires the most powerful information processing machine we know: the human brain. Scaling this solution to work on the trillions of satellite pixels collected by DigitalGlobe every day requires thousands of brains, all working in harmony. To address this, DigitalGlobe | Radiant's (now Radiant Solutions) Tomnod service uses Amazon Mechanical Turk, a crowdsourcing internet platform, to identify small objects appearing in large areas of new satellite imagery. Tomnod is heavily used for commercial and humanitarian purposes. In this session, you hear how Radiant Solutions uses crowdsourcing to help solve large-scale computer vision and machine learning problems.
Amazon Lex is a service for building conversational interfaces into any application using voice and text, and Amazon Polly is a service that turns text into lifelike speech. This session combines both of these AWS services: the presenter will demonstrate how to build a Help Desk chatbot that features a spoken-voice interface. Attendees will gain the foundational skills needed to enrich their applications with natural, conversational interfaces. Liberty Mutual Insurance will also present their chat platform architecture to demonstrate how they are using Amazon Lex in their organization as an employee digital assistant.
In this session, you will learn best practices for implementing simple to advanced AI/ML use cases on AWS. First, we will review the decision points for using democratized services such as Amazon Lex and Amazon Polly, and for integrating with services such as Amazon Connect. Then we will look at real use cases: optimizing the customer experience with chatbots, and streamlining customer service by predicting responses with Amazon Connect. Finally, we will dive deep into the most common of these patterns and cover design and implementation considerations. By the end of the session, you will understand how to use Amazon Lex to optimize the user experience through different user interactions.
Developing deep learning applications just got even simpler and faster. In this session, you will learn how to program deep learning models using Gluon, the new intuitive, dynamic programming interface available for the Apache MXNet open-source framework. We'll also explore neural network architectures such as multi-layer perceptrons, convolutional neural networks (CNNs) and LSTMs.
In this session, Reza Zadeh, CEO of Matroid, presents a Kubernetes deployment on Amazon Web Services that provides customized computer vision to a large number of users. Reza offers an overview of Matroid's pipeline and demonstrates how to customize computer vision neural network models in the browser, followed by building, training, and visualizing TensorFlow models, which are provided at scale to monitor video streams.
Motion detection triggers have reduced the amount of video recorded by modern devices. But maybe you want to reduce that further—maybe you only care if a car or a person is on-camera before recording or sending a notification. Security cameras and smart doorbells can use Amazon Rekognition to reduce the number of false alarms. Learn how device makers and home enthusiasts are building their own smart layers of person and car detection to reduce false alarms and limit video volume. Learn too how you can use face detection and recognition to notify you when a friend has arrived.
Although there are many ways to optimize the speech generated by Amazon Polly's text-to-speech voices, you might find it challenging to apply the most effective enhancements in each situation. Learn how you can control pronunciation, intonation, and timing for text-to-speech voices. In this session, you get a comprehensive overview of the available tools and methods available for modifying Amazon Polly speech output, including SSML tags, lexicons, and punctuation. You also get recommendations for streamlining application of these techniques. Come away with insider tips on the best speech optimization techniques to provide a more natural voice experience.
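As a rough illustration of the SSML techniques covered here, the helper below assembles SSML with prosody and break tags. The helper functions are hypothetical, but `<speak>`, `<prosody>`, and `<break>` are standard SSML elements that Amazon Polly accepts:

```python
from xml.sax.saxutils import escape

def ssml_sentence(text, rate="medium", pitch="medium", pause_ms=0):
    """Wrap text in SSML prosody controls, escaping reserved XML
    characters, and optionally append a timed pause."""
    body = f'<prosody rate="{rate}" pitch="{pitch}">{escape(text)}</prosody>'
    if pause_ms:
        body += f'<break time="{pause_ms}ms"/>'
    return body

def ssml_document(*sentences):
    """Join sentences into a complete <speak> document."""
    return "<speak>" + "".join(sentences) + "</speak>"

doc = ssml_document(
    ssml_sentence("Welcome back.", rate="slow", pause_ms=300),
    ssml_sentence("Your order has shipped."),
)
print(doc)
```

Passing the resulting string as SSML-typed input to a text-to-speech call slows the greeting and inserts a 300 ms pause before the second sentence.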
Enterprises must transform at the pace of technology. Through chatbots built with Amazon Lex, enterprises are improving business productivity, reducing execution time, and taking advantage of efficiency savings for common operational requests. These include inventory management, human resources requests, self-service analytics, and even the onboarding of new employees. In this session, learn how Infor integrated Amazon Lex into their standard technology stack, with several use cases based on advisory, assistant, and automation roles deeply rooted in their expanding AI strategy. This strategy powers one of the major functionalities of Infor Coleman to enable their users to make business decisions more quickly.
In this session, discover how to build a multichannel conversational interface that leverages a preprocessing layer in front of Amazon Lex. This preprocessing layer can enable customers to integrate their conversational interface with external services and use multiple specialized Amazon Lex chatbots as part of an overall solution. As an example of how to integrate with an external service, learn how to integrate with Skype. Watch it in action through a chatbot demonstration with interaction through Skype messaging and voice.
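A minimal sketch of such a preprocessing layer, assuming simple keyword matching (a production router would call each bot's runtime API and use richer intent scoring); the bot names and keyword sets are made up:

```python
import re

# Hypothetical specialized bots and the vocabulary that should route
# an utterance to each; SupportBot is the catch-all fallback.
BOT_KEYWORDS = {
    "BillingBot": {"invoice", "bill", "payment", "refund"},
    "OrderBot": {"order", "shipping", "delivery", "track"},
    "SupportBot": set(),
}

def route_utterance(utterance):
    """Pick the specialized bot whose keywords best match the utterance."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    best, best_score = "SupportBot", 0
    for bot, keywords in BOT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = bot, score
    return best

print(route_utterance("Where is my order and when is delivery?"))  # OrderBot
print(route_utterance("Hello there"))                              # SupportBot
```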
Join Facebook's Pieter Noordhuis to learn about Caffe2, a lightweight and scalable framework for deep learning. You'll learn about its features, the way Facebook applies it in production, and how to use Caffe2 to create and train your own deep learning models on Amazon EC2 P3 instances, which use the latest NVIDIA Volta architecture for GPU-acceleration. This session will also discuss the cost tradeoffs and time to model measurements for deep learning.
Companies can have large amounts of image and video content in storage with little or no insight about what they have—effectively sitting on an untapped licensing and advertising goldmine. Learn how media companies are using Amazon Rekognition APIs for object or scene detection, facial analysis, facial recognition, or celebrity recognition to automatically generate metadata for images to provide new licensing and advertising revenue opportunities. Understand how to use Amazon Rekognition APIs to index faces into a collection at high scale, filter frames from a video source for processing, perform face matches that populate a person index in Elasticsearch, and use the Amazon Rekognition celebrity match feature to optimize the process for faster time to market and more accurate results.
Reinforcement learning is emerging as a powerful tool for autonomous driving, enabling complex maneuvers in a wide range of traffic situations. This session demonstrates how to build a reinforcement learning engine for autonomous vehicles on AWS, showing how it receives environmental input from object detection and produces outputs for controlling the vehicle's steering, acceleration, and braking.
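As a toy illustration of the technique (not the session's actual engine), here is tabular Q-learning on a one-dimensional "road" where the agent must learn to steer right to reach a goal cell; real driving agents use far richer state, continuous controls, and neural function approximation:

```python
import random

random.seed(0)

# Toy track: cells 0..4, goal at cell 4, actions steer left (-1) or
# right (+1). A small per-step cost encourages reaching the goal fast.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):  # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.01
        future = 0.0 if s2 == GOAL else gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + future - Q[(s, a)])
        s = s2

# Greedy policy for each non-goal cell: +1 means "steer right".
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

After training, the greedy policy steers right in every cell, since reward propagates backward from the goal through the Q-table.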
Deep learning and IoT are emerging as an innovative pairing due to the explosion of data produced by a growing number of devices. The data these devices generate needs to be analyzed quickly to produce meaningful insights and take action. In this session, we discuss how deep learning can be applied to real-world IoT use cases with a demo of computer vision and anomaly detection. We also do a step-by-step tutorial on how to develop deep learning models for computer vision at the edge using NVIDIA Jetson.
At Netflix, we use machine learning (ML) algorithms extensively to recommend relevant titles to our 100+ million members based on their tastes. Everything on the member home page is an evidence-driven, A/B-tested experience that we roll out backed by ML models. These models are trained using Meson, our workflow orchestration system. Meson distinguishes itself from other workflow engines by handling more sophisticated execution graphs, such as loops and parameterized fan-outs. Meson can schedule Spark jobs, Docker containers, bash scripts, gists of Scala code, and more. Meson also provides a rich visual interface for monitoring active workflows and inspecting execution logs. It has a powerful Scala DSL for authoring workflows as well as a REST API. In this session, we focus on how Meson trains recommendation ML models in production, and how we have re-architected it to scale up to meet a growing need for broad ETL applications within Netflix. As a driver for this change, we have had to evolve the persistence layer for Meson. We talk about how we migrated from Cassandra to Amazon RDS backed by Amazon Aurora.
Join us for a deep dive on how to use Amazon Rekognition for real world image analysis. Learn how to integrate Amazon Rekognition with other AWS services to make your image libraries searchable. Also learn how to verify user identities by comparing their live image with a reference image, and estimate the satisfaction and sentiment of your customers. We also share best practices around fine-tuning and optimizing your Amazon Rekognition usage and refer to AWS CloudFormation templates.
How do you create a frictionless travel experience, where merely by walking into the hotel you're automatically checked in and hotel associates greet you by name? By combining sophisticated machine learning, location/motion tracking, event streaming, and serverless architecture with AWS components run in the cloud, we're able to create the foundation for this experience without significant time and capital investment. Accenture will demonstrate how to determine accurate location and motion by passively scanning Bluetooth signals, establish who and where you are, and deduce your intent by creating a network of information including profile, motion, and activity. We are reinventing the hospitality experience, and are beginning to use the technology in industries as diverse as healthcare and mining. Join Accenture for a demo and architecture discussion focused on the power of combining architecture components to optimize the customer experience, speed to market, and operating cost. Attendees will learn more about the considerations, risks, and implications of the company's cloud transformation program; see examples of reference architectures and implementation guides; and understand what contributed to the success of the program. The patterns presented will be broadly applicable to complex, global organizations with aspirations to make the journey to the AWS Cloud. Session sponsored by Accenture
In this session, we will provide an overview of the latest Amazon Rekognition features, including real-time face recognition, text-in-image recognition, and improved face detection. Amazon Rekognition recently added three new capabilities: detection and recognition of text in images; real-time face recognition across tens of millions of faces; and detection of up to 100 faces in challenging crowded photos. We will cover the features, benefits, and use cases for these additions, highlighting customer examples, and give a brief demo showcasing Amazon Rekognition.
Tensors are higher order extensions of matrices that can incorporate multiple modalities and encode higher order relationships in data. This session will present recently developed tensor algorithms for topic modeling and deep learning with vastly improved performance over existing methods. Topic models enable automated categorization of large document corpora without requiring labeled data for training. They go beyond simple clustering since they allow documents to have multiple topics. Tensor methods provide a fast, guaranteed method for training these models. They incorporate co-occurrence statistics of triplets of words in documents. We are releasing a fast and robust implementation that vastly outperforms existing solutions while providing significantly faster training times and better topic quality. Moreover, training and inference are decoupled in our algorithm, so the user can select the relevant part based on their requirements. We will present benchmarks across multiple datasets of different sizes and AWS instance types, and provide notebook examples.
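To make the co-occurrence idea concrete, the toy sketch below counts unordered word triples per document, the raw statistic that third-order tensor methods consume; real implementations build weighted, normalized tensors over large corpora rather than plain counts:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: three tiny "documents".
docs = [
    "cloud compute storage",
    "cloud storage network compute",
    "music guitar drums",
]

# Count each unordered triple of distinct words co-occurring in a doc.
triples = Counter()
for doc in docs:
    words = sorted(set(doc.split()))
    for t in combinations(words, 3):
        triples[t] += 1

print(triples[("cloud", "compute", "storage")])  # 2: appears in two docs
```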
During this session, we will provide an overview of Amazon Rekognition Video, a deep learning powered video analysis service that tracks people, detects activities, and recognizes objects, celebrities, and inappropriate content. Amazon Rekognition Video can detect and recognize faces in live streams. Rekognition Video also analyzes existing video stored in Amazon S3 and returns specific labels of activities, people and faces, and objects with time stamps so you can easily locate the scene. For people and faces, it also returns the bounding box, which is the specific location of the person or face in the frame. We will also cover different use cases for Amazon Rekognition Video in applications such as security and public safety, and media and entertainment.
AWS has launched Amazon Sumerian. Sumerian lets you create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily without requiring any specialized programming or 3D graphics expertise. In this session, we will dive deep into details about Sumerian so you can see what's under the hood. We will cover creating a project, using the visual state machine, connecting an Amazon Sumerian scene to AWS services, and using a Sumerian Host to add presence to your applications.
In machine learning, training large models on massive amounts of data usually improves results. Our customers report, however, that training such models and deploying them is either operationally prohibitive or outright impossible for them. Amazon AI Algorithms is designed to solve this problem. It is a collection of distributed streaming ML algorithms that scale to any amount of data. They are fast and efficient because they distribute across CPU/GPU machines and share a collective distributed state via a highly optimized parameter server. They scale to an infinite amount of data because they operate in the streaming model. This means they require only one pass over the data and never increase their resource consumption, allowing training to be paused, resumed, and snapshotted, and even allowing algorithms to consume Kinesis streams directly, providing an "always on" training mechanism. They are production ready: trained models are automatically containerized and usable in production using Amazon SageMaker hosting. Finally, we provide a convenient SDK that allows scientists to create new algorithms that operate in this model and enjoy all the benefits above. This talk will discuss our design choices and some of the internal workings of the system. It will also describe the distributed streaming model and its numerous benefits to machine learning practitioners. We will show how to invoke large-scale learning from Amazon SageMaker or Amazon EMR, and how to host the solution. Time permitting, we will show how to develop a new algorithm using the SDK.
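Welford's online mean and variance is a classic example of the streaming model described above: one pass over the data with constant-size state that can be snapshotted and resumed at any point. This is an illustrative stand-in, not one of the Amazon AI Algorithms:

```python
class StreamingStats:
    """One-pass (streaming) mean and variance via Welford's method.
    State is three numbers regardless of how much data is seen, so it
    can be checkpointed and resumed mid-stream."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Each data point is consumed exactly once.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Population variance of everything seen so far."""
        return self.m2 / self.n if self.n else 0.0

stats = StreamingStats()
for x in [2.0, 4.0, 6.0, 8.0]:
    stats.update(x)
print(stats.mean, stats.variance)  # 5.0 5.0
```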
Customers have several options for architecting recommendation engines, and a graph database is the best way to create a real-time recommendation engine; this session includes a demo of the newly announced Amazon Neptune service. We will look at approaches to using machine learning and graph representations for cyber investigative analytics, and give a demonstration of Graphistry using Amazon Neptune and a graph-based approach to detecting anomalies in NetFlow data.
The need for Natural Language Processing (NLP) is gaining more importance as the amount of unstructured text data doubles every 18 months and customers are looking to extend their existing analytics workloads to include natural language capabilities. Historically, this data had been prohibitively expensive to store and early manual processing evolved into rule-based systems, which were expensive to operate and inflexible. In this session we will show you how you can address this problem using Amazon Comprehend.
Building a conversational AI experience that can respond to a wide variety of inputs and situations depends on gathering high-quality, relevant training data. Dialog with humans is an important part of this training process. In this session, learn how researchers at Facebook use Amazon Mechanical Turk within the ParlAI (pronounced “parlay”) framework for training and evaluating AI models to perform data collection, human training, and human evaluation. Learn how you can use this interface to gather high-quality training data to build next-generation chatbots and conversational agents.
Ever since the term “crowdsourcing” was coined in 2006, it's been a buzzword for technology companies and social institutions. In the technology sector, crowdsourcing is instrumental for verifying machine learning algorithms, which, in turn, improves the user's experience. In this session, we explore how Pinterest adapted to an increased reliance on human evaluation to improve their product, with a focus on how they've integrated with Mechanical Turk's platform. This presentation is aimed at engineers, analysts, program managers, and product managers who are interested in how companies rely on Mechanical Turk's human evaluation platform to better understand content and improve machine learning algorithms. The discussion focuses on the analysis and product decisions related to building a high-quality crowdsourcing system that takes advantage of Mechanical Turk's powerful worker community.
Artificial intelligence is going to be part of every software workload in the not-too-distant future. Partnering with AWS, Intel is dedicated to bringing the best full-stack solutions to help solve business and societal problems by helping turn massive datasets into information. Thorn is a non-profit organization, co-founded by Ashton Kutcher, focused on using technology innovation to combat child sexual exploitation. It is using MemSQL to provide a new approach to machine learning and real-time image recognition by making use of the high-performance Intel SIMD vector dot product functionality. This session covers machine learning on Intel Xeon processor based platforms and features speakers from Intel, Thorn, and MemSQL. Session sponsored by Intel
In this talk, you will learn how to use or create deep learning architectures for image recognition and other neural network computations in Apache Spark. Alex, Tim, and Sujee will begin with an introduction to deep learning using BigDL. Then they will explain and demonstrate how image recognition works using step-by-step diagrams and code, which will give you a fundamental understanding of how you can perform image recognition tasks within Apache Spark. Then, they will give a quick overview of how to perform image recognition on a much larger dataset using the Inception architecture. BigDL was created specifically for Spark and takes advantage of Spark's ability to distribute data processing workloads across many nodes. As an attendee in this session, you will learn how to run the demos on your laptop, on your own cluster, or using the BigDL AMI in the AWS Marketplace. Either way, you will walk away with a much better understanding of how to run deep learning workloads using Apache Spark with BigDL. Session sponsored by Intel
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems. You'll also hear how and why Intuit is using Amazon SageMaker on AWS for real-time fraud detection.
MCL402: Building Content Recommendation Systems Using Apache MXNet and Gluon
Recommendations are becoming an integral part of how many businesses serve customers, from targeted shopping to on-demand video. In this session, you'll learn the key elements needed to build a recommendation system using Gluon, the new intuitive, dynamic programming interface for Apache MXNet. You'll use matrix factorization techniques to build a video-on-demand solution using deep learning.
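As a framework-free sketch of the matrix factorization idea (the session itself uses Gluon and Apache MXNet), the snippet below learns user and item latent vectors by stochastic gradient descent on a toy ratings matrix, so their dot product approximates observed ratings:

```python
import random

random.seed(42)

# Toy observed ratings: (user, item) -> rating on a 1-5 scale.
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0,
           (2, 1): 4.0, (2, 2): 5.0}
n_users, n_items, k = 3, 3, 2  # k latent factors

# Small random initialization of user (U) and item (V) factor matrices.
U = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    """Predicted rating is the dot product of the latent vectors."""
    return sum(U[u][f] * V[i][f] for f in range(k))

lr, reg = 0.05, 0.01  # learning rate, L2 regularization
for _ in range(2000):
    for (u, i), r in ratings.items():
        err = r - predict(u, i)
        for f in range(k):
            uf, vf = U[u][f], V[i][f]
            U[u][f] += lr * (err * vf - reg * uf)
            V[i][f] += lr * (err * uf - reg * vf)

mse = sum((r - predict(u, i)) ** 2
          for (u, i), r in ratings.items()) / len(ratings)
print(round(mse, 3))  # the model fits the observed ratings closely
```

Unobserved (user, item) cells can then be scored with `predict` to rank candidate items for each user, which is the core of the recommendation step.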
AWS Marketplace & Service Catalog
In this session, you'll learn how to leverage AWS Service Catalog, AWS Lambda, AWS Config and AWS CloudFormation to create a robust, agile environment while maintaining enterprise standards, controls and workflows. Fannie Mae demonstrates how they are leveraging this solution to integrate with their existing workflows and CMDB/ITSM systems to create an end-to-end automated and agile IT lifecycle and workflow.
Organizations use application delivery controllers (ADCs) to ensure that their most important applications receive the best performance across their network. In this session, you learn how and why Salesforce used the F5 BIG-IP platform, an ADC solution from AWS Marketplace, during a migration to AWS. To preserve an existing skillset within their business, Salesforce chose AWS Marketplace to first evaluate the solution on the AWS platform before ultimately selecting it as part of their international rollout. You see how BIG-IP performs application routing and security, and how it works with existing AWS networking solutions to provide a consistent experience for domestic and international rollouts. You also learn how Salesforce successfully used the AWS Marketplace Private Offers program to procure an enterprise license and consolidate the expenditure onto their AWS bill.
Find out how Citrix built a solution using Matillion ETL for Amazon Redshift from AWS Marketplace to load all data into an Amazon Redshift cluster, allowing them to do their analytics on the entire environment at a single time. We'll discuss the transition made to consolidate multiple disparate databases in order to run analytic workloads, get a holistic view of all their data sources, and prevent inconsistent data from being captured.
In this session, we discuss the challenges that regulated industries (e.g., government, financial, and healthcare) face in demonstrating compliance with security requirements. Learn which AWS Marketplace services enable appropriate threat mitigations in cloud computing through customer use cases, and understand how to minimize your burden. We also demonstrate methods to reduce business impact while increasing security effectiveness and reducing risk in your environment.
As customers put more workloads into AWS, the number of VPCs a customer needs to manage also grows. VPCs can exist across geographically disparate AWS Regions, or run in separate AWS accounts connecting to a common VPC that serves as a global network transit center. This session shows you how to implement a networking construct that AWS calls a transit VPC using the Cisco Cloud Services Router. This network topology simplifies network management and minimizes the number of connections that you need to set up and manage. Even better, it is implemented virtually and doesn't require physical network gear or a physical presence in a colocation transit hub. Come hear why customers are procuring this service through AWS Marketplace and how customers are using the transit VPC for private networking, shared connectivity, and cross-account AWS usage.
Come learn how NASDAQ used AWS Marketplace to purchase and launch AppDynamics' unified Application Performance Management (APM) and business performance monitoring solution for their migration to AWS. With AppDynamics, NASDAQ was able to move their critical applications to AWS and used AppDynamics to accelerate, visualize, and validate the migration process. This allowed NASDAQ to improve their applications in AWS with business performance monitoring and make clear, understandable correlations between the quality of their customer experience and the performance of their applications.
MSC304: Enabling Big Data Computing at Pfizer with AWS Service Catalog and AWS Lambda
In this session, data analysts, big data administrators, system administrators, developers, and IT managers learn how to create a robust computing environment for their own teams. As enterprises move to the cloud, providing secure, governed turnkey solutions at scale to a broad set of users poses its own challenges: organizations need chargeback and tracking mechanisms while also rapidly creating new turnkey solutions that are readily available to a broad set of end users, in order to keep up with innovation. With AWS Service Catalog, AWS Lambda, Amazon CloudWatch Events, Amazon DynamoDB, and AWS CloudFormation, Pfizer's Big Data team is defining and enabling the next paradigm of computing at Pfizer.
In this session, learn how customers have gained greater control over their data and improved cost savings by using NetApp ONTAP Cloud from AWS Marketplace, a curated catalog to find, buy, and deploy third-party software. We dive into how to minimize your storage footprint by using enterprise-class storage features, such as data deduplication, compression, and snapshots, with zero impact on app performance. You also learn how you can accelerate application development using FlexClone technology to clone images and quickly create multiple environments that look just like your original data.
In this session, we walk through the fundamentals of Amazon VPC. First, we cover build-out and design fundamentals for VPCs, including picking your IP space, subnetting, routing, security, NAT, and much more. We then transition into different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision-makers interested in understanding the building blocks that AWS makes available with Amazon VPC. Learn how you can connect VPCs with your offices and current data center footprint.
This session provides an overview of IPv6 and covers key aspects of AWS support for the protocol. We discuss Amazon S3 and S3 Transfer Acceleration, Amazon CloudFront and AWS WAF, Amazon Route 53, AWS IoT, Elastic Load Balancing, and the virtual private cloud (VPC) environment of Amazon EC2. The presentation assumes solid knowledge of IPv4 and these AWS services.
Many customers are hesitant to adopt SaaS solutions because of concerns about the safety of network connectivity that traverses the internet. It is also difficult to manage firewall rules, NAT gateways, and VPN connections. AWS PrivateLink provides a solution that lets customers' applications, whether in a VPC or in their own data centers, connect to SaaS solutions in a highly scalable and highly available manner, while keeping all network traffic within the AWS network.
Learn about the new services and features we are launching across AWS Networking this year, and about our vision for continued innovation in this space and the ongoing evolution of networking capabilities and performance. Gain insight into how these new capabilities help everyone, from developers to enterprises to startups, drive greater security and reliability, improved flexibility, and higher performance. Join Dave Brown, director of Amazon EC2 Networking, and learn more about Amazon Virtual Private Cloud (VPC), Elastic Load Balancing, AWS PrivateLink, VPN, AWS Direct Connect, and more. In addition, we cover new releases and show how easy it is to get started. You leave armed with details of how everything fits together in real-world customer scenarios.
Many enterprises on their journey into the cloud require consistent and highly secure connectivity between their existing data center and AWS footprints. In this session, we walk through the different architecture options for establishing this connectivity using AWS Direct Connect and VPN. With each option, we evaluate the considerations and discuss risk, performance, encryption, and cost. As we walk through these options, we answer some of the common questions that arise from enterprises that tackle design and implementation. You'll learn how to make connectivity decisions that are suitable for your workloads, and how to best prepare against business impact in the event of failure.
In this mid-level architecture session, we cover everything you need to get started with Amazon Route 53, AWS's highly available DNS service. Learn how to use public DNS, including routing techniques such as weighted round-robin, latency-based routing, and geo DNS. Learn also how to configure DNS failover using health checks, how and when to use private DNS within your VPC, and how Amazon Route 53 interacts with Amazon EC2's DNS for instance naming and DNS resolution across your network. We also walk through how to use Traffic Flow to manage traffic to your applications' globally distributed endpoints to optimize for constraints such as endpoint load, the health of your resources, geographic restrictions, and internet latency.
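The weighted round-robin technique above can be sketched as a record-set change batch. This is a minimal illustration of the payload shape that Route 53's ChangeResourceRecordSets API accepts; the domain, IPs, and weights are hypothetical, and a real deployment would submit this batch through the Route 53 API.

```python
import json

def weighted_record(name, ip, weight, set_id, ttl=60):
    """Build one weighted-round-robin A record in the shape Route 53's
    ChangeResourceRecordSets API expects. Name and IP values are hypothetical."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": set_id,   # distinguishes records sharing one name
            "Weight": weight,          # traffic share = weight / sum(weights)
            "TTL": ttl,
            "ResourceRecords": [{"Value": ip}],
        },
    }

# Send roughly 80% of traffic to the primary endpoint, 20% to a canary.
change_batch = {
    "Comment": "weighted round-robin example",
    "Changes": [
        weighted_record("app.example.com.", "203.0.113.10", 80, "primary"),
        weighted_record("app.example.com.", "203.0.113.20", 20, "canary"),
    ],
}
print(json.dumps(change_batch, indent=2))
```

Route 53 answers each query with a record chosen in proportion to its weight, so the split above sends roughly four in five queries to the primary endpoint.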
Netflix is big and dynamic. At Netflix, IP addresses mean nothing in the cloud. This is a big challenge with Amazon VPC Flow Logs. VPC Flow Log entries only present network-level information (L3 and L4), which is virtually meaningless. Our goal is to map each IP address back to an application, at scale, to derive true network-level insight within Amazon VPC. In this session, the Cloud Network Engineering team discusses the temporal nature of IP address utilization in AWS and the problem with looking at OSI Layer 3 and Layer 4 information in the cloud.
In this session, we explore the new Network Load Balancer that was launched as part of the Elastic Load Balancing service and can load balance any kind of TCP traffic. It offers customers a high-performance, scalable, low-cost load balancer that can handle millions of requests per second at very low latencies. Come and learn more about this new Network Load Balancer.
Amazon Virtual Private Cloud (Amazon VPC) enables you to have complete control over your AWS virtual networking environment. Given this control, have you ever wondered how new Amazon VPC features will affect the way you design your AWS networking infrastructure, or even change existing architectures that you use today? In this session, we explore the new design and capabilities and how you might use them.
AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications. Built with the same underlying technology that powers NAT gateways, Network Load Balancers, and AWS service endpoints, PrivateLink is now available for use with your own applications. In this session, we do a deep dive into the underlying network technology used by PrivateLink and explore how PrivateLink can be deployed to improve your network topologies and application architectures. We also look at how PrivateLink improves microservice architectures, allowing services to be vended between AWS accounts and over AWS Direct Connect connections.
Many applications are network I/O bound, including common database-based applications and service-based architectures. But operating systems and applications are often not tuned to deliver high performance. This session uncovers hidden issues that lead to low network performance, and shows you how to overcome them to obtain the best network performance possible.
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail about Elastic Load Balancing configuration and day-to-day management, and also its use with Auto Scaling. We explain how to make decisions about the service and share best practices and useful tips for success.
As enterprises move to the cloud, robust connectivity is often an early consideration. AWS Direct Connect provides a more consistent network experience for accessing your AWS resources, typically with greater bandwidth and reduced network costs. This session dives deep into the features of AWS Direct Connect and VPNs. We discuss deployment architectures and the process from start to finish. We show you how to configure public and private virtual interfaces, configure routers, use VPN backup, and provide secure communication between sites by using the AWS VPN CloudHub.
This session focuses on best practices for connectivity between many virtual private clouds (VPCs), including the Transit VPC. We review how the Transit VPC works and use cases for centralization, network security, and connectivity. We include best practices for multiple accounts, multiple regions, and designing for scale. In addition, we review some of the variants and extensions to the Transit VPC, including how to customize your own.
In this session, we walk through the Amazon VPC network and describe the problems we were solving when we created it, and the features we've been adding as we scale it. We cover how these problems and features are traditionally solved, and why those solutions are not scalable, inexpensive, or secure enough for AWS. Finally, we provide an overview of the solution that we've implemented. We discuss some of the unique mechanisms that we use to ensure customer isolation, get packets into and out of the network, and support new features such as NAT and VPC endpoints.
Retail & CPG
A challenge faced by many retailers is how to form an integrated single view of the customer across multiple retail channels to help you better understand purchasing behavior and patterns. In this session, we present a solution that merges web analytics data with customer purchase history, based on Amazon API Gateway, AWS Lambda, and Amazon S3. Learn how to track customer purchase behaviors across different selling channels to better predict future needs and make relevant, intelligent recommendations.
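The single-view idea can be sketched as a plain join keyed on a customer identifier. This is an illustrative sketch only; the field names and the Lambda/S3 plumbing around it are assumptions, not the session's actual schema.

```python
def merge_customer_views(web_events, purchases):
    """Join clickstream events with purchase history by customer_id to form
    a single view per customer. Field names are illustrative, not a real schema."""
    view = {}
    for e in web_events:
        entry = view.setdefault(e["customer_id"], {"events": [], "orders": []})
        entry["events"].append(e["page"])
    for p in purchases:
        entry = view.setdefault(p["customer_id"], {"events": [], "orders": []})
        entry["orders"].append(p["sku"])
    return view

web = [{"customer_id": "c1", "page": "/shoes"},
       {"customer_id": "c1", "page": "/cart"}]
orders = [{"customer_id": "c1", "sku": "SHOE-42"}]
profile = merge_customer_views(web, orders)
# profile["c1"] now links browsing ("/shoes", "/cart") to the purchase "SHOE-42"
```

In the architecture the session describes, a Lambda function behind API Gateway would run logic like this over analytics events and order records staged in S3.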
Today's retail customers expect exceptional customer service and tailored solutions to their problems. Chat and voice interfaces provide retailers with new ways to interact with their customers and to provide intelligent, efficient solutions. In this session, we build and demonstrate a chatbot powered by Amazon AI that can autonomously guide a customer through the process of reporting an undelivered or defective item and quickly offer appropriate solutions. Learn how to redefine your customer service experience by tying together Amazon Lex, AWS Lambda, and Amazon DynamoDB to easily add chatbot functionality to your retail solution.
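A fulfillment function for such a bot can be sketched as follows. This is a minimal Lex (V1-style) fulfillment handler under assumed intent and slot names ("ReportIssue", "IssueType", "OrderId"); it is not the session's actual bot, and a production handler would also persist the report, for example to DynamoDB.

```python
def lambda_handler(event, context):
    """Hypothetical Lex fulfillment handler for a 'ReportIssue' intent.
    Reads slot values and returns a Close dialog action in the Lex V1
    response shape."""
    slots = event["currentIntent"]["slots"]
    issue = slots.get("IssueType", "an issue")   # e.g. "undelivered" or "defective"
    order = slots.get("OrderId", "your order")
    reply = (f"Thanks - we've logged '{issue}' for order {order} "
             "and emailed you a resolution.")
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": reply},
        }
    }

# Simulate the event Lex would send after slot elicitation completes.
event = {"currentIntent": {"name": "ReportIssue",
                           "slots": {"IssueType": "defective",
                                     "OrderId": "112-555"}}}
print(lambda_handler(event, None)["dialogAction"]["fulfillmentState"])  # Fulfilled
```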
What is the future for IoT in retail fulfillment and logistics? We discuss retail use cases from Amazon.com, which has a global network of over 150 fulfillment centers with increasing levels of automation. To create more flexible designs and increase the pool of suppliers to choose from, Amazon developed a Machine as a Service framework based on ANSI/ISA-85, allowing Amazon software systems to be vendor agnostic. This framework now extends to incorporate AWS IoT technologies like Amazon Kinesis, AWS Lambda, and AWS Greengrass. Amazon can swap machines with the same functionality without changing the interface. Amazon.com can now create a virtual fulfillment center within Amazon EC2 to test software deployments before the actual building is completed.
Today's retail customers want to set the rules on how and when they buy, receive, and return their product. But many retailers are struggling to unify their sales channels using existing legacy e-commerce software stacks. To consistently serve customers across retail channels, retailers must adopt a modern architecture that is elastic, cost effective, and based on loosely coupled application services. In this session, we dive deep into how retailers can leverage serverless architectures using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Learn how Amazon Fresh quickly responded to customer feedback on the Totes Pickup feature, developing a cost-effective and scalable self-service serverless application to deliver a 1-click experience for the customer, while providing faster insights back to the business.
Security, Compliance, & Identity
For Vanguard, managing the creation of AWS Identity and Access Management (IAM) objects is key to balancing developer velocity and compliance. In this session, you will learn how Vanguard designs IAM roles to control the blast radius of AWS resources and maintain simplicity for developers. Vanguard will also share best practices to help you manage governance and improve your visibility across your AWS resources.
Traditional solutions for using Microsoft Active Directory across on-premises and AWS Cloud Windows workloads can require complex networking or synchronizing identities across multiple systems. AWS Directory Service for Microsoft Active Directory, also known as AWS Managed AD, offers you actual Microsoft Active Directory on the AWS Cloud as a managed service. In this session, you learn how Capital One uses AWS Managed AD to provide highly available authentication and authorization services for its Windows workloads, such as Amazon RDS for SQL Server. We detail how Capital One uses AWS Lambda, Python, and PowerShell with cross-account AWS Identity and Access Management (IAM) roles to automate directory deployment across AWS accounts. We also cover best practices for integrating AWS Managed AD with your on-premises domain securely, and show you how to automate the joining of AWS resources to your managed domain.
When you use the cloud to enable speed and agility, how do you know if you did it right? We are on a mission to help builders follow industry best practices within security guardrails by creating the largest compliance-as-code repo, available to all. Compliance-as-code is the idea of translating those best practices, guardrails, policies, or standards into codified unit tests. Apply this to your AWS environment to gain insight into what can or must be improved. Learn why compliance-as-code matters to gain speed (by getting developers, architects, and security pros on the same page), how it is currently used (demo), and how to start using it or be part of building it.
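Codified unit testing can be as simple as assertions over resource descriptions. A minimal sketch, assuming an illustrative dict shape for a bucket's configuration rather than any real rule-repo format:

```python
def check_bucket_encrypted(bucket_config):
    """One compliance-as-code rule: a bucket description must declare
    default encryption with an approved algorithm. The dict shape here
    is illustrative, not a real AWS API response."""
    enc = bucket_config.get("encryption", {})
    return bool(enc.get("enabled")) and enc.get("algorithm") in ("AES256", "aws:kms")

# Codified as plain assertions, so a CI run fails the build on drift:
assert check_bucket_encrypted({"encryption": {"enabled": True,
                                              "algorithm": "aws:kms"}})
assert not check_bucket_encrypted({"encryption": {}})
assert not check_bucket_encrypted({})
```

The point of the pattern is that a policy ("all buckets encrypt at rest") becomes a test that developers, architects, and security teams can all read and run.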
To help prevent unexpected access to your AWS resources, it is critical to maintain strong identity and access policies and to track, effectively detect, and react to changes. In this session, you learn how to use AWS Identity and Access Management (IAM) to control access to AWS resources and integrate your existing authentication system with IAM. We cover how to deploy and control AWS infrastructure using code templates, including change management policies, with AWS CloudFormation. Effectively detecting and reacting to changes in posture or adverse actions also requires the ability to monitor and process events; several AWS services enable this kind of monitoring, such as AWS CloudTrail, Amazon CloudWatch Events, and the AWS service APIs. We discuss how Netflix uses a combination of these services to operationalize monitoring of its deployments at scale, and the changes made as Netflix's deployment has grown over the years.
Like many security teams, Riot has been challenged by new paradigms that came with the move to the cloud. We discuss how our security team has developed a security culture based on feedback and self-service to best thrive in the cloud. We detail how the team assessed the security gaps and challenges in our move into AWS, then describe how the team works within Riot's unique feedback culture. Walk away with a better understanding of securing projects within AWS without blocking development teams. Learn how we use the internal RFC process, the built-in features of AWS that help provide better security by default, our approach to developer education, and tools we developed, and those from the community, to provide visibility into the security posture of AWS.
Making sense of the risks of IT deployments that sit in hybrid environments and span multiple countries is a major challenge. When you add in multiple toolsets, and global compliance requirements, including GDPR, it can get overwhelming. Listen to Vonage's Chief Information Security Officer, Johan Hybinette, share his experiences tackling these challenges. Vonage is an established leader with 15 years of experience providing residential and business communications solutions in global markets. With a robust solution for end users, solutions offered by Vonage require a sophisticated, reliable technology stack—that technology is spread between on-premises and AWS Cloud environments. Johan shares lessons learned to achieve a successful and secure cloud deployment. How does GDPR impact a multinational hybrid deployment? Can security drive tool adoption among developers? What's a practical approach to maintaining flexibility and a rapid pace of innovation, while providing world-class security for your customer? Get answers to all these questions and a jumpstart on your challenges from an industry leader. Session sponsored by Trend Micro Incorporated
CTP's Robert Christiansen and Mike Kavis describe how to maximize the value of your AWS initiative. From building a Minimum Viable Cloud to establishing a cloud robust security and compliance posture, we walk through key client success stories and lessons learned. We also explore how CTP has helped Vanguard, the leading provider of investor communications and technology, take advantage of AWS to delight customers, drive new revenue streams, and transform their business. Session Sponsored by: CTP
Cloud migration in highly regulated industries can stall without a solid understanding of how (and when) to address regulatory expectations. This session provides a guide to explaining the aspects of AWS services that are most frequently the subject of an internal or regulator audit. Because regulatory agencies and internal auditors might not share a common understanding of the cloud, this session is designed to help you to help them, regardless of their level of technical fluency.
SID214: Best Security Practices in the Intelligence Community
Executives from the Intelligence community discuss cloud security best practices in a field where security is imperative to operations. Security Cloud Chief John Nicely and Deputy Chief of Cyber Integration Scott Kaplan share success stories of migrating mass data to the cloud from a security perspective. Hear how they migrated their IT portfolios while managing their organizations' unique blend of constraints, budget issues, politics, culture, and security pressures. Learn how these institutions overcame barriers to migration, and ask these panelists what actions you can take to better prepare yourself for the journey of mass migration to the cloud.
In this session, you learn how to adapt application defenses and operational responses based on your unique requirements. You also hear directly from customers about how they architected their applications on AWS to protect their applications. There are many ways to build secure, high-availability applications in the cloud. Services such as API Gateway, Amazon VPC, ALB, ELB, and Amazon EC2 are the basic building blocks that enable you to address a wide range of use cases. Best practices for defending your applications against Distributed Denial of Service (DDoS) attacks, exploitation attempts, and bad bots can vary with your choices in architecture.
SID217: NEW LAUNCH! Introduction to Managed Rules for AWS WAF
Managed Rules for AWS WAF is a new feature that allows you to purchase Managed Rules from security sellers in the AWS Marketplace. Managed Rules are proactively updated by security sellers as new threats emerge and enable you to easily protect your web applications and APIs from a wide range of Internet threats.
Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. Enabled with a few clicks in the AWS Management Console, Amazon GuardDuty can immediately begin analyzing billions of events across your AWS accounts for signs of risk. It does not require you to deploy and maintain software or security infrastructure, meaning it can be enabled quickly with no risk of negatively impacting existing application workloads.
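Findings like these carry a numeric severity (GuardDuty scores range roughly 0.1 to 8.9), so a first triage step can be sketched as a simple severity router. The thresholds and action names below are illustrative policy choices, not part of the service.

```python
def triage_finding(finding):
    """Route a GuardDuty-style finding by severity. GuardDuty treats
    roughly 0.1-3.9 as low, 4.0-6.9 as medium, and 7.0-8.9 as high;
    the actions here are hypothetical team policy."""
    sev = finding.get("severity", 0.0)
    if sev >= 7.0:
        return "page-oncall"    # high: possible account compromise
    if sev >= 4.0:
        return "open-ticket"    # medium: investigate during business hours
    return "log-only"           # low: record for trend analysis

finding = {"type": "UnauthorizedAccess:IAMUser/ConsoleLogin", "severity": 8.0}
print(triage_finding(finding))  # page-oncall
```

In practice, such a function would run in a Lambda subscribed to GuardDuty findings delivered through CloudWatch Events.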
Operating a security practice on AWS brings many new challenges that haven't been faced in data center environments. The dynamic nature of infrastructure, the relationship between development team members and their applications, and the architecture paradigms have all changed as a result of building software on top of AWS. In this session, learn how your security team can leverage AWS Lambda as a tool to monitor, audit, and enforce your security policies within an AWS environment.
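As one hedged example of such enforcement, a Lambda function subscribed to CloudTrail-sourced CloudWatch Events could flag security group changes that open SSH to the world. The detection logic below follows the requestParameters layout CloudTrail records for AuthorizeSecurityGroupIngress; verify the exact nesting against your own events before relying on it, and pair the detection with a revoke call if your policy calls for auto-remediation.

```python
def find_world_open_ssh(event_detail):
    """Return True if a CloudTrail AuthorizeSecurityGroupIngress record
    opens port 22 to 0.0.0.0/0. Event nesting mirrors CloudTrail's
    requestParameters layout (an assumption worth verifying)."""
    items = (event_detail.get("requestParameters", {})
                         .get("ipPermissions", {})
                         .get("items", []))
    for perm in items:
        fp, tp = perm.get("fromPort"), perm.get("toPort")
        if fp is None or tp is None or not (fp <= 22 <= tp):
            continue
        for rng in perm.get("ipRanges", {}).get("items", []):
            if rng.get("cidrIp") == "0.0.0.0/0":
                return True
    return False

evt = {"eventName": "AuthorizeSecurityGroupIngress",
       "requestParameters": {"ipPermissions": {"items": [
           {"ipProtocol": "tcp", "fromPort": 22, "toPort": 22,
            "ipRanges": {"items": [{"cidrIp": "0.0.0.0/0"}]}}]}}}
print(find_world_open_ssh(evt))  # True
```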
Adversaries automate. Who says the good guys can't as well? By combining AWS offerings like AWS CloudTrail, Amazon CloudWatch, AWS Config, and AWS Lambda with the power of Amazon Alexa, you can do more security tasks faster, with fewer resources. Force multiplying your security team is all about automation! Last year, we showed off penetration testing at the push of an (AWS IoT) button, and surprise-previewed how to ask Alexa to run Inspector as-needed. Want to see other ways to ask Alexa to be your cloud security sidekick? We have crazy new demos at the ready to show security geeks how to sling security automation solutions for their AWS environments (and impress and help your boss, too).
Every journey to the AWS Cloud is unique. Some customers are migrating existing applications, while others are building new applications using cloud-native services. Along each of these journeys, identity and access management helps customers protect their applications and resources. In this session, you will learn how AWS' Identity Services provide you a secure, flexible, and easy solution for managing identities and access on the AWS Cloud. With AWS' Identity Services, you do not have to adapt to AWS. Instead, you have a choice of services designed to meet you anywhere along your journey to the AWS Cloud.
This talk dives deep into how to build end-to-end security capabilities using AWS. Our goal is to orchestrate AWS security services with other AWS building blocks to deliver enhanced security. We cover using Amazon CloudWatch Events as a queueing mechanism for processing security events, using Amazon DynamoDB as a stateful layer that tailors responses to events and handles other ancillary functions, using DynamoDB as an attack signature engine, and using analytics with AWS Lambda to derive tailored signatures for detection. Log sources include available AWS sources as well as more traditional logs, such as syslog. The talk aims to keep slides to a minimum and demo live as much as possible. The demos come together to demonstrate an end-to-end architecture for SecOps. You'll get a toolkit consisting of code and templates so you can hit the ground running.
SID305: How CrowdStrike Built a Real-time Security Monitoring Service on AWS
The CrowdStrike motto is “We Stop Breaches.” To do that, it needed to build a real-time security monitoring service to detect threats. Join this session to learn how CrowdStrike uses Amazon EC2 and Amazon EBS to help its customers identify vulnerabilities before they become large-scale problems.
As Chick-fil-A became a cloud-first organization, their security team didn't want to become the bottleneck for agility. But the security team also wanted to raise the bar for their security posture on AWS. Robert Davis, security architect at Chick-fil-A, provides an overview about how he and his team recognized that writing code was the best way for their security policies to scale across the many AWS accounts that Chick-fil-A operates. The use of DevSecOps within Chick-fil-A led to the creation of a set of account bootstrapping tools, auditing capabilities, and event-based policy enforcement. This session goes over these tools and how they were built on AWS.
What do you do when leadership embraces what was called "shadow IT" as the new path forward? How do you onboard new accounts while simultaneously pushing policy to secure all existing accounts? This session walks through Cisco's journey consolidating over 700 existing accounts in the Cisco organization, while building and applying Cisco's new cloud policies. Learn valuable tips and hear about mechanisms used to automate the process. Gain insight into how Cisco integrates AWS's security and monitoring with Cisco's enterprise tools, Cisco SSO integration and continuous security auditability on Cisco's AWS account, and Cisco's CI/CD pipelines with AWS to ensure secure development.
In cloud migrations, the cloud's elastic nature is often touted as a critical capability in delivering on key business initiatives. However, you must account for it in your security and compliance plans or face some real challenges. Always counting on a virtual host to be running, for example, causes issues when that host is rebooted or retired. Managing security and compliance in the cloud is continuous, requiring forethought and automation. Learn how a leading, next generation managed cloud provider uses automation and cloud expertise to manage security and compliance at scale in an ever-changing environment. Through code examples and live demos, we show tools and automation to provide continuous compliance of your cloud infrastructure. Session sponsored by 2nd Watch
Are you interested in learning how to control access to your AWS resources? Have you wondered how to best scope permissions to achieve least-privilege permissions access control? If your answer is "yes", this session is for you. We look at the AWS Identity and Access Management (IAM) policy language, starting with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. We explore policy variables, conditions, and tools to help you author least privilege policies. We cover common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type.
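The S3 use case mentioned above can be sketched as a two-statement least-privilege policy: ListBucket on the bucket ARN, GetObject on its objects. The bucket name is hypothetical; the actions and resource ARNs follow the IAM policy language.

```python
import json

def read_only_bucket_policy(bucket):
    """Compose a least-privilege IAM policy document granting read-only
    access to a single S3 bucket. The bucket name is hypothetical."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow",
             "Action": ["s3:ListBucket"],          # list keys in the bucket
             "Resource": f"arn:aws:s3:::{bucket}"},
            {"Effect": "Allow",
             "Action": ["s3:GetObject"],           # read individual objects
             "Resource": f"arn:aws:s3:::{bucket}/*"},
        ],
    }

policy = read_only_bucket_policy("example-reports")
print(json.dumps(policy, indent=2))
```

Note the split between bucket-level and object-level resources: ListBucket applies to the bucket ARN, while GetObject applies to object ARNs under it; conflating the two is a common reason policies silently fail.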
In this session, you learn pragmatic steps to integrate security controls into DevOps processes in your AWS environment at scale. Cyber security expert and founder of Alert Logic Misha Govshteyn shares insights from high performing teams who are embracing the reality that an agile security program can enable faster and more secure workload deployments. Joining Misha is Joey Peloquin, Director of Cloud Security Operations at Citrix, who discusses Citrix's DevOps experiences and how they manage their cyber security posture within the AWS Cloud. Session sponsored by Alert Logic
AWS provides a killer feature for security operations teams: Access Advisor. In this session, we discuss how Access Advisor shows the services to which an IAM policy grants access and provides a timestamp for the last time that the role authenticated against that service. At Netflix, we use this valuable data to automatically remove permissions that are no longer used. By continually removing excess permissions, we can achieve a balance of empowering developers and maintaining a best-practice, secure environment.
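The pruning logic can be sketched as a pure function over Access Advisor style data: keep a service if the role authenticated against it recently, otherwise mark it for removal. The record shape here merely mirrors the idea of a last-authenticated timestamp per service; it is an assumption for illustration, not Netflix's implementation.

```python
from datetime import datetime, timedelta

def unused_services(last_accessed, now, max_idle_days=90):
    """Given {service: last-authenticated datetime or None}, return the
    services a role has not used within the idle window - candidates
    for permission removal."""
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(svc for svc, ts in last_accessed.items()
                  if ts is None or ts < cutoff)

now = datetime(2017, 11, 27)
records = {"s3": datetime(2017, 11, 20),
           "dynamodb": datetime(2017, 1, 5),
           "sqs": None}  # never authenticated
print(unused_services(records, now))  # ['dynamodb', 'sqs']
```

Running a sweep like this on a schedule, and rewriting the role's policy to drop the returned services, is the continual-removal loop the abstract describes.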
Infrastructure-as-Code (IaC) has emerged as an essential element of organizational DevOps practices. Tools such as AWS CloudFormation and Terraform allow software-defined infrastructure to be deployed quickly and repeatably to AWS. But the agility of CI/CD pipelines also creates new challenges in infrastructure security hardening. How do you ensure that your CloudFormation templates meet your organization's security, compliance, and governance needs before you deploy them? How do you deploy infrastructure securely to production environments, and monitor the security posture on a continuous basis? And how do you do this repeatedly without hitting a speed bump? This session provides a foundation for how to bring proven software hardening practices into the world of infrastructure deployment. We discuss how to build security and compliance tests for infrastructure analogous to unit tests for application code, and showcase how security, compliance and governance testing fit in a modern CI/CD pipeline. Session Sponsored by: Dome9
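A security test for infrastructure code can look just like a unit test for application code. As a hedged sketch, this check walks a CloudFormation template dict (Resources/Properties layout) and flags security group ingress rules open to 0.0.0.0/0; a CI stage would run checks like this before any deploy.

```python
def open_to_world(template):
    """Scan a CloudFormation template dict for AWS::EC2::SecurityGroup
    resources whose ingress rules allow 0.0.0.0/0. Returns a list of
    (resource name, from-port) findings."""
    findings = []
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0":
                findings.append((name, rule.get("FromPort")))
    return findings

template = {"Resources": {"WebSG": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {"SecurityGroupIngress": [
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "CidrIp": "0.0.0.0/0"}]}}}}
assert open_to_world(template) == [("WebSG", 22)]
```

Whether a 0.0.0.0/0 rule is a failure (SSH) or acceptable (a public HTTPS listener) is an organizational policy decision; the test encodes whichever stance your governance requires.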
A surprising trend is starting to emerge among organizations who are progressing through the cloud maturity lifecycle: major improvements in revenue growth, customer satisfaction, and mission success are being directly attributed to improvements in security and compliance. At one time thought of as speed bumps in the path to deployment, security and compliance are now seen as critical ingredients that help organizations differentiate their offerings in the market, win more deals, and achieve mission-critical goals faster. This session explores how organizations like Jive Software and the National Geospatial-Intelligence Agency use the Evident Security Platform, AWS, and AWS Quick Starts to automate security and compliance processes in their organization to accomplish more, do it faster, and deliver better results. Session sponsored by Evident.io
In this session, we walk you through a hypothetical incident response managed on AWS. Learn how to apply existing best practices as well as how to leverage the unique security visibility, control, and automation that AWS provides. We cover how to set up your AWS environment to prevent a security event and how to build a cloud-specific incident response plan so that your organization is prepared before a security event occurs. This session also covers specific environment recovery steps available on AWS.
Fighting fraud means countering human actors that quickly adapt to whatever you do to stop them. In this presentation, we discuss the key components of a fraud prevention program in the cloud. Additionally, we provide techniques for detecting known and unknown fraud activity and explore different strategies for effectively preventing detected patterns. Finally, we discuss lessons learned from our own prevention activities as well as the best practices that you can apply to manage risk.
In this session, we review best practices for managing multiple AWS accounts using AWS Organizations. We cover how to think about the master account and your account strategy, as well as how to roll out changes. You learn how Capital One applies these best practices to manage its AWS accounts, which number over 160, and PCI workloads.
AWS distinguished engineer Eric Brandwine speaks with hundreds of customers each year, and noticed one question coming up more than any other, "How does AWS operationalize its own security?" In this session, Eric details both strategic and tactical considerations, along with an insider's look at AWS tooling and processes.
If left unmitigated, Distributed Denial of Service (DDoS) attacks have the potential to harm application availability or impair application performance. DDoS attacks can also act as a smoke screen for intrusion attempts or as a harbinger for attacks against non-cloud infrastructure. Accordingly, it's crucial that developers architect for DDoS resiliency and maintain robust operational capabilities that allow for rapid detection and engagement during high-severity events. In this session, you learn how to build a DDoS-resilient application and how to use services like AWS Shield and Amazon CloudWatch to defend against DDoS attacks and automate response to attacks in progress.
In this session, Edmunds discusses how they create workflows to manage their regulated workloads with Amazon Macie, a newly released security and compliance management service that leverages machine learning to classify your sensitive data and business-critical information. Amazon Macie uses recurrent neural networks (RNNs) to identify and alert on potential misuse of intellectual property. The session also dives deep into machine learning within the security ecosystem.
Steve Schmidt, chief information security officer of AWS, addresses the current state of security in the cloud, with a particular focus on feature updates, the AWS internal "secret sauce," and what's on the horizon in terms of security, identity, and compliance tooling.
In less than 12 months, Zocdoc became a cloud-first organization, diversifying their tech stack and liberating data to help drive rapid product innovation. Brian Lozada, CISO at Zocdoc, and Zhen Wang, Director of Engineering, provide an overview on how their teams recognized that infrastructure as code was the most effective approach for their security policies to scale across their AWS infrastructure. They leveraged tools such as AWS CloudFormation, hardened AMIs, and hardened containers. The use of DevSecOps within Zocdoc has enhanced data protection with the use of AWS services such as AWS KMS and AWS CloudHSM and auditing capabilities, and event-based policy enforcement with Amazon Elasticsearch Service and Amazon CloudWatch, all built on top of AWS.
Macquarie, a global provider of financial services, identified early on that it would require strong partnership between its business, technology and risk teams to enable the rapid adoption of AWS cloud technologies. As a result, Macquarie built a Cloud Governance Platform to enable its risk functions to move as quickly as its development teams. This platform has been the backbone of Macquarie's adoption of AWS over the past two years and has enabled Macquarie to accelerate its use of cloud technologies for the benefit of clients across multiple global markets. This talk will outline the strategy that Macquarie embarked on, describe the platform they built, and provide examples for other organizations who are on a similar journey.
AWS Encryption Services provide an easy and cost-effective way to protect your data in AWS. In this session, you learn about leveraging the latest encryption management features to minimize risk for your data.
AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and manage the encryption keys used to encrypt your data. In this session, we will dive deep into best practices learned from implementing AWS KMS with AWS's largest enterprise clients. We will review the different capabilities described in the AWS Cloud Adoption Framework (CAF) Security Perspective and how to implement these recommendations using AWS KMS. In addition to sharing recommendations, we will also provide examples that will help you protect sensitive information on the AWS Cloud.
Whether it is per business unit or per application, many AWS customers use multiple accounts to meet their infrastructure isolation, separation of duties, and billing requirements. In this session, we discuss considerations, limitations, and security patterns when building out a multi-account strategy. We explore topics such as identity federation, cross-account roles, consolidated logging, and account governance. Thomson Reuters shares their journey and their approach to a multi-account strategy. At the end of the session, we present an enterprise-ready, multi-account architecture that you can start leveraging today. We encourage you to attend the full multi-account track: SID331: Architecting Security and Governance Across a Multi-Account Strategy (Session); SID335: Implementing Security and Governance Across a Multi-Account Strategy (Chalk Talk); ENT324: Automating and Auditing Cloud Governance and Compliance in Multi-Account Environments (Session); SID311: Designing Security and Governance Across a Multi-Account Strategy (Workshop); and SID308: Multi-Account Strategies (Chalk Talk).
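As a rough illustration of the cross-account roles this session covers, the sketch below builds the IAM trust policy a member account might attach to a role so that a central security account can assume it. The account ID and external ID are placeholders, not real values:

```python
import json

# Hypothetical central security account -- a placeholder ID.
SECURITY_ACCOUNT = "111111111111"

# Trust policy allowing principals in the security account to assume
# this role in a member account (the cross-account role pattern).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SECURITY_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
        # External ID adds a shared secret to the assume-role handshake.
        "Condition": {"StringEquals": {"sts:ExternalId": "example-audit-id"}},
    }],
}

print(json.dumps(trust_policy, indent=2))
```

In practice you would pass this document as the `AssumeRolePolicyDocument` when creating the role, and scope the `Principal` down to a specific role rather than the account root.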
Learn how to set up an end-user directory, secure sign-up and sign-in, manage user profiles, authenticate and authorize your APIs, federate from enterprise and social identity providers, and use OAuth to integrate with your app—all without any server setup or code. With clear blueprints, we show you how to leverage Amazon Cognito to administer and secure your end users and enable identity for the applied patterns of mobile, web, and enterprise apps.
SID333: Security at Scale: How Autodesk Leverages Native AWS Technologies to Provide Uniformly Scalable Security Capabilities
Learn how Autodesk implemented security at scale, moved to native AWS security products and features, as well as attained SOC certification.
AWS offers customers multiple solutions for federating identities on the AWS Cloud. In this session, we will embark on a tour of these solutions and the use cases they support. Along the way, we will dive deep with demonstrations and best practices to help you be successful managing identities on the AWS Cloud. We will cover how and when to use Security Assertion Markup Language 2.0 (SAML), OpenID Connect (OIDC), and other AWS native federation mechanisms. You will learn how these solutions enable federated access to the AWS Management Console, APIs, and CLI, AWS Infrastructure and Managed Services, your web and mobile applications running on the AWS Cloud, and much more.
In some organizations, the theme of “can't we all just get along” accurately describes the relationship between DevOps and network security. DevOps operates at a rapid and dynamic pace, using the cloud to create and deploy. Security teams exercise industry best practices of policy change control to eliminate potential security holes. Inevitably, deployment challenges arise. The ideal solution is one where security becomes part of the DevOps fabric. In this session, Ivan Bojer, automation specialist, and Jaime Franklin, cloud architect, both of Palo Alto Networks, discuss and demonstrate how AWS customers can automate the deployment of the VM-Series next generation firewall to protect DevOps environments on AWS. The topics in this session are based on current customer examples. They include: “touchless” deployment of a fully configured firewall utilizing automation tools, such as AWS CloudFormation templates, Terraform, and Ansible; consuming AWS tags to execute commitless policy updates; using Amazon CloudWatch and Elastic Load Balancing to deliver scalability and resiliency. This session wraps up with a discussion of sample templates and scripts to get started and a video demonstration of a fully automated VM-Series deployment. Session sponsored by Palo Alto Networks
The session will focus on the newly-launched security tool Hammer, which Dow Jones developed after identifying a security vulnerability internally. Users will learn more about Hammer and how it solves certain security configuration issues in the AWS cloud. The team behind the development of Hammer will showcase real-world examples of the tool identifying, analyzing and remediating issues, all as part of Dow Jones' commitment to helping everyone in the community as they make the jump to the cloud.
This presentation will include a deep dive into the code behind multiple security automation and remediation functions. This session will consider potential use cases, as well as feature a demonstration of a proposed script, and then walk through the code set to explain the various challenges and solutions of the intended script. All examples of code will be previously unreleased and will feature integration with services such as Trusted Advisor and Macie. All code will be released as OSS after re:Invent.
Hundreds of microservices, millions of AWS Lambda invocations, and dozens of global regions—the way we design, build, and operate cloud infrastructure and applications is increasingly distributed and composed of ephemeral components. From experience, we know a key to success with these systems is the ability to understand them using data. While there is considerable knowledge around how to use metrics and logs to analyze and troubleshoot traditional applications and infrastructure, emerging technology like serverless functions and orchestrated containers require a new observability approach. This is especially true when trying to understand the relationship between new services, like an IoT or mobile backend, and legacy systems. Session sponsored by New Relic
In this session, attain knowledge about how AWS can help create a differentiated customer experience for your end users and employees, at scale and at the speed of innovation to meet your customer's expectations. Hear from a panel of enterprise IT executives, including Glenn Weinstein, CIO of Appirio, who are innovating and driving real transformations of their business via the AWS Cloud. They are leveraging AWS offerings to build, migrate and run their applications on AWS, including AWS Lex, AWS Lambda, Amazon Kinesis, and Amazon Redshift. Session sponsored by Wipro
In this session, we will discuss how the AWS Serverless Application Repository makes it easy to discover and deploy serverless applications published by fellow developers and companies like Datadog, Here, Splunk, and many others. We will cover how you can use the repository to find applications for a variety of use cases and then deploy them to your AWS account. In addition, we will discuss how you can publish your own applications to the repository. You will also hear from two contributors, Datadog and Here, who will describe their approach to building the serverless applications that they have published to the Serverless Application Repository.
As a fully managed database service, Amazon DynamoDB is a natural fit for serverless architectures. In this session, we dive deep into why and how to use DynamoDB in serverless applications, followed by a real-world use case from CapitalOne. First, we dive into the relevant DynamoDB features, and how you can use it effectively with AWS Lambda in solutions ranging from web applications to real-time data processing. We show how some of the new features in DynamoDB, such as Auto Scaling and Time to Live (TTL), are particularly useful in serverless architectures, and distill the best practices to help you create effective serverless applications. In the second part, we talk about how CapitalOne migrated billions of transactions to a completely serverless architecture and built a scalable, resilient and fast transaction platform by leveraging DynamoDB, AWS Lambda and other services within the serverless ecosystem.
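As a small sketch of the Time to Live (TTL) feature mentioned above: TTL deletes items after the epoch timestamp stored in a designated attribute has passed. The attribute name below (`expires_at`) is an example; you choose the name when enabling TTL on a table:

```python
import time

def with_ttl(item, days=30, now=None):
    """Attach a DynamoDB TTL attribute (epoch seconds) to an item.

    DynamoDB's TTL feature expires items once the timestamp in the
    designated attribute is in the past. 'expires_at' is a
    hypothetical attribute name for illustration.
    """
    now = now if now is not None else int(time.time())
    item = dict(item)
    item["expires_at"] = now + days * 86400  # 86400 seconds per day
    return item

# Example: a session record that should expire seven days from 'now'.
record = with_ttl({"session_id": "abc123"}, days=7, now=1_500_000_000)
print(record["expires_at"])  # 1500604800
```

The returned item could then be written as-is with a DynamoDB `put_item` call; the expiry is enforced server-side, which suits serverless designs where no process is around to clean up stale data.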
Building and deploying serverless applications introduces new challenges for developers whose development workflows are optimized for traditional VM-based applications. In this session, we discuss a method for automating the deployment of serverless applications running on AWS Lambda. We first cover how you can model and express serverless applications using the open-source AWS Serverless Application Model (AWS SAM). Then, we discuss how you can use CI/CD tooling from AWS CodePipeline and AWS CodeBuild, and how to bootstrap the entire toolset using AWS CodeStar. We will also cover best practices to embed in your deployment workflow specific to serverless applications. You will also hear from iRobot about its approach to serverless deployment. iRobot will share how it achieves coordinated deployments of microservices, maintains long-lived and/or separately-managed resources (like databases), and red/black deployments.
How do you monitor and troubleshoot an application made up of many ephemeral, stateless functions? How do you debug a distributed application in production? In this talk, we walk you through best practices, tools, and conventions using common troubleshooting scenarios. We'll discuss how you can use AWS services to address these scenarios, such as using Amazon CloudWatch for alarms and using AWS X-Ray to detect cross-service calls. You will also learn how Financial Engines leverages AWS X-Ray to debug, monitor, and analyze latency data for its serverless applications. Financial Engines will also share some best practices for debugging and reporting.
Have a lot of real-time data piling up? Need to analyze it, transform it, and store it somewhere else real quick? What if there were an easier way to perform streaming data processing, with less setup, instant scaling, and no servers to provision and manage? With serverless computing, you can build applications to meet your real-time needs for everything from IoT data to operational logs without needing to spin up servers or install software. Come learn how to leverage AWS Lambda with Amazon Kinesis, Kinesis Firehose, and Kinesis Analytics to architect highly scalable, high-throughput pipelines that can cover all your real-time processing needs. We will cover different example architectures that handle use cases like in-line processing or data manipulation, as well as discuss the advantages of using an AWS managed stream.
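A minimal sketch of the Lambda-plus-Kinesis pattern described above, runnable locally against a hand-built sample event (the JSON payload shape, with a `value` field, is an assumption for illustration):

```python
import base64
import json

def handler(event, context=None):
    """Sketch of a Lambda handler for a Kinesis event source.

    Kinesis delivers record payloads base64-encoded inside the event;
    here each payload is assumed to be a JSON document with a 'value'
    field that we aggregate.
    """
    total = 0
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        total += payload["value"]
    return {"records": len(event["Records"]), "sum": total}

# A hand-built sample event in the shape Lambda receives from Kinesis.
sample = {"Records": [
    {"kinesis": {"data": base64.b64encode(
        json.dumps({"value": v}).encode()).decode()}}
    for v in (1, 2, 3)
]}
print(handler(sample))  # {'records': 3, 'sum': 6}
```

Because the event is just a dictionary, the same handler can be unit tested locally and deployed unchanged with a Kinesis event source mapping.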
Join us to learn what's new in serverless computing and AWS Lambda. Dr. Tim Wagner, General Manager of AWS Lambda and Amazon API Gateway, will share the latest developments in serverless computing and how companies are benefiting from serverless applications. You'll learn about the latest feature releases from AWS Lambda, Amazon API Gateway, and more. You will also hear from FICO about how it is using serverless computing for its predictive analytics and data science platform.
AWS Step Functions makes it easy to coordinate AWS Lambda functions, run business workflows, and automate operations using state machines. The product has been live in the field for a year now, and it's time to learn from what people are doing with it. In this session, we'll present a series of innovative, high-impact, and just plain crazy applications of state machines from all sorts of customers. Guest-star Coca-Cola will show how they used Step Functions to support vending loyalty programs and product nutrition syndication. Managing application state is a central problem of building the serverless apps of the future; learn how Step Functions does it simply and scalably. Warning: there will be code!
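For a taste of the code the session promises, here is a minimal Amazon States Language definition with two sequential Task states; the Lambda ARNs and state names are placeholders:

```python
import json

# Minimal Amazon States Language definition: two Task states run in
# sequence. The function ARNs are hypothetical placeholders.
state_machine = {
    "Comment": "Hypothetical two-step workflow",
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "Fulfill",
        },
        "Fulfill": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fulfill",
            "End": True,
        },
    },
}

# Step Functions accepts the definition as a JSON string.
definition = json.dumps(state_machine)
print(definition)
```

In a real deployment this string would be passed to the Step Functions `CreateStateMachine` API; retries, catch clauses, and choice states would be layered onto the same structure.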
In this session, you will learn how to deploy, monitor, and manage your serverless APIs in production. We will deep dive into advanced capabilities of API Gateway that enable customers to build large-scale data-ingesting applications and dynamic websites. We will show you how to set up alarms and analyze logs for Amazon API Gateway and AWS Lambda using Amazon CloudWatch, and automate common maintenance and management tasks for your services. In addition, we will review the latest features that provide customers more control over their integrations and more insight into API usage.
You need a new approach to security for serverless applications. Classic approaches just don't make sense, because tools and process can only take you so far. You need a fresh look at what security means in these environments. Serverless applications let you focus on solving the problem at hand. Gone are most of the worries of traditional solutions. No more support code. No more building out infrastructure to deliver your application. This means you have to do less and get more in return. Classic operations fall by the wayside and you can scale your team in unprecedented ways. But what does this mean for security? No matter the design pattern, you're always responsible for your data, even if you're not running the underlying infrastructure. How do you make sure your data is safe and secure if you can't apply the usual set of security controls? In this session, we explore how serverless designs impact security. We look at how the right approach can modernize your security practice, streamline ops, and reduce your workload. This session introduces a step-by-step security process for serverless applications, using services like AWS WAF, IAM, Amazon CloudWatch, and others to build stronger applications. Session sponsored by Trend Micro Incorporated
AWS enables companies to build innovative cloud applications combining technologies like Alexa, AWS IoT, and AWS Lambda with enterprise-scale, microservice backends. After these applications move into production, there are teams responsible for monitoring all components and providing insights needed to optimize the customer experience. In this session, we share an easy-to-apply framework to build all components successfully to get the answers needed to run and improve every application, no matter how complicated. First, we lay the foundation with powerful tools in the AWS ecosystem like Amazon CloudWatch, AWS CloudTrail, and AWS X-Ray. Then, we complement these insights with approaches for monitoring frontend web and mobile performance and behavior, eventually extending into IoT devices. Finally, we show how to derive actionable insights from all the gathered data and integrate it into enterprise-grade monitoring platforms. Session sponsored by Dynatrace
When designing microservices, there are a number of things to think about. Just for starters: the bounds of their functionality, how they communicate with their dependencies, and how they provide an interface for their own consumers. Serverless technologies such as AWS Lambda change paradigms around code structure, usage of libraries, and how you deploy and manage your applications. In this session, we show you how by combining microservices and serverless technologies, you can achieve the ultimate flexibility and agility that microservices aim for, while providing business value in how serverless greatly reduces operational overhead and cost. In addition, National Geographic will share how it built its NG1 platform using a serverless, microservices architecture. The NG1 platform provides National Geographic consumers with content personalized to their preferences and behaviors in an intuitive, easy-to-use way on smartphones.
Serverless applications can be composed of multiple AWS resources, such as AWS Lambda functions, Amazon API Gateway APIs, Amazon DynamoDB tables, and Amazon S3 buckets. When building a serverless application, what is the most straightforward way to group all your resources into one serverless application? Once you define your serverless application, how quickly can you develop, test, and iterate on your local machine, before deploying to AWS? In this session, learn how to define serverless applications with the AWS Serverless Application Model (AWS SAM), and how to use the AWS SAM Local CLI tool to develop and test locally, before deploying to AWS.
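As a hedged sketch of what a SAM template looks like, here is a minimal one in its JSON form (CloudFormation accepts JSON as well as YAML); the resource name, handler, runtime, and paths are illustrative placeholders:

```python
import json

# Minimal AWS SAM template: one function fronted by an API event.
# Handler, CodeUri, and path values are hypothetical examples.
sam_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",  # marks this as a SAM template
    "Resources": {
        "HelloFunction": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",
                "Runtime": "python3.6",
                "CodeUri": "./src",
                "Events": {
                    "Api": {
                        "Type": "Api",
                        "Properties": {"Path": "/hello", "Method": "get"},
                    }
                },
            },
        }
    },
}

print(json.dumps(sam_template, indent=2))
```

The `Transform` line is what tells CloudFormation to expand the terse `AWS::Serverless::*` resources into the underlying Lambda, API Gateway, and IAM resources; SAM Local works against the same file for local testing.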
AWS Lambda enables you to run code without provisioning or managing servers. Today, you can write your Lambda functions once and execute them everywhere your end viewers are present with AWS Lambda@Edge. This session walks through multiple examples of web applications that use the serverless programming model for authentication, customization, and security to address the question of how to design and deploy intelligent web applications with AWS Lambda@Edge and Amazon CloudFront. The startup DataDome will also share its experience with Lambda@Edge and CloudFront, and how it simplified the onboarding process for its customers. Deployed globally on CloudFront PoP locations, their bot protection service can now be activated in one click through the AWS console.
Serverless computing already provides high availability and fault tolerance for your application by default. However, you can build serverless applications that are deployed across multiple regions in order to further increase your availability and fault tolerance. In this session, we show you how to architect a multi-region serverless application with Amazon API Gateway and AWS Lambda that can route end users to the appropriate region to achieve optimal latency or availability. Learn about the different options for running an active/active versus an active/passive multi-region setup, and the setup for failing over between regions.
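One way to picture the active/passive option described above is client-side regional failover. This runnable sketch injects a fake invoker so it works without AWS; in a real setup the callable would issue HTTPS requests to regional API Gateway endpoints, often behind Route 53 health checks:

```python
def call_with_failover(regions, invoke):
    """Try each regional endpoint in order; return the first success.

    'invoke' is any callable taking a region name. It is injected here
    so the sketch runs locally; in practice it would call a regional
    API Gateway endpoint and raise on connection errors or 5xx codes.
    """
    last_err = None
    for region in regions:
        try:
            return region, invoke(region)
        except Exception as err:
            last_err = err  # remember failure, fall through to next region
    raise last_err

# Simulate the primary region being unavailable.
def fake_invoke(region):
    if region == "us-east-1":
        raise ConnectionError("primary unavailable")
    return {"ok": True}

print(call_with_failover(["us-east-1", "us-west-2"], fake_invoke))
# ('us-west-2', {'ok': True})
```

The active/active variant replaces the ordered list with latency-based routing, so that each end user is sent to whichever healthy region answers fastest.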
Learn how Verizon's Revvel team built a world-class video transcoding pipeline on AWS. This session shows how Revvel migrated from transcoding video on EC2 instances to a serverless pipeline using AWS Lambda and Amazon S3. You will gain insights on how they were able to achieve massive parallelization that gives Revvel the ability to transcode an entire movie or TV series into multiple device formats and bitrates within minutes, while never paying for idle resources. In addition, the Revvel team will cover best practices and lessons learned in creating a custom CI/CD pipeline for Lambda functions that allows them to test code quality during an upgrade to a Lambda function. Revvel is a team within Verizon, chartered with building and innovating video solutions. In November 2016, Verizon acquired Vessel to accelerate their efforts in building a next-generation television service, and Revvel was born.
In this session, principal architect Mike Broadway describes how HomeAway built a high-throughput, scalable pipeline for manipulating, storing, and serving hundreds of image files every second with Lambda, Amazon S3, DynamoDB, and Amazon SNS. He also shares best practices and lessons learned as they scaled their mission-critical On Demand Image Service (ODIS) system into production. Lambda functions form the backbone of ODIS, which handles over 100 million photographs that are uploaded to HomeAway's vacation rental platform. HomeAway is a vacation rental marketplace with more than 2 million rentals in 190 countries and is part of Expedia.
To address rising crash fatalities, Agero built a near real-time driver behavior analysis platform that provides actionable insights to its customers on how to become safer drivers. Come learn how Agero built this responsive, scalable platform using an entirely serverless mobile backend. The backend is built on Lambda, DynamoDB, Amazon S3, Kinesis, and Amazon Redshift. Agero protects 80 million motorists in North America, almost one in three vehicles on the road today, through their software-enabled driver safety services.
AWS helps financial services institutions run risk and pricing scenario calculations against large datasets in shorter timeframes and at lower cost. In this session, we will discuss how high performance computing (HPC) and grid computing patterns in the cloud are evolving to leverage serverless architectures with AWS Lambda. Also in this session, Fannie Mae discusses how it migrated a mission-critical financial modeling application to Lambda from an on-premises grid computing infrastructure, describing its journey to develop the first serverless high performance computing (HPC) platform in its industry. Fannie Mae will also cover how Lambda has enabled the company to reliably perform quadrillions of calculations each month, at a fraction of the cost and effort.
Pacific Northwest National Laboratory's rich data sciences capability has produced novel solutions in numerous research areas including image analysis, statistical modeling, and social media (and many more!). See how PNNL software engineers utilize AWS to enable better collaboration between researchers and engineers, and to power the data processing systems required to facilitate this work, with a focus on Lambda, EC2, S3, Apache NiFi, and other technologies. Several approaches will be covered, including lessons learned.
In this session, learn how Nextdoor replaced their home-grown data pipeline based on a topology of Flume nodes with a completely serverless architecture based on Kinesis and Lambda. By making these changes, they improved both the reliability of their data and the delivery times of billions of records of data to their Amazon S3–based data lake and Amazon Redshift cluster. Nextdoor is a private social networking service for neighborhoods.
Serverless and AWS Lambda specifically enable developers to build super-scalable application components with minimal effort. You can use Amazon Kinesis and Amazon SQS to create a universal event stream to orchestrate Lambdas into much more complex applications. Now, using AWS Step Functions, we can build large distributed applications with Lambdas using visual workflows. See how Step Functions are different from Amazon SWF, how to get started with Step Functions, and how to use them to take your Lambda-based applications to the next level. We start with a few granular functions and stitch them up using Step Functions. As we build out the application, we add monitoring to ensure that changes we make actually improve things, not make them worse. Leave the session with actionable learnings for using Step Functions in your environment right away. Session sponsored by Datadog
Learn how to build powerful backends without managing servers by using MongoDB Stitch. Stitch is a backend-as-a-service that lets developers perform CRUD operations directly against their database with a REST API, declaratively specify field-level security on their data, and compose server-side logic and external services with hosted functions. We provide four live coding demonstrations of Stitch in action. First, we demonstrate querying and inserting data into Stitch by adding comment capability to a static blog. Second, we demonstrate the power of Stitch's declarative ACL rules in the context of a medical records application. Third, we show services integration using Amazon S3 and Amazon Rekognition. Finally, we put it all together with an IoT-powered two-factor door security system, demonstrating how Stitch orchestrates a complex architecture of devices, logic, and services. Session sponsored by MongoDB
Are you an experienced serverless developer who wants a handy guide to unleash the full power of serverless architectures for your production workloads? Do you have questions about whether to choose a stream or an API as your event source, or whether to have one function or many? In this talk, we discuss architectural best practices, optimizations, and handy little cheat codes to build secure, high-scale, high-performance serverless applications, using real customer scenarios to illustrate the benefits.
AWS Lambda is a great fit for many data processing tasks, for data analytics, and for machine learning inference. The Lambda team uses Lambda for its own analytics in conjunction with other AWS services. In this session, we will cover how we tie these services together to crunch the data Lambda creates and generate insights that help us better run our service. We will cover common design patterns for big data processing, how they map to Lambda and serverless, and look at some new patterns that serverless makes possible. Finally, we will look at how to leverage machine learning inference on Lambda to derive better insights from the data.
Many serverless applications need a way to manage end user identities and support sign-ups and sign-ins. Join this session to learn real-world design patterns for implementing authentication and authorization for your serverless application—such as how to integrate with social identity providers (such as Google and Facebook) and existing corporate directories. We cover how to use Amazon Cognito identity pools and user pools with API Gateway, Lambda, and IAM.
In this session, learn about all of the AWS storage solutions, and get guidance about which ones to use for different use cases. We discuss the core AWS storage services. These include Amazon Simple Storage Service (Amazon S3), Amazon Glacier, Amazon Elastic File System (Amazon EFS), and Amazon Elastic Block Store (Amazon EBS). We also discuss data transfer services such as AWS Snowball, Snowball Edge, and AWS Snowmobile, and hybrid storage solutions such as AWS Storage Gateway.
Though Office 365 is a valuable tool for companies to grow faster and collaborate more effectively, it also presents some risks that aren't always obvious to the user, like accidental deletion, corruption, or malicious intent. Attend this session to explore how you can protect your critical Exchange Online, SharePoint Online, and OneDrive for Business data from accidental deletion, corruption, or malicious intent using NetApp Cloud Control, a software-as-a-service (SaaS) solution leveraging Amazon Simple Storage Service (Amazon S3). Learn how an Ivy League university protects 15,000 Office 365 mailboxes with ease using Cloud Control. Infrastructure & Operations (I&O) leaders who are using or evaluating Office 365 can learn about investing in third-party backup and recovery tools for faster, more flexible recovery options, as well as reputation damage control after a malicious attack. This session is suitable for cloud architects, IT infrastructure architects, application administrators, backup administrators, DBAs, and storage administrators. Session sponsored by NetApp
Do you have on-premises tape backups or expensive VTL hardware? Worried about moving cases of tapes off site? Not sure about the integrity of your data on tape? Learn how to use AWS services, including AWS Storage Gateway, to replace existing traditional approaches. Using Storage Gateway and standard backup software, you can back up to Amazon S3 and Amazon Glacier or tier snapshots to AWS. This enables both long-term data retention for compliance and recovery into Amazon EC2, locally, or to another site in case of a disaster. Southern Oregon University shares how they replaced tape backups with AWS, and the lessons learned in the process.
AWS now offers simple data migration services at petabyte scale. You can easily move large volumes of data from on premises to the cloud. You can also quickly get started with the cloud as a backup target using data transfer services, such as AWS Snowball, AWS Snowball Edge, or AWS Storage Gateway. Learn about the available data migration options and which one is the right fit for your requirements. We discuss customer use cases and review the different applications they used with our data migration services to cut their IT expenditures and management time on hardware and backup solutions.
Organizations around the world are facing a “data tsunami” as next-generation sensors produce enormous volumes of earth observation data. Come learn how NASA is leveraging AWS to efficiently work with data and computing resources at a massive scale. NASA is transforming its earth science EOSDIS (Earth Observing System Data Information System) program by moving data processing and archiving to the cloud. NASA anticipates that their data archives will grow from 16 PB today to over 400 PB by 2023 and 1 Exabyte by 2030. They're moving to the cloud to scale their operations for this new paradigm.
Learn from our engineering experts how we've designed Amazon S3 and Amazon Glacier to be durable, available, and massively scalable. Hear how Sprinklr architected their environment for the ultimate in high availability for their mission-critical applications. In this session, we'll discuss AWS Region and Availability Zone architecture, storage classes, built-in and on-demand data replication, and much more.
Learn best practices for Amazon Simple Storage Service (Amazon S3) performance optimization, security, data protection, storage management, and much more. Learn how to optimize key naming to increase throughput, apply the appropriate AWS Identity and Access Management (IAM) and encryption configurations, and leverage object tagging and other features to enhance security.
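On key naming specifically, S3 performance guidance at the time recommended adding entropy to key prefixes for very high request rates, so that sequentially named objects do not concentrate on one partition. A deterministic version of that idea can be sketched as follows (the key layout is a hypothetical example):

```python
import hashlib

def prefixed_key(key, width=4):
    """Prepend a short hash prefix to spread keys across partitions.

    A stable hash of the key keeps the mapping deterministic, so the
    original key can always be reconstructed into the stored name.
    'width' controls how many hex characters of entropy to add.
    """
    prefix = hashlib.md5(key.encode()).hexdigest()[:width]
    return f"{prefix}/{key}"

# Date-ordered keys like this would otherwise share a common prefix.
print(prefixed_key("2017/11/29/photo-001.jpg"))
```

The trade-off is that listing objects by their natural (for example, date-based) prefix is no longer straightforward, which is why this pattern is usually paired with an index kept elsewhere, such as in DynamoDB.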
Learn how to build an archive in Amazon Glacier, which provides cost-effective retention and compliance options and exciting new features.
STG304: Deep Dive on Data Archiving in Amazon S3 & Amazon Glacier, with Special Guest, 20th Century Fox
Learn about ways to archive data for compliance or cost savings, and balance retrieval speed and cost to fit your specific use case. We examine concepts such as active archiving (archive storage with fast retrieval times), compliance archiving, and many more.
Surveys consistently rank backup as one of the first workloads to move to the cloud. But what does it really look like? This session gives backup managers and admins the straight story on streamlining AWS Cloud integration with existing on-premises data backup software, tape processes, virtual tape libraries, third-party snapshots, file servers, and archives. Learn how to choose the right integration with varying degrees of disruption, how to automatically migrate data for cost reductions and compliance, and how to recover individual files or many files fast. We discuss Amazon S3, Amazon Glacier, Amazon EFS, AWS Snowball, AWS Storage Gateway (both as VTL and File Gateway), and third-party partner integrations.
In this popular session, discover how Amazon EBS can take your application deployments on Amazon EC2 to the next level. Learn about Amazon EBS features and benefits, how to identify applications that are appropriate for use with Amazon EBS, best practices, and details about its performance and volume types. The target audience is storage administrators, application developers, applications owners, and anyone who wants to understand how to optimize performance for Amazon EC2 using the power of Amazon EBS.
In this session, we explore the world's first cloud-scale file system and its targeted use cases. Learn about Amazon EFS features and benefits, how to identify applications that are appropriate for use with Amazon EFS, and details about its performance and security models. The target audience is security administrators, application developers, and applications owners who operate or build file-based applications.
Today, data backup isn't enough. IT teams with a cloud data management strategy become data brokers for the business. Data helps the business improve its reputation, drive revenue, and satisfy customers. With a hybrid approach to managing data on premises and in the cloud, the business can be more agile and more responsive. Find out what your IT peers are doing with cloud data management (hint: it's more than backup). Learn how data backup, recovery, management, and e-discovery capabilities can help maximize your use of AWS. See what your peers are doing to best move, manage, and use data across on-premises storage and cloud services. In this session, you learn steps for seamless, low-risk migration to different AWS services (Amazon EC2, Amazon RDS, Amazon S3, Amazon S3 Standard-Infrequent Access, Amazon Glacier, and AWS Snowball); tactics for streamlined, enterprise-class disaster recovery; ways to save money by retiring expensive alternatives like tape storage; single-view e-discovery with dynamic data indexing across on-premises and cloud storage; and how to achieve holistic data protection across storage locations. Session sponsored by Commvault.
Enterprises of all sizes face continuing data growth and persistent requirements to back up and recover application data. The pains of recurring storage hardware purchasing, management, and failures are still acute for many IT organizations. Some also need to integrate on-premises datasets with in-cloud workloads, such as big data processing and analytics. Learn how to use AWS Storage Gateway to connect on-premises applications to AWS storage services using standard storage protocols, such as NFS, iSCSI, and VTL. Storage Gateway enables hybrid cloud storage solutions for backup and disaster recovery, file sharing, in-cloud processing, or bulk ingest for migration. We discuss use cases with real-life customer stories, and offer best practices.
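To make the file-share scenario above concrete, here is a minimal sketch of the request you might assemble for Storage Gateway's NFS file share API via boto3. The gateway ARN, IAM role, and bucket name are hypothetical placeholders, not values from the session.

```python
# Sketch: creating an NFS file share on an existing File Gateway via boto3.
# The gateway ARN, role ARN, and bucket below are hypothetical placeholders.
def build_nfs_share_request(gateway_arn, role_arn, bucket_arn, token):
    """Assemble the request for storagegateway.create_nfs_file_share()."""
    return {
        "ClientToken": token,                  # idempotency token
        "GatewayARN": gateway_arn,
        "Role": role_arn,                      # IAM role the gateway assumes
        "LocationARN": bucket_arn,             # target S3 bucket
        "DefaultStorageClass": "S3_STANDARD",  # or S3_STANDARD_IA
    }

request = build_nfs_share_request(
    "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    "arn:aws:iam::111122223333:role/StorageGatewayAccess",
    "arn:aws:s3:::example-backup-bucket",
    "token-1",
)
# With credentials configured, the call itself would be:
# import boto3
# boto3.client("storagegateway").create_nfs_file_share(**request)
```

Once the share exists, on-premises clients mount it over standard NFS while the gateway stores the files as objects in the target bucket.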
Running out of capacity on your NAS? Tired of buying and maintaining storage systems for file shares, media archives, or high-performance shared file systems? Learn how you can use AWS storage services to help eliminate the capital expense and operational complexity of on-premises file storage. We provide guidance on how to use AWS's in-cloud file storage service, Amazon EFS, as well as how to connect on-premises file workloads to data stored in Amazon S3 via AWS Storage Gateway. Hear examples from customers such as Celgene Corporation, who are using these services in hybrid and in-cloud architectures to take advantage of AWS durability, performance, and economics.
As your business grows, you gain more and more data. When managed appropriately, this data can become a strategic asset to your organization. In this session, you'll learn how to use storage management tools for end-to-end management of your storage, helping you organize, analyze, optimize, and protect your data. You'll see how S3 Analytics - Storage Class Analysis helps you set more intelligent Lifecycle Policies to reduce TCO; Object Tagging gives you more management flexibility; Cross-Region Replication provides efficient data movement; Amazon Macie helps you ensure data security; and much more. Then, Paul Fisher, Technical Fellow at Alert Logic, will demonstrate how his organization uses S3 storage management features in their infrastructure.
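As a rough illustration of the lifecycle-policy and object-tagging features mentioned above, here is a sketch of a lifecycle configuration that tiers tagged objects down to cheaper storage classes. The bucket name, tag, and transition days are illustrative assumptions, not recommendations from the session.

```python
# Sketch: a lifecycle rule of the kind Storage Class Analysis findings
# might suggest. Tag key, day counts, and bucket name are assumptions.
lifecycle_config = {
    "Rules": [{
        "ID": "tier-down-logs",
        "Status": "Enabled",
        # Scope the rule with object tagging rather than a key prefix.
        "Filter": {"Tag": {"Key": "class", "Value": "log"}},
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
            {"Days": 90, "StorageClass": "GLACIER"},      # archive tier
        ],
        "Expiration": {"Days": 365},  # delete after one year
    }]
}
# With credentials configured, apply it with:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```

Storage Class Analysis reports access patterns per prefix or tag, so the day thresholds above would normally come from those reports rather than guesswork.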
Learn how to build a data lake for analytics in Amazon S3 and Amazon Glacier. In this session, we discuss best practices for data curation, normalization, and analysis on Amazon object storage services. We examine ways to reduce or eliminate costly extract, transform, and load (ETL) processes using query-in-place technology, such as Amazon Athena and Amazon Redshift Spectrum. We also review custom analytics integration using Apache Spark, Apache Hive, Presto, and other technologies in Amazon EMR. You'll also get a chance to hear from Airbnb & Viber about their solutions for Big Data analytics using S3 as a data lake.
Amazon S3 & Amazon Glacier provide the durable, scalable, secure, and cost-effective storage you need for your data lake. But as your data lake grows, the resources needed to analyze all the data can become expensive, or queries may take longer than desired. AWS provides query-in-place services like Amazon Athena and Amazon Redshift Spectrum to help you analyze this data easily and more cost-effectively than ever before. In this session, we talk about how AWS query-in-place services and other tools work with Amazon S3 & Amazon Glacier, and the optimizations you can use to analyze and process this data cheaply and effectively.
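To give a flavor of query-in-place, here is a minimal sketch of submitting an Athena query over data already sitting in S3, via boto3. The database, table, and results bucket are hypothetical names for illustration.

```python
# Sketch: query-in-place over a data lake with Athena via boto3.
# Database name, table name, and output bucket are hypothetical.
def build_athena_query(database, output_bucket):
    """Assemble parameters for athena.start_query_execution()."""
    return {
        "QueryString": (
            "SELECT event_date, COUNT(*) AS events "
            "FROM clickstream GROUP BY event_date"
        ),
        "QueryExecutionContext": {"Database": database},
        # Athena writes result files to this S3 location.
        "ResultConfiguration": {
            "OutputLocation": f"s3://{output_bucket}/athena/"
        },
    }

params = build_athena_query("datalake", "example-results-bucket")
# With credentials configured:
# import boto3
# boto3.client("athena").start_query_execution(**params)
```

Because Athena scans the data in place, cost optimizations such as partitioning and columnar formats (Parquet/ORC) directly reduce the bytes scanned per query.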
HERE Technologies enables people, enterprises, and cities around the world to harness the power of location. In this session, you learn how HERE uses JFrog Artifactory with Amazon EFS to deliver close to a million downloads and uploads per day to its CI/CD environment. We walk you through HERE's AWS process for handling development at scale, and we discuss lessons learned and best practices for success throughout.
In the fast-paced world of news media, having a high performance, reliable storage service is a critical component for delivering content at scale. Learn how Thomson Reuters leverages Amazon EFS to host their rich media content, saving them time and money while delivering unmatched reliability. Come hear about their journey, their approach, their architecture, and the considerations they took when building out their environment. Best practices and tips for success will be shared throughout.
For organizations looking to glean insights from their data, it is essential to deploy the right environment to successfully support analytics workloads. Learn about the different storage options from AWS, and discuss with our experts how to select the best option for your big data analytics workloads. Hear how one customer, ViaSat, used Amazon EBS for their Apache Kafka and Apache Hadoop workload to improve cost and performance. We also describe best practices and share tips for success throughout.
Driven by higher resolution and an increasing amount of content due to direct B2C delivery, media companies are looking to cost-effectively leverage cloud compute scalability. Emerging use cases, such as media supply chains, VFX/animation rendering, and transcoding for OTT streaming, require careful planning as they move to the cloud. Storage is critical to the performance and processing of media. In this session, we discuss various AWS cloud storage strategies for different media workloads. We take a deep dive into media supply chains (including content transcoding, QC, mastering, and packaging), post-production tasks in the cloud, and other media and entertainment workloads. You also learn how Theory Studios uses storage on AWS cloud to support rendering visual effects and animation on Amazon EC2 Spot Instances.
Join us for an overview of AWS Snowball and Snowball Edge, a collection of self-service storage appliances built for petabyte-scale data ingest and export operations in the cloud. AWS Snowball and Snowball Edge make it easy to migrate mixed data types to the cloud at scale, whether in support of enterprise workload transformation, active archive, backup & recovery or data lake seeding. In this session, you will hear from organizations that are using AWS Snowball to migrate their critical data assets to the cloud with minimal cost and operational overhead. See how quickly you can accelerate your cloud migration timeline using AWS Snowball and Snowball Edge.
Running a WordPress site in the cloud across multiple instances can create a never-ending list of challenges in keeping your site running optimally. Unpredictable code updates or theme changes can require manual or programmatic updates to every single Amazon EC2 instance hosting WordPress. Come join us in this deep dive session, where we detail how you can leverage EC2, Amazon RDS, Amazon ElastiCache, Auto Scaling, Elastic Load Balancing, Amazon Route 53, and Amazon CloudFront. We explore how to take advantage of Amazon Elastic File System (Amazon EFS) and Amazon Simple Storage Service (Amazon S3) as shared storage to deliver a highly available and massively scalable infrastructure that can dynamically scale up or down to automatically adjust to unpredictable traffic demands. We also share best practices, provide performance tuning hints, and describe cost optimization techniques throughout.
STG325: Case Study: Come Learn How SiriusXM and Digital ReLab Leveraged Amazon EFS for their Media Workflows
In the rapidly transforming industry of Media and Entertainment, accelerating time-to-market can be critical to success. Learn how SiriusXM leveraged Amazon EFS to quickly launch a new initiative and turn potential obstacles into opportunity. SiriusXM was able to gain agility for their product launch by leveraging key features of EFS, while providing very high levels of availability and durability at a low TCO. Also, come see how Digital ReLab built its media production workflow using Amazon EFS, more quickly and safely than completely refactoring existing code. Amazon EFS enabled a fast, scalable cloud offering of their Starchive solution without sacrificing their ability to run on-premises. This design has allowed Digital ReLab to quickly respond to demand while providing a lower TCO.
In a rapid enterprise application development environment, messaging middleware is critical for integration of heterogeneous platforms and scalability. To deploy a highly available messaging solution, a highly durable shared file system that easily scales as needed is essential. Amazon EFS offers a shared file system in the cloud at less than half the cost of a self-managed cloud solution with third-party software. Come to this session to hear how you can use TIBCO Enterprise Message Service (EMS) or IBM MQ with Amazon EFS to deploy enterprise messaging middleware that is reliable, scalable, and fault tolerant, in as little as 30 minutes. We describe best practices and share tips for success throughout.
ProtectWise shifts network security to the cloud to provide complete visibility and detection of enterprise threats and incident response. Built entirely on AWS, the ProtectWise grid has the unique ability to create an unlimited retention window with full-fidelity forensics, automated retrospection, and advanced visualization. To enable customers to store petabytes of networking data and analyze it in seconds, they use Apache Solr and Apache Cassandra to analyze encrypted raw packet data and metadata about network packets—billions of items per day. Handling such a large volume of data requires an architecture that is both innovative and cost-effective. In this session, you learn how ProtectWise has optimized their solution on AWS using hot, warm, and cold shards across EC2 instance store, Amazon Elastic Block Store (Amazon EBS), and Amazon S3 for cost and scalability.
Experian gathers, analyzes, and processes credit data at massive scale to help businesses make smarter decisions, individuals gain access to financial services, and lenders minimize risk. The company built its petabyte-scale data-ingestion and analytics solution using CDH (Cloudera Distribution Including Apache Hadoop) running on Amazon EC2, with data stored in Amazon EBS and Amazon S3. This next-generation big data platform aims to improve data accuracy by moving away from traditional batch uploads to a real-time, API-based ingestion process. In this talk, you will learn how Experian has leveraged different AWS compute and storage services for agility and quicker time to market. We will discuss lessons learned and best practices for success throughout.
This is your chance to learn directly from top CTOs and Cloud Architects from some of the most innovative AWS customers. In this lightning round session, we'll have an action-packed hour, jumping straight to the architecture and technical detail for some of the most innovative data storage solutions of 2017. Hear how Insitu collects and analyzes data from drone flights in the field with AWS Snowball Edge. See how iRobot collects and analyzes IoT data from their robotic vacuums, mops, and pool cleaners. Learn how Viber maintains a petabyte-scale data lake on Amazon S3. Understand how Alert Logic scales their massive SaaS cloud security solution on Amazon S3 & Amazon Glacier.
Headquartered in the UK, Vodafone provides services in 26 additional markets via Operating Companies (OpCos). The Vodafone Digital TV team's journey on AWS began with the goal of enabling these OpCos to ramp up their TV deployment within 24 hours. This session covers two phases of Vodafone's journey to bring agility, cost, and operational efficiencies to their Pay TV offering. We dive into technical details, architecture, and lessons learned from Vodafone's lift-and-shift hybrid approach in the first phase, which resulted in significantly reduced deployment time and cost while adding flexibility. Learn technology patterns and best practices that Vodafone and its partners have adopted to begin the second phase of optimization, constructing a cloud-centric architecture that uses microservices, Amazon S3, AWS Lambda, Amazon EC2, Elastic Load Balancing, and Amazon CloudFront to support the Vodafone TV product.
Over 40 years, Aspect Software has grown into a multinational leader for contact center solutions, recently launching Aspect Via, a comprehensive cloud-based customer engagement platform. In this session, learn how Aspect used AWS Lambda, Amazon API Gateway, Application Load Balancer, development tooling from Swagger, and microservices to build a secure, scalable, and maintainable API framework that can be integrated into multiple channels of customer interaction. We focus on best practices for transforming on-premises-based applications and moving latency-sensitive services supporting telephony applications to an API-based platform in the cloud.
Customers like Twilio, AudioCodes, and Vonage power their voice, video and messaging services using AWS's global infrastructure. In this session, we walk through the core AWS features and partner products that enable carrier-grade Voice-over-IP (VoIP), WebRTC, and IP Multimedia Subsystems (IMS) solutions. You'll walk away with best practices and lessons learned from our customers' experience. We address the requirements for low latency, high availability, and scalability, critical to communication workloads. To do so, we use a combination of advanced Amazon EC2 networking, Auto Scaling, CloudWatch, Elastic Load Balancing, Route 53, Direct Connect, and AWS Marketplace–provided options.
In June 2017, AWS announced the general availability of the Greengrass service, bringing local compute, messaging, data caching, and sync capabilities to network edge devices. In this session, you will learn how AWS IoT, Greengrass, and Lambda@Edge are integrated into Nokia's Multi-Access Edge Compute (MEC) solution, enabling a platform that provides a programming model at the edge as well as the specialized access necessary for the rollout of advanced 4G and 5G use cases. We will dive into the architecture of this MEC implementation, which is tailored to aggregate traffic from multiple macro-cellular and small-cell stations in LTE and 5G networks. You will learn to take advantage of the containerized programming environment on the MEC platform while also connecting with the ecosystem of AWS services.
Join this State of the Union to learn about the latest developments from Amazon for enterprise workloads such as Windows, VMware, and SAP. Sandy Carter, AWS vice president for Enterprise Workloads, discusses the evolution of AWS services for enterprise workloads and the new features and services that we are launching for Windows and VMware. She shares the company's vision for continuing to innovate in this space to make AWS the premier place for enterprise customers. Also, several major customers discuss their own experience running enterprise workloads on AWS as well as pursuing new solutions in areas like AI and IoT.
DraftKings is the largest Daily Fantasy Sports platform on which millions of users compete in 11 different fantasy sports for a chance to win real money. The traffic on the DraftKings website and native apps is highly volatile and directly correlates with the interest in sporting events, with scoring events in those games frequently driving a five-fold spike in traffic on a minute-to-minute basis. In this session, you learn how DraftKings runs a highly elastic server farm and scales its Windows-based infrastructure using a combination of AWS Lambda, Amazon CloudWatch, Auto Scaling groups, and predictive analysis techniques.
Learn how media and entertainment companies use Amazon EC2 for Windows Server for fast rendering on film and television projects. In this session, we discuss how to architect a Windows solution using Deadline to allow the freedom to easily access any combination of on-premises or cloud-based compute resources. Also, learn how to set up a hybrid Windows file system and storage for best performance and cost efficiency. With flexible third-party licensing options, customers using AWS resources can purchase software licenses from the Thinkbox marketplace, deploy existing licenses, or leverage a combination of the two.
Learn how to architect fully available and scalable Microsoft solutions and environments in AWS. Find out how Microsoft solutions can leverage various AWS services to achieve more resiliency, replace unnecessary complexity, simplify architecture, provide scalability, and introduce DevOps concepts, such as compliance, governance, automation, and repeatability. Also, plan authentication and authorization, and explore various hybrid scenarios with other cloud environments and on-premises solutions and infrastructure. Learn about common architecture patterns for network design, Active Directory, and business productivity solutions like Dynamics AX, CRM, and SharePoint, as well as common scenarios for custom .NET and .NET Core with SQL deployments and migrations.
Migrating databases to the cloud is a critical part of an organization's cloud journey and requires careful planning and architecture considerations, including migration methods. This session provides best practices and guidelines for migrating databases to, and architecting hybrid database architectures on, AWS, with a focus on Microsoft SQL Server. We review current SQL Server on Amazon RDS and SQL Server on Amazon EC2 capabilities, and compare and contrast various migration methods, including SQL export, backup and restore, and AWS Database Migration Service (AWS DMS). We also look at how Expedia is migrating monolithic SQL Server databases to AWS using a hybrid approach that leverages SQL Server Distributed Availability Groups. Expedia shares lessons learned during the initial test and deployment phase, followed by a demo of their existing architecture and deployment.
Enterprise organizations often require a global Active Directory footprint to support their Windows-based workloads. This session describes best practices for deploying Active Directory on AWS. Starting with a single VPC, we expand to many VPCs across many Regions, demonstrating AWS capabilities to support a global Active Directory environment.
Moving your entire CI/CD pipeline to AWS can be a daunting task. You have put years into building out your current system and perfecting it. How do you know the new world will be better, easier, and more scalable? The good news is you don't have to go “all in”. When developing software, we have learned that small, incremental steps are usually the safest and fastest way to go. Why would modifying your CI/CD pipeline be any different? In this session, we move a .NET application from a VSTS environment into AWS incrementally, allowing you to go as deep as you want.
More and more companies recognize the significant savings they can realize by moving their existing software licenses to AWS. This session covers how to bring your Microsoft licenses to AWS, and then demonstrates using PowerShell to import your Windows Server image from VMware or Hyper-V, configure Windows KMS with your license key, and launch an EC2 Dedicated Host. We discuss ways you can use Amazon EC2 Systems Manager to manage license compliance.
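The session demonstrates the import with PowerShell; as a rough cross-language sketch, the same VM import can be expressed as an EC2 `ImportImage` request via boto3. The bucket, key, and description below are hypothetical placeholders.

```python
# Sketch (boto3 rather than the session's PowerShell): a VM import request
# for a Windows Server image exported to S3 from VMware or Hyper-V.
# Bucket, key, and description are hypothetical placeholders.
import_request = {
    "Description": "Windows Server 2016 from on-prem VMware",
    "LicenseType": "BYOL",  # bring your own license, e.g. for Dedicated Hosts
    "DiskContainers": [{
        "Description": "boot disk",
        "Format": "VMDK",   # use VHD/VHDX for Hyper-V exports
        "UserBucket": {
            "S3Bucket": "example-import-bucket",
            "S3Key": "exports/winserver.vmdk",
        },
    }],
}
# With credentials configured:
# import boto3
# boto3.client("ec2").import_image(**import_request)
```

The `BYOL` license type is what allows the resulting AMI to run your existing Windows license on an EC2 Dedicated Host.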
VMware Cloud on AWS lets you migrate your existing on-premises applications to the AWS Cloud while retaining the tooling, management, and operating processes you already have. VMware Cloud on AWS brings VMware's Software-Defined Data Center (SDDC) to the AWS Cloud, allowing customers to run applications across operationally consistent VMware vSphere®-based private, public, and hybrid cloud environments, with optimized access to AWS services. You can use the VMware Cloud on AWS solution to bring the scalability and agility of the cloud to your existing Microsoft applications. This session provides an architectural deep dive on VMware Cloud on AWS for Microsoft applications, covering the key architectural components of the solution. Attendees learn best practices and receive practical advice on how to start using VMware Cloud on AWS for their Microsoft applications.
This session dives deep on best practices and considerations for running Microsoft SQL Server on AWS. We cover best practices for deploying SQL Server, how to choose between Amazon EC2 and Amazon RDS, and ways to optimize the performance of your SQL Server deployment for different types of applications. We review in detail how to provision and monitor your SQL Server databases, and how to manage scalability, performance, availability, security, and backup and recovery in both Amazon RDS and Amazon EC2. In addition, we discuss how you can set up a disaster recovery solution between an on-premises SQL Server environment and AWS, using native SQL Server features like log shipping, replication, and AlwaysOn Availability Groups.
Over the past decade, Verizon made significant investments in on-premises technology. Migrating legacy applications and IT systems takes time, so architecting a secure and performant hybrid architecture is essential to Verizon's cloud adoption. In this session, you see how Verizon operationalized their existing on-premises IT infrastructure with AWS while providing the flexibility needed for both modern and legacy applications. Verizon solved extremely challenging enterprise constraints. Learn from Verizon's cloud experience, and see the resulting architectures designed to meet strict security and compliance requirements while delivering faster application and system migration.
WIN315: Deploying .NET Applications to AWS: Best Practices with KPMG
In this session, we look at the best practices for deploying Microsoft-based applications into AWS that use technologies such as .NET Core, Windows, and Microsoft SQL Server. Learn how to build a scalable and resilient environment for your application using Amazon VPC, Amazon EC2, Amazon RDS, Amazon S3, and AWS CloudFormation, and understand how to integrate with a CI/CD pipeline using third-party tools to automate deployments. We look at the key benefits of deploying Microsoft-based solutions into AWS over alternatives, including in areas such as security, resilience, operations, and cost optimization. We cover a successful KPMG use case in which a failing on-premises solution was completely migrated to AWS using DevOps principles to create a simple, fully automated, and highly scalable architecture that also meets KPMG security standards.
The new AWS Tools for Visual Studio Team Services (VSTS) provide integration into many popular AWS services, such as Amazon S3, AWS Elastic Beanstalk, AWS CodeDeploy, AWS Lambda, and more. These tools provide customers with a set of tasks they can include in build and release definitions in VSTS and on-premises TFS instances to work with AWS services. In this session, we show how you can use the new AWS Tools for VSTS in new and existing VSTS build/release pipelines to interoperate with many AWS resources. We demonstrate how you can use the build tasks in the new extensions to easily work with content in Amazon S3 buckets, perform deployments to AWS Elastic Beanstalk environments, and deploy .NET Core functions and serverless applications to AWS Lambda, all from within the familiar VSTS project console.
When you move Windows workloads to AWS, it is important to have an Active Directory in the cloud to support group policy management, authentication, and authorization. This session is a deep dive on AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft Active Directory (AD). We cover how the service operates in support of stand-alone directory and trust-based federation use cases. Topics include features that enable you to migrate a broad range of applications to AWS, how to use SaaS applications, such as Office 365, when managing users in AWS Managed Microsoft AD, how to secure trusts when federating to on-premises Active Directory, and security features to help you with your corporate security policies and compliance.
Many enterprises choosing to adopt Office 365 for business productivity continue to use Amazon EC2 instances to run Microsoft workloads that require highly advanced functionality and customization. In addition, many enterprise deployments of Office 365 have requirements to manage and synchronize user profiles to Office 365, such as restricting user access and providing secure mobile access. They leverage AWS for these components. In this session, you learn how to architect a solution that benefits from the scalability and agility of AWS while maintaining an investment in Office 365 productivity. We discuss the Office 365 services most commonly deployed to AWS to meet these requirements and show how to connect AWS to Azure AD to ensure a globally consistent set of access policies.
In this session, you will learn from AWS customers who are developing curriculum and training programs to build the tech talent of the future. Come hear from Digital Divide Data about their work alongside AWS and the National Museums of Kenya. Listen to their story of how they virtually captured one of the largest collections of archaeology and paleontology in 3D digital imagery, while simultaneously empowering young adults with skills in digitization, cloud services, mobile technologies, and database administration. Santa Monica College also joins the discussion with insight into the increasing importance of new technology, IT training resources, and knowledge sharing. Explore how to build a cloud-based curriculum, as well as how to expand your pool of applicants beyond traditional recruiting methods.
Emergencies happen with no notice, whether weather-related events or man-made incidents. Countless lives can be saved not only by predicting disastrous events but also by reacting quickly and effectively. In this session, you will discover how organizations are using AWS capabilities to predict and respond to emergencies around the world. StormSense, a project led by the City of Virginia Beach, enhances the capability of Virginia Beach and the neighboring communities of Hampton Roads, VA, to predict coastal flooding resulting from storm surge, rain, and tides in ways that are replicable, scalable, measurable, and make a difference worldwide. LiveSafe, a communications platform that improves safety and prevention efforts, puts a mobile security system in the hands of everyone in an organization, deputizing employees so they feel involved and empowered to do something when they see something. LiveSafe's cloud-based command dashboard receives tips in real time and allows security officials to respond via secure live chat.
AWS GovCloud (US) is an isolated AWS Region designed to help US government agencies and highly regulated organizations meet their compliance needs, including the International Traffic in Arms Regulations (ITAR) and Federal Risk and Authorization Management Program (FedRAMP). AWS GovCloud (US) makes it safe and easy to move sensitive data and regulated IT workloads to the cloud, through its adherence to numerous compliance and regulatory requirements. Join us to learn about AWS GovCloud (US) and how AWS can do the heavy lifting for your government agency or regulated enterprise.
WPS206: Modernizing Government in the Cloud
Cloud computing can help government organizations increase innovation, efficiency, agility, and resiliency—all while reducing costs. This session highlights how small, high-powered teams across government agencies are breaking down innovation barriers, tackling mission-critical operations, and delivering more value with the cloud in highly regulated, unclassified environments. This session features varying perspectives from leaders across government agencies.
Explore how education can create equitable learning experiences in the era of bring-your-own-device and Google Chromebooks, by delivering Microsoft Windows applications to students with Amazon WorkSpaces and Amazon AppStream 2.0. We discuss how students of all ages can use design applications, business intelligence tools, technical programs, and even desktop games such as Minecraft as part of their class curriculum, and still have a unified end-user experience, regardless of the hardware they are using. We dive into the steps for setting up WorkSpaces and AppStream 2.0 within a classroom, choosing the right option for your school, connecting existing storage and identity, enabling applications for students, and managing costs.
Join AWS in examining governance and compliance designs aimed at helping organizations meet HIPAA and HITRUST standards. Learn how to better validate and document your compliance, expedite access to AWS compliance accelerators, and discover new ways to use AWS native features to monitor and control your accounts. This session is for a technical audience seeking to dive deep into the AWS service offerings, console, and API.
FedRAMP introduced an expedited version of its authorization process called FedRAMP Accelerated. The US General Services Administration used this redesign to advance cloud.gov, which runs on AWS GovCloud (US). With cloud.gov, government agencies can quickly deploy applications, including highly sensitive workloads that comply with federal policies; run scalable cloud-native applications; run experiments, building and testing prototypes without adding extra expense; and shorten the path to an Authority to Operate (ATO) for applications hosted on cloud.gov. We take a deeper dive into this last point to discuss concrete steps for how developers can approach the Assessment and Authorization process, the use of DevOps and agile methodologies, and the role of security automation in speeding ATO.
We Power Tech
WPT201: Gender Identity in Tech
How do gender identity, gender expression, and policies for LGBTQI+ employees improve inclusion in the workplace? How can tech leaders build work environments where employees can bring their whole selves to work? This panel will discuss gender identity, and strategies for managers on how to create highly functional and inclusive teams.
This panel focuses on the racial digital divide. Panelists discuss the pipeline, recruiting, hiring, and retention—and share stories on what it means to be underrepresented in this industry. They also discuss strategies for overcoming both microaggressions and systemic challenges.
WPT203: Diversity in Tech: Lightning Talks
In a series of short presentations, leaders of tech inclusion share how they are shaping the future of technology, and tactical ways for people in tech to get involved. This session will be followed by a networking happy hour.
Join senior leadership from AWS as they introduce some of the most innovative women in fashion, philanthropy, and technology who have gender equity at the center of their work. In this series of rapid talks, you get a front-row seat to the world's leading and up-and-coming women in tech.