Andy Jassy, CEO of Amazon Web Services, delivers his AWS re:Invent 2018 keynote, featuring the latest AWS news and announcements. Learn more about AWS at https://amzn.to/2RiLQte.
Topics:
00:01:10 AWS business update
00:05:00 Cloud market share
00:22:00 Glacier Deep Archive
00:25:45 Amazon FSx
00:32:30 Dean Del Vecchio, Guardian - CIO
00:43:45 AWS Control Tower
00:47:00 AWS Security Hub
00:49:20 AWS Lake Formation
01:05:00 DynamoDB Read/Write Capacity On Demand
01:09:00 Amazon Timestream
01:16:20 Amazon Quantum Ledger Database
01:17:40 Amazon Managed Blockchain
01:30:00 Amazon Elastic Inference
01:34:00 AWS Inferentia
01:39:00 Ross Brawn OBE, Formula 1 - Managing Director
01:51:30 Amazon SageMaker Ground Truth
02:00:10 Amazon SageMaker RL
02:02:10 AWS DeepRacer
02:08:00 Dr Matt Wood, AWS - GM Deep Learning and AI
02:14:55 Amazon Textract
02:18:30 Amazon Personalize
02:22:55 Amazon Forecast
02:28:30 Pat Gelsinger, VMware - CEO
02:33:50 AWS Outposts
Watch Werner Vogels deliver his AWS re:Invent 2018 keynote. Learn more about AWS at https://amzn.to/2FKc7zk. This year's keynote includes featured guests Yuri Misnik, Executive General Manager, National Australia Bank (NAB); Ethan Kaplan, Chief Product Officer, Fender Musical Instruments; Mai Lan Tomsen Bukovec, AWS VP & General Manager of S3; and Holly Mesrobian, Director of Engineering, AWS Lambda. In the keynote, hear about new AWS launch announcements and get a preview of Werner's new video series, "Now Go Build" - https://youtu.be/a42kxHSX4Xw.
Keynote speakers:
00:00:00 Dr Werner Vogels, Amazon CTO
00:32:30 Mai Lan Tomsen Bukovec, AWS VP & General Manager of S3
00:50:30 Ethan Kaplan, Chief Product Officer, Fender Musical Instruments
01:02:50 Holly Mesrobian, Director of Engineering, AWS Lambda
01:33:10 Yuri Misnik, Executive General Manager, National Australia Bank (NAB)
New AWS launch announcements:
01:15:20 AWS Toolkits for popular IDEs
01:17:00 Custom Runtimes for Lambda
01:18:50 Lambda Layers
01:20:40 Nested Applications using Serverless Application Repository
01:23:10 Step Functions service integrations
01:24:25 WebSocket support for API Gateway
01:47:15 AWS Well-Architected Tool
Watch Peter DeSantis, VP of AWS Global Infrastructure and Customer Support, deliver the Monday Night Live keynote, featuring Chris Dyl of Epic Games and Keith Bigelow of GE Healthcare. Learn more at https://amzn.to/2DJJHmz.
Watch the Global Partner Keynote featuring Terry Wise, Vice President of Global Alliances and Channels, AWS. Learn more about re:Invent 2018 at https://amzn.to/2BCsNF6.
Featured Guests:
- Pebbles Sy-Manalang, CIO, Globe Telecom
- Ramin Sayar, President & CEO, Sumo Logic
- Brad Jackson, CEO, Slalom
- Bernd Heinemann, Board Member, Allianz Germany
In this session, learn how experienced leaders in digital advertising respond to the rapid evolution and sophistication of the advertising market driven by innovation and groundbreaking technology. Our customers share real-world applications they've leveraged in the cloud and how they see the media landscape changing as adoption of AI in the space becomes more widespread. Learn about existing and upcoming advancements and how they affect digital transformation in the years to come. Come away with ideas on how you can apply these learnings to your technology stack.
In this session, hear from an AWS customer about how they leveraged Amazon Rekognition deep learning-based image and video analysis to power a data-driven decision system for creative asset production. Learn how this customer was able to leverage the raw data provided by Amazon Rekognition combined with performance data to discover actionable insights. See a demonstration of the solution, and hear about media- and advertising-specific use cases. Learn from the customer's experiences implementing their architecture, the challenges, and the pleasant surprises along the way.
Artificial Intelligence & Machine Learning
Amazon has a long history in AI, from personalization and recommendation engines to robotics in fulfillment centers. Amazon Go, Amazon Alexa, and Amazon Prime Air are also examples. In this session, learn more about the latest machine learning services from AWS, and hear from customers who are partnering with AWS for innovative AI.
Curious about how Amazon machine learning (ML) services can enable healthcare organizations to find the insights they need to survive and thrive? Join us to learn how Takeda researchers built and trained their own disease-specific ML models, including deep-learning models using Deloitte ConvergeHEALTH running on AWS to simulate and quantify the overall disease burden and identify potential risks. This session is brought to you by AWS partner, Deloitte Consulting LLP.
A single device can produce thousands of events every second. In traditional implementations, all data is transmitted back to a server or gateway for scoring by a machine learning (ML) model. This data is also stored in a data repository for later use by data scientists. In this session, we explore data science techniques for dealing with time series data leveraging Amazon SageMaker. We also look at modeling applications using deterministic rules with streaming pipelines for data prep, and model inferencing using deep learning frameworks directly onto edge devices or onto AWS Lambda using Project Flogo, an open-source event-driven framework. This session is brought to you by AWS partner, TIBCO Software Inc.
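To make the data-prep stage concrete, here is a minimal sketch of the kind of sliding-window featurization data scientists commonly apply to sensor time series before training a model in Amazon SageMaker. The function name, window size, and sample readings are illustrative, not from the session.

```python
def sliding_windows(series, window, horizon=1):
    """Turn a univariate series into (features, target) pairs.

    Each feature vector is `window` consecutive readings; the target is
    the value `horizon` steps after the window ends.
    """
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        features = series[i:i + window]
        target = series[i + window + horizon - 1]
        pairs.append((features, target))
    return pairs

# Example: readings from a single device, sampled at a fixed interval.
readings = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
pairs = sliding_windows(readings, window=3)
print(pairs)
```

Each pair becomes one training example; the same transformation can run at the edge to build feature vectors for on-device inference.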
In this session, learn how the C3 Platform on AWS is architected and why it accelerates the development of enterprise-scale AI applications. Hear how customers like the US Air Force, Enel, and global manufacturing leaders are using C3 on AWS to rapidly aggregate, unify, federate, and normalize data from sensor networks and enterprise IT systems, and apply ML/AI algorithms against this data to unlock significant economic value. Hear from global organizations that are solving complex business challenges, from optimizing the supply network, to predicting which assets will fail, to identifying fraud and money laundering. This session is brought to you by AWS partner, C3.
In this session, learn how the C3 Platform on AWS is architected to accelerate the development of modern AI applications. Hear how customers and partners have used the C3 Type System's data-object centric abstraction layer to realize 10-100x productivity gains when building complex AI/ML applications. In addition, hear how global organizations are using C3 on AWS to solve complex business challenges, from optimizing the supply network, to predicting asset failure, to identifying fraud and money laundering. This presentation is brought to you by AWS partner, C3.
Artificial intelligence (AI) is rapidly evolving, and much of the advancement is driven by deep learning, a machine learning technique inspired by the inner workings of the human brain. In this session, learn what deep learning is and how you can use it in your applications to unlock new and exciting capabilities for your customers and business. Also hear from Samsung SDS about how it developed a deep-learning model for cardiac arrhythmia detection using Apache MXNet, an open-source deep-learning framework. By the end of the session, you will understand how to leverage deep learning in your applications and get started with it.
Video-based tools have enabled advancements in computer vision, such as in-vehicle use cases for AI. However, it is not always possible to send this data to the cloud to be processed. In this session, learn how to train machine learning models using Amazon SageMaker and deploy them to an edge device using AWS Greengrass, enabling you to process data quickly at the edge, even when there is no connectivity.
Amazon brings natural language processing, automatic speech recognition, text-to-speech services, and neural machine translation technologies within the reach of every developer. In this session, learn how to add intelligence to any application with machine learning services that provide language and chatbot functions. See how others are defining and building the next generation of apps that can hear, speak, understand, and interact with the world around us. Complete Title: AWS re:Invent 2018: [REPEAT 2] Create Smart & Interactive Apps with Intelligent Language Services on AWS (AIM303-R2)
Analyzing customer service interactions across channels provides a complete 360-degree view of customers. By capturing all interactions, you can better identify the root cause of issues and improve first-call resolution and customer satisfaction. In this session, learn how to integrate Amazon Connect and AWS machine learning services, such as Amazon Lex, Amazon Transcribe, and Amazon Comprehend, to quickly process and analyze thousands of customer conversations and gain valuable insights. With speech and text analytics, you can pick up on emerging service-related trends before they get escalated or identify and address a potential widespread problem at its inception.
Join us for a deep dive on the latest features of Amazon Rekognition. Learn how to easily add intelligent image and video analysis to applications in order to automate manual workflows, enhance creativity, and provide more personalized customer experiences. We share best practices for fine-tuning and optimizing Amazon Rekognition for a variety of use cases, including moderating content, creating searchable content libraries, and integrating secondary authentication into existing applications.
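As a small taste of the API surface this session covers, here is a hedged sketch of filtering an Amazon Rekognition DetectLabels response by confidence. The helper name, threshold, and sample response are illustrative; the live call (shown commented out) requires AWS credentials.

```python
def labels_above(response, min_confidence=80.0):
    """Keep only label names from a Rekognition DetectLabels response
    whose Confidence meets the given threshold."""
    return [label["Name"] for label in response.get("Labels", [])
            if label["Confidence"] >= min_confidence]

# With AWS credentials configured, the response would come from the service:
# import boto3
# client = boto3.client("rekognition")
# with open("photo.jpg", "rb") as f:  # hypothetical file
#     response = client.detect_labels(Image={"Bytes": f.read()})

# The helper works on any response with the documented shape:
sample = {"Labels": [{"Name": "Car", "Confidence": 98.1},
                     {"Name": "Tree", "Confidence": 55.0}]}
print(labels_above(sample))
```

Thresholding like this is typical for content moderation pipelines, where low-confidence labels are routed to human review instead of being acted on automatically.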
Based on the same technology used at Amazon.com, Amazon Forecast uses machine learning to combine time series data with additional variables to build forecasts. Amazon Forecast requires no machine learning experience to get started. You only need to provide historical data, plus any additional data that you believe may impact your forecasts. Come learn more.
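Amazon Forecast imports historical data as simple delimited records of timestamp, item, and value. A minimal sketch of preparing such a file follows; the column order and field names are assumptions here and must match the dataset schema you define in the service.

```python
import csv
import io

def to_forecast_csv(records):
    """Serialize (timestamp, item_id, value) rows into a delimited
    target-time-series file of the kind Amazon Forecast imports from
    Amazon S3. Column order is illustrative; match your schema."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for timestamp, item_id, value in records:
        writer.writerow([timestamp, item_id, value])
    return buf.getvalue()

# Hypothetical demand history for one SKU:
history = [("2018-11-01 00:00:00", "sku-123", 42),
           ("2018-11-02 00:00:00", "sku-123", 47)]
print(to_forecast_csv(history))
```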
Amazon Mechanical Turk operates a marketplace for crowdsourcing, and developers can build human intelligence directly into their applications through a simple API. With access to a diverse, on-demand workforce, companies can leverage the power of the crowd for a range of tasks, from ML training and automating manual tasks to generating human insights. In this session, we cover key concepts for Mechanical Turk, and we share best practices for how to integrate and scale your crowdsourced application. By the end of this session, expect to have a general understanding of Mechanical Turk and know how to get started harnessing the power of the crowd.
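A common integration pattern the session touches on is requesting redundant assignments per task and aggregating worker answers by majority vote. The sketch below is illustrative: the aggregation helper and sample answers are hypothetical, and the commented-out CreateHIT call requires AWS credentials and a QuestionForm XML document you supply.

```python
from collections import Counter

def majority_answer(answers, min_agreement=2):
    """Aggregate redundant worker answers for one task; return the
    consensus answer, or None when workers disagree too much."""
    if not answers:
        return None
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count >= min_agreement else None

# With AWS credentials, tasks are created through the MTurk API, e.g.:
# import boto3
# mturk = boto3.client("mturk")
# mturk.create_hit(Title="Is this image a cat?", Description="...",
#                  Reward="0.05", MaxAssignments=3,
#                  LifetimeInSeconds=3600, AssignmentDurationInSeconds=300,
#                  Question=question_xml)  # question_xml: your QuestionForm

# Three workers answered the same hypothetical task:
print(majority_answer(["cat", "cat", "dog"]))
```

Requesting an odd number of assignments and requiring agreement is a cheap way to trade cost for label quality before the results flow back into your application.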
While technology continues to improve, there are still many things that human beings can do much more effectively than computers, such as performing data deduplication or content moderation. Traditionally, such tasks have been accomplished by hiring a large temporary workforce (which is time-consuming, expensive, and difficult to scale) or have gone undone. However, businesses and developers can use Amazon Mechanical Turk to access thousands of on-demand workers and then integrate the results of that work directly into their business processes and systems. In this session, learn how enterprises are using Mechanical Turk to scale and automate their human-powered workflows.
Amazon Textract enables you to easily extract text and data from virtually any document. Today, companies process millions of documents by manually entering the data or using customized optical character recognition solutions, which are prone to error and consume valuable resources. Join us to learn how Amazon Textract uses machine learning to simplify document processing by enabling fast and accurate text and data extraction so you can process millions of documents in hours.
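To illustrate what "fast and accurate text extraction" looks like in code, here is a hedged sketch of pulling detected lines out of an Amazon Textract DetectDocumentText response. The helper and sample response are illustrative; the commented-out service call requires AWS credentials.

```python
def extract_lines(response):
    """Collect the LINE blocks from a Textract DetectDocumentText
    response, in the order the service returned them."""
    return [block["Text"] for block in response.get("Blocks", [])
            if block["BlockType"] == "LINE"]

# With AWS credentials, the response comes from the service:
# import boto3
# textract = boto3.client("textract")
# with open("invoice.png", "rb") as f:  # hypothetical file
#     response = textract.detect_document_text(Document={"Bytes": f.read()})

# The helper works on any response with the documented Blocks shape:
sample = {"Blocks": [
    {"BlockType": "PAGE"},
    {"BlockType": "LINE", "Text": "Invoice #1001"},
    {"BlockType": "WORD", "Text": "Invoice"},
    {"BlockType": "LINE", "Text": "Total: $95.00"},
]}
print(extract_lines(sample))
```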
Deploying deep learning applications at scale can be cost prohibitive due to the need for hardware acceleration to meet latency and throughput requirements of inference. Amazon Elastic Inference helps you tackle this problem by reducing the cost of inference by up to 75% with GPU-powered acceleration that can be right-sized to your application's inference needs. In this session, learn about how to deploy TensorFlow, Apache MXNet, and ONNX models with Amazon Elastic Inference on Amazon EC2 and Amazon SageMaker. Hear from Autodesk on the positive impact of AI on tools used to design and make a better world. Learn about how Autodesk and the Autodesk AI Lab are using Amazon Elastic Inference to make it cost efficient to run these tools at scale.
Developers, start your engines! This breakout session provides an introduction to the newly launched AWS DeepRacer. Learn about the basics of reinforcement learning, what's under the hood, and your opportunities to experience AWS DeepRacer for yourself.
Successful machine learning models are built on high-quality training datasets. Labeling raw data to get accurate training datasets involves a lot of time and effort because sophisticated models can require thousands of labeled examples to learn from, before they can produce good results. Typically, the task of labeling is distributed across a large number of humans, adding significant overhead and cost. Join us as we introduce Amazon SageMaker Ground Truth, a new service that provides an effective solution to reduce this cost and complexity using a machine learning technique called active learning. Active learning reduces the time and manual effort required to do data labeling, by continuously training machine learning algorithms based on labels from humans. By iterating through ambiguous data points, Ground Truth improves the ability to automatically label data resulting in high-quality training datasets.
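Ground Truth's active-learning loop is fully managed, but the core idea of routing only ambiguous items to humans can be sketched with simple confidence-based splitting. Everything below (function name, threshold, sample predictions) is a hypothetical illustration of the technique, not the service's internals.

```python
def split_by_confidence(predictions, threshold=0.9):
    """Auto-accept labels the model is confident about; route
    ambiguous items to human annotators.

    predictions: list of (item_id, label, confidence) tuples from
    the current model.
    """
    auto_labeled, needs_human = [], []
    for item_id, label, confidence in predictions:
        if confidence >= threshold:
            auto_labeled.append((item_id, label))
        else:
            needs_human.append(item_id)
    return auto_labeled, needs_human

preds = [("img-1", "cat", 0.97), ("img-2", "dog", 0.55),
         ("img-3", "cat", 0.91)]
auto, human = split_by_confidence(preds)
# Human labels for `human` items are fed back to retrain the model,
# shrinking the ambiguous set on each iteration.
```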
Until now, many customers spent time creating or searching for the right algorithm and model when using Amazon SageMaker. In this session you'll learn how to build machine learning applications even faster by finding curated algorithms and model packages in AWS Marketplace and deploying them directly on Amazon SageMaker.
Today, organizations deploy more AI/ML workloads on AWS than on any other cloud platform. The cloud has removed many of the challenges associated with scalability, and it's never been easier or more cost effective to build custom and intelligent data models. In this session, learn how the C3 Platform leverages the full power of Intel Xeon Scalable processors on AWS to rapidly train, deploy, and operationalize AI/ML and big data applications like C3 Inventory Optimization and C3 Predictive Maintenance. In addition, a customer shares how these solutions helped achieve demonstrable value. This session is brought to you by AWS partner, Intel.
Come see examples of how Bebo uses Amazon SageMaker to power massive Fortnite tournaments every week. Traditional sports require referees, scorekeepers, field staff, and broadcast crews for every match. But esports are digital by nature. In this session, learn how machine learning and computer vision are enabling esports to occur at a massive scale. Learn how Bebo developed a model that can detect every victory and elimination, and can even prevent cheating on their tournament platform.
In this session, we cover best practices for enterprises that want to use powerful open-source technologies to simplify and scale their machine learning (ML) efforts. Learn how to use Apache Spark, the data processing and analytics engine commonly used at enterprises today, for data preparation as it unifies data at massive scale across various sources. We train models using TensorFlow, and we use MLflow to track experiment runs between multiple users within a reproducible environment. We then manage the deployment of models to production. We show you how MLflow can be used with any existing ML library and incrementally incorporated into an existing ML development process. This session is brought to you by AWS partner, Databricks.
The TensorFlow deep learning framework is used for developing diverse artificial intelligence (AI) applications, including computer vision, natural language, speech, and translation. In this session, learn how to use TensorFlow within the Amazon SageMaker machine learning platform. Then, hear from Advanced Microgrid Solutions about how they implemented a deep neural network architecture with Keras and TensorFlow to forecast energy prices in near real time. Complete Title: AWS re:Invent 2018: [REPEAT 2] Deep Learning Applications Using TensorFlow, ft. Advanced Microgrid Solutions (AIM401-R2)
With support for PyTorch 1.0 on Amazon SageMaker, you now have a flexible deep learning framework combined with a fully managed machine learning platform to transition seamlessly from research prototyping to production deployment. In this session, learn how to develop with PyTorch 1.0 within Amazon SageMaker using a novel generative adversarial network (GAN) tutorial. Then, hear from Facebook on how you can use the FAIRSeq modeling toolkit, which serves 6B translations daily for Facebook users, to train your own custom PyTorch models on Amazon SageMaker. Facebook also discusses the evolution of PyTorch 1.0 and features introduced to accelerate research and deployment. Complete Title: AWS re:Invent 2018: [REPEAT 1] Deep Learning Applications Using PyTorch, Featuring Facebook (AIM402-R1)
Amazon SageMaker, our fully managed machine learning platform, comes with pre-built algorithms and popular deep learning frameworks. Amazon SageMaker also includes an Apache Spark library that you can use to easily train models from your Spark clusters. In this code-level session, we show you how to integrate your Apache Spark application with Amazon SageMaker. We also dive deep into starting training jobs from Spark, integrating training jobs in Spark pipelines, and more.
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker takes away the heavy lifting of machine learning, removing the typical barriers to adoption. In this session, we dive deep into the technical details of each of the modules of Amazon SageMaker to showcase the capabilities of the platform. We also discuss practical deployments of Amazon SageMaker through real-world customer examples.
Natural language processing holds the key to unlocking business value from unstructured data. Organizations that implement effective data analysis methods gain a competitive advantage through improved decision-making, risk reduction, or enhanced customer experience. In this session, learn how to easily process, analyze, and visualize data by pairing Amazon Comprehend with services like Amazon Relational Database Service (Amazon RDS), Amazon Elasticsearch Service, and Amazon Neptune. We also share real-world examples of how customers built text analytics solutions with Amazon Comprehend.
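As a concrete sketch of the analysis step, here is a hedged example of tallying sentiment across a batch of Amazon Comprehend DetectSentiment responses before loading the results into a store such as Amazon RDS or Amazon Elasticsearch Service. The helper and sample responses are illustrative; the commented-out call requires AWS credentials.

```python
from collections import Counter

def sentiment_breakdown(responses):
    """Tally the top-level Sentiment field across a batch of
    Comprehend DetectSentiment responses."""
    return Counter(r["Sentiment"] for r in responses)

# With AWS credentials, each response comes from the service:
# import boto3
# comprehend = boto3.client("comprehend")
# response = comprehend.detect_sentiment(Text=review, LanguageCode="en")

samples = [{"Sentiment": "POSITIVE"}, {"Sentiment": "NEGATIVE"},
           {"Sentiment": "POSITIVE"}]
print(sentiment_breakdown(samples))
```

The resulting counts are exactly the kind of aggregate that feeds a dashboard or trend alarm downstream.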
Machine learning (ML) enables developers to build scalable solutions that maximize the use of media assets through automatic metadata extraction. From automatic transcription and language translation to face detection and celebrity recognition, ML enables you to automate manual workflows and optimize the use of your video content. In this session, learn how to use services such as Amazon Rekognition, Amazon Translate, and Amazon Comprehend to build a searchable video library, automate the creation of highlight reels, and more. Complete Title: AWS re:Invent 2018: Unlock the Full Potential of Your Media Assets, ft. Fox Entertainment Group (AIM406)
The Apache MXNet deep learning framework is used for developing, training, and deploying diverse AI applications, including computer vision, speech recognition, natural language processing, and more at scale. In this session, learn how to get started with Apache MXNet on the Amazon SageMaker machine learning platform. Chick-fil-A shares how it got started with MXNet on Amazon SageMaker to measure waffle fry freshness and how it leverages AWS services to improve the Chick-fil-A guest experience. Complete Title: AWS re:Invent 2018: [REPEAT 1] Build Deep Learning Applications Using Apache MXNet - Featuring Chick-fil-A (AIM407-R1)
Rohit Prasad, Amazon AI
We are living in a golden age of artificial intelligence (AI). Machines have already surpassed humans in some specific tasks, including image and speech recognition, thanks to the power of cloud computing, the abundance of data required to train AI systems, and improvements in foundational AI algorithms. While some express fear about the potential for AI systems to increasingly overtake the role of humans, together we should influence how these systems can improve every aspect of our lives. Join Rohit Prasad as he explores the opportunities for AI systems to augment human intelligence in ways that will make it accessible to everyone, increasing the societal good today and into the future.
How will self-driving cars change urban mobility patterns? This talk examines scientific contributions in the field of reinforcement learning, presented in the context of enabling mixed-autonomy mobility-the gradual and complex integration of autonomous vehicles into existing traffic systems. We explore the potential impact of a small fraction of autonomous vehicles on low-level traffic flow dynamics, using novel techniques in model-free deep reinforcement learning. We share examples in the context of a new open-source computational platform and state-of-the-art microsimulation tools with deep-reinforcement libraries.
Rapid improvements in computational simulation have driven advances in many industries that rely on modeling natural phenomena. In particular, visual effects engineers use AI algorithms to create stunning imagery for feature films like Star Wars, Harry Potter, and the Marvel Cinematic Universe. In this talk, Ronald Fedkiw will discuss his research to produce a new wave of simulation technology with more realistic facial animation to remove the 'uncanny valley,' more realistic and predictive cloth simulation, as well as the simulation of botanical trees.
Logged user interactions are one of the most ubiquitous forms of data available because they can be recorded from a variety of systems (e.g., search engines, recommender systems, ad placement) at little cost. Naively using this data, however, is prone to failure. A key problem lies in biases that systems inject into the logs by influencing where we will receive feedback (e.g., more clicks at the top of the search ranking). This talk explores how counterfactual inference techniques can make learning algorithms robust against bias. This makes log data accessible to a broad range of learning algorithms, from ranking SVMs to deep networks.
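One standard counterfactual technique in this area is inverse propensity scoring (IPS), which reweights logged feedback by the probability the old system showed each item. The sketch below is an illustrative toy estimator (the names and numbers are hypothetical), not the talk's specific method.

```python
def ips_estimate(log, target_policy):
    """Estimate the reward a new policy would earn, using only data
    logged under the old (biased) policy.

    log: list of (item, clicked, logging_propensity), where
         logging_propensity is the old policy's probability of
         showing that item.
    target_policy: dict mapping item -> probability the new policy
         shows it.
    """
    total = 0.0
    for item, clicked, propensity in log:
        # Reweight each logged click by how differently the new
        # policy would have treated this item.
        weight = target_policy.get(item, 0.0) / propensity
        total += clicked * weight
    return total / len(log)

# The old policy heavily favored item "a"; the new policy is uniform.
log = [("a", 1, 0.8), ("a", 0, 0.8), ("b", 1, 0.2)]
estimate = ips_estimate(log, {"a": 0.5, "b": 0.5})
```

IPS is unbiased but can have high variance when propensities are small, which is why practical systems clip or self-normalize the weights.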
One of the most exciting frontiers in science is building automated systems that use existing biomedical data to understand and ultimately treat human disease. The key difficulty in the case of cancer is that it is a highly heterogeneous disease, making it challenging to uncover which molecular alterations in tumors are important for the disease and to predict how an individual will respond to treatment. This talk presents an overview of integrative computational methods for analyzing cancer genomes that leverage a diverse range of complementary data in order to extract biomedically relevant insights.
The concept of insect-scale aerial drones has long been in the realm of science fiction rather than reality, especially since powering such small drones is fundamentally difficult. In this talk, Shyam Gollakota will share his work on robofly, the first honeybee-sized wireless drone to successfully lift off. He will also discuss an alternative biology-based solution that integrates sensing, computing, and communication functions onto live-flying bumblebees. Data generated from this mobile IoT platform can feed AI models that have the potential to generate valuable intelligence for applications ranging from precision irrigation to environmental sensing.
The abundance of data available today has been described as a sea change and its own economy. Data has enabled new products, services, businesses, and economies. How can designers thrive as data-savvy innovators in this new economy? What do designers need to know about data, machine learning, and artificial intelligence? In this talk, Jodi Forlizzi draws from multiple research and development efforts to present relevant findings about how to design in a new data-driven economy.
Quantum computing's theoretical potential to exponentially speed up deep learning stands in sharp contrast to the current reality. Implementations are imperfect, suffering from noise and poor coherence times, and scalability limitations. In this talk, we explore how quantum-enhanced machine learning plays a complementary role to classical techniques, rather than acting as a replacement. We discuss relevant computing paradigms, such as quantum annealing and gate-model quantum computing over discrete or continuous variables that are performed efficiently with hybrid classical-quantum protocols.
Since its launch in 2015, Alexa has enabled new experiences across many device form factors at home, at work, in the car, and on the go. With over 50,000 published skills, hundreds of new API feature releases, and numerous Alexa-enabled devices, it can be hard to keep up with the current pace. In this session, we get you up to speed on the Voice First movement and current Conversational AI trends, and we give demonstrations of some of the latest Alexa features and devices. Come learn about the new Alexa Skills Kit (ASK) multi-modal framework, Alexa Presentation Language (APL) for developers, Alexa skill fulfillment and consumables for customers, and some of the latest device offerings utilizing the Alexa Voice Service (AVS) and the new Alexa Gadgets Toolkit.
In this session, we walk you through the process of designing and adding in-skill purchasing to your skills. Experienced developers share their in-skill purchasing journey, the lessons they learned, and the best practices that they followed.
Join us for a deep dive into the system architecture for voice-enabled products with "Alexa Built-In". Device makers can use the Alexa Voice Service (AVS) to add conversational AI to a variety of products, from smart speakers and headphones to screen-based devices, smart home products, and more. Learn how to choose the right hardware and software tools, ensure a great customer experience with test and certification guidelines, and leverage qualified solution providers to get your products to market faster. Complete Title: AWS re:Invent 2018: Voice Assistants Beyond Smart Speakers - Integrate Alexa into Your Unique Product (ALX305)
Games push the boundaries of tech, enabling us to learn from them for a wide range of use cases. Join Gal Shenar, founder of Stoked Skills, in this talk about designing and implementing Alexa in-skill purchasing for 'Escape the Room' and 'Escape the Airplane' game skills. Learn about optimizing the user experience by soft launching with live updates and implementing a premium content in-skill purchase gate while monitoring user experience flow to increase conversion rates by using AWS Lambda.
Learn how the new Alexa Presentation Language makes it easy to develop interactive voice and touch experiences that are portable to any Alexa-enabled device with a screen: tablets, TVs, Echo Show, Echo Spot, and more. Learn how to use the new APL file format, reusable UI components, and device groupings to create rich, interactive multimodal experiences that automatically adapt to target devices.
In this session, learn about the latest update to the Smart Home Skill API, featuring new capability interfaces you can use as building blocks to connect any device to Alexa, including those that fall outside of the traditional smart home categories of lighting, locks, thermostats, sensors, cameras, and audio & video gear. Also learn how to create Alexa Skills that contain multiple interaction models to provide a seamless customer experience. We walk through an example case study to demonstrate how you can implement these technologies to model any device or feature with Alexa.
In this session, developers learn how to leverage Alexa in-skill purchasing APIs, Amazon Pay, and developer reporting tools to help unlock premium digital content in a custom voice experience. Discover how in-skill purchasing gives customers and developers the flexibility of making payments through consumables, subscriptions, and one-time entitlements. Dive deep, and leave this session with everything you need to know to make money with Alexa skills.
In this session, we explore the suite of developer tools offered by the Alexa Skills team and dive into how they can help you be more productive in coding, deploying, testing, debugging, and collaborating with others on your skill. Learn about the different tools and libraries we have built to help you through the development process. A guest speaker joins us to introduce real-world use cases where our tools helped the team improve their productivity and the robustness of their skill.
The transformation of the auto industry from manufacturers to mobility providers is centered on seamlessly and safely connecting vehicles to the outside world. In this session, we discuss how customers are using AWS for a variety of connected vehicle use cases. Leave this session with source code, architecture diagrams, and an understanding of how to use the AWS connected vehicle reference architecture to build your own prototypes. Also learn how companies leverage Amazon services such as Alexa, AWS IoT, AWS Greengrass, AWS Lambda, and Amazon Kinesis Data Analytics to rapidly develop and deploy innovative mobility services. Learn how to use new enhancements in your architectures, including the IoT Device Simulator, a scalable simulated-vehicle load-generation tool, as well as the AWS IoT Framework for Automotive Grade Linux (AGL), an integrated build tool for AGL that includes the AWS IoT Device SDK and AWS Greengrass.
Vehicle mobility is evolving, from traditional rental and fleet services, to car sharing, ride hailing, and future driverless services. Mobility providers need an agile, scalable, digital platform to manage all aspects of their fleet and its usage. In this session, Avis Budget Group (ABG) and Slalom walk through their serverless mobility platform using the AWS connected vehicle reference architecture, Amazon SageMaker, Amazon Kinesis Data Analytics, and AWS Lambda. Learn the practical application of using AWS IoT to connect vehicles and Amazon SageMaker to apply machine learning to uncover insights for use cases, including vehicle inventory, shuttling efficiency, driver behavior, and vehicle trajectory analysis to identify fraudulent vehicle usage. We dive deep into the overall solution and services mentioned above, as well as the operations dashboard ABG created with Uber's open source framework, deck.gl.
In this session, we discuss architectural principles that help simplify big data analytics. We'll apply these principles to various stages of big data processing: collect, store, process, analyze, and visualize. We'll discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, durability, and so on. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost. Complete Title: AWS re:Invent 2018: [REPEAT 1] Big Data Analytics Architectural Patterns & Best Practices (ANT201-R1)
Most companies are overrun with data, yet they lack critical insights to make timely and accurate business decisions. They are missing the opportunity to combine large amounts of new, unstructured big data that resides outside their data warehouse with trusted, structured data inside their data warehouse. In this session, we discuss the most common use cases with Amazon Redshift, and we take an in-depth look at how modern data warehousing blends and analyzes all your data to give you deeper insights to run your business. Intuit joins us to share their experience modernizing their analytics pipeline. Complete Title: AWS re:Invent 2018: [REPEAT 1] Modern Cloud Data Warehousing ft. Intuit: Optimize Analytics Practices (ANT202-R1)
Amazon EMR provides a flexible range of service customization options, enabling customers to use it as a building block for their data platforms. In this session, AWS customers Salesforce.com and Vanguard discuss in detail how they use Amazon EMR to build a self-service, secure, and auditable data engineering platform. Customers who want to optimize their design and configurations should attend this session to learn best practices from customer experts. Topics include achieving cost-efficient scale, using notebooks, processing streaming data, rapid prototyping of applications and data pipelines, architecting for both transient and persistent clusters, setting up advanced security and authorization controls, and enabling easy self service for users.
In this talk, Anurag Gupta, VP for AWS Analytic and Transactional Database Services, talks about some of the key trends we see in data lakes and analytics, and he describes how they shape the services we offer at AWS. Specific trends include the rise of machine-generated data and semi-structured/unstructured data as dominant sources of new data, the move towards serverless, API-centric computing, and the growing need for local access to data from users around the world.
As Amazon's consumer business continues to grow, so does the volume of data and the number and complexity of the analytics done in support of the business. In this session, we talk about how Amazon.com uses AWS technologies to build a scalable environment for data and analytics. We look at how Amazon is evolving the world of data warehousing with a combination of a data lake and parallel, scalable compute engines, such as Amazon EMR and Amazon Redshift. Complete Title: AWS re:Invent 2018: Under the Hood: How Amazon Uses AWS Services for Analytics at a Massive Scale (ANT206)
Modern application build-and-deploy workflows are creating new challenges for traditional security models. Traditional workflows need to be recast in new datasets, and new workflows need to be added to cover the expanding threat surface area. In this session, we explore the security challenges created by modern application build-and-deploy pipelines. We also discuss basic considerations for security defense, example use cases, and a customer case study to illustrate the concepts. This session is brought to you by AWS partner, Sumo Logic. Complete Title: AWS re:Invent 2018: Security Challenges & Use Cases in the Modern Application Build-and-Deploy Pipeline (ANT209-S)
Genomic sequencing is growing at a rate of 100 million sequences a year, translating into 40 exabytes by the year 2025. Handling this level of growth and performing big data analytics is a massive challenge in scalability, flexibility, and speed. In this session, learn from pioneering genomic sequencing company WuXi NextCODE, which handles complex and performance-heavy database and genomic sequencing workloads, about moving from on premises to all-in on the public cloud. Discover how WuXi NextCODE was able to achieve the performance that its workloads demand and surpass the limits of what it was able to achieve previously in genomic sequencing. This session is brought to you by AWS partner, NetApp, Inc.
Organizations need to gain insight and knowledge from a growing number of IoT, APIs, clickstreams, and unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we introduce key ETL features of AWS Glue, and we cover common use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL pipelines for your data lake. We also discuss how to build scalable, efficient, and serverless ETL pipelines using AWS Glue. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
As data volumes grow and customers store more data on AWS, they often have valuable data that is not easily discoverable and available for analytics. Learn how AWS Glue makes it easy to build and manage enterprise-grade data lakes on Amazon S3. AWS Glue can ingest data from a variety of sources into your data lake, clean it, transform it, and automatically register it in the AWS Glue Data Catalog, making data readily available for analytics. Learn how you can set appropriate security policies in the Data Catalog and make data available for a variety of use cases, such as running ad hoc analytics in Amazon Athena, running queries across your data warehouse and data lake with Amazon Redshift Spectrum, running big data analysis in Amazon EMR, and building machine learning models with Amazon SageMaker and AWS Glue. Additionally, Robinhood will share how they were able to move from a world of data silos to building a robust, petabyte-scale data lake on Amazon S3 with AWS Glue. Robinhood is one of the fastest-growing brokerages, serving over five million users with an easy-to-use investment platform that offers commission-free trading of equities, ETFs, options, and cryptocurrencies. Learn about the design paradigms and tradeoffs that Robinhood made to achieve a cost-effective and performant data lake that unifies all data access, analytics, and machine learning use cases.
Amazon Kinesis makes it easy to speed up the time it takes for you to get valuable, real-time insights from your streaming data. In this session, we walk through the most popular applications that customers implement using Amazon Kinesis, including streaming extract-transform-load, continuous metric generation, and responsive analytics. Our customer Autodesk joins us to describe how they created real-time metrics generation and analytics using Amazon Kinesis and Amazon Elasticsearch Service. They walk us through their architecture and the best practices they learned in building and deploying their real-time analytics solution.
Enabling interactive data and analytics for thousands of users can be expensive and challenging: from forecasting usage, provisioning and managing servers, and securing data, to governing access and ensuring auditability. In this session, learn how Amazon QuickSight's serverless architecture and pay-per-session pricing enabled the National Football League (NFL) and Forwood Safety to roll out interactive dashboards to hundreds or thousands of users. Understand how the NFL utilizes embedded Amazon QuickSight dashboards to provide clubs, broadcasters, and internal users with Next Gen Stats data collected from games. Also, learn about Forwood's journey to enabling dashboards for thousands of Rio Tinto users worldwide, utilizing Amazon QuickSight readers, federated single sign-on, dynamic defaults, email reports, and more.
Customers are migrating their analytics, data processing (ETL), and data science workloads running on Apache Hadoop/Spark to AWS in order to save costs, increase availability, and improve performance. In this session, AWS customers Airbnb and Guardian Life discuss how they migrated their workload to Amazon EMR. This session focuses on key motivations to move to the cloud. It details key architectural changes and the benefits of migrating Hadoop/Spark workloads to the cloud.
Data lakes are emerging as the most common architecture built in data-driven organizations today. A data lake enables you to store unstructured, semi-structured, or fully-structured raw data as well as processed data for different types of analytics-from dashboards and visualizations to big data processing, real-time analytics, and machine learning. Well-designed data lakes ensure that organizations get the most business value from their data assets. In this session, you learn about the common challenges and patterns for designing an effective data lake on the AWS Cloud, with wisdom distilled from various customer implementations. We walk through patterns to solve data lake challenges, like real-time ingestion, choosing a partitioning strategy, file compaction techniques, database replication to your data lake, handling mutable data, machine learning integration, security patterns, and more.
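One of the partitioning strategies alluded to above is Hive-style date partitioning of S3 keys, which lets engines such as Athena, EMR, and Redshift Spectrum prune partitions instead of scanning the whole lake. A minimal sketch, with made-up prefix, table, and file names:

```python
from datetime import datetime, timezone

def partitioned_key(prefix, table, event_time, filename):
    """Build an S3 object key using Hive-style date partitions
    (year=/month=/day=) so query engines can prune by date.
    All names here are illustrative, not a fixed convention."""
    t = event_time.astimezone(timezone.utc)
    return (f"{prefix}/{table}/"
            f"year={t.year:04d}/month={t.month:02d}/day={t.day:02d}/"
            f"{filename}")

partitioned_key("datalake/raw", "clickstream",
                datetime(2018, 11, 26, 9, 30, tzinfo=timezone.utc),
                "part-0001.parquet")
# → 'datalake/raw/clickstream/year=2018/month=11/day=26/part-0001.parquet'
```

Partitioning on the columns you filter by most (commonly event date) is usually the first optimization applied to a growing lake; over-partitioning on high-cardinality columns has the opposite effect.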
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. In this session, we dive deep into best practices for Kinesis Data Streams and Kinesis Data Firehose to get the most performance out of your data streaming applications. Comcast uses Amazon Kinesis Data Streams to build a Streaming Data Platform that centralizes data exchanges. It is foundational to the way our data analysts and data scientists derive real-time insights from the data. In the second part of this talk, Comcast zooms into how to properly scale a Kinesis stream. We first list the factors to consider to avoid scaling issues with standard Kinesis stream consumption, and then we see how the new fan-out feature changes these scaling considerations.
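As a rough illustration of the scaling factors discussed above, the minimum shard count for a stream can be estimated from the published per-shard limits: 1 MB/s and 1,000 records/s for writes, and 2 MB/s for reads shared across all standard consumers (the enhanced fan-out feature lifts the read side to 2 MB/s per consumer per shard, changing this arithmetic). A sketch of that sizing arithmetic, not an official sizing tool:

```python
import math

# Published per-shard limits for Kinesis Data Streams:
WRITE_MB_PER_SHARD = 1          # 1 MB/s ingest per shard
WRITE_RECORDS_PER_SHARD = 1000  # 1,000 records/s ingest per shard
READ_MB_PER_SHARD = 2           # 2 MB/s egress per shard, shared by all
                                # standard (polling) consumers

def min_shards(write_mb_s, write_records_s, read_mb_s):
    """Estimate the minimum shard count for an expected load: the
    binding constraint is whichever limit is hit first."""
    return max(
        math.ceil(write_mb_s / WRITE_MB_PER_SHARD),
        math.ceil(write_records_s / WRITE_RECORDS_PER_SHARD),
        math.ceil(read_mb_s / READ_MB_PER_SHARD),
        1,
    )

min_shards(write_mb_s=10, write_records_s=25000, read_mb_s=30)  # → 25
```

In the example, the record rate (25,000 records/s) is the binding constraint, so the stream needs at least 25 shards even though 10 shards would cover the write bandwidth.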
With Amazon Elasticsearch Service's simplicity comes a multitude of opportunities to use it as a back end for real-time application and infrastructure monitoring. With this wealth of opportunities comes sprawl - developers in your organization are deploying Amazon Elasticsearch Service for many different workloads and many different purposes. Should you centralize into one Amazon Elasticsearch Service domain? What are the tradeoffs in scale and cost? How do you control access to the data and dashboards? How do you structure your indexes - single tenant or multi-tenant? In this session, we'll explore whether, when, and how to centralize logging across your organization to minimize cost and maximize value, and learn how Autodesk has built a unified log analytics solution using Amazon Elasticsearch Service.
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. In this session, we live demo exciting new capabilities the team has been heads-down building. SendGrid, a leader in trusted email delivery, discusses how they used Athena to reinvent a popular feature of their platform.
As customers look to build data lakes on AWS, managing security, the data catalog, and data quality becomes a challenge. Once data is in Amazon S3, multiple processing engines can access it, whether through a SQL interface, programmatically, or via APIs. Customers require federated access to their data with strong controls around authentication, authorization, encryption, and audit. In this session, we explore the major AWS analytics services and platforms that customers can use to access data in the data lake, and we provide best practices on securing them.
In this session, we learn how Instacart reimagined its catalog data processing pipeline to utilize Snowflake, the data warehouse built for the cloud. Instacart grew from a hand-entered catalog to one that processes billions of data points daily. Keeping pace with customer demand prompted Instacart to take an entirely new approach to addressing the unique challenges of grocery catalog curation. Through Snowflake's unique architecture, which separates compute from storage, Instacart has increased their ability to quickly scale while improving the accuracy, traceability, and quality of their reporting. In turn, better information leads to offering more customized grocery catalog options that delight their customers. This session is brought to you by AWS partner, Snowflake Computing.
Artificial Intelligence (AI) is right here, right now-and it's changing our lives. The need for business optimization, combined with explosive growth in data and recent advances in applied statistics and cloud computing, have created a perfect storm of innovation. TIBCO brings real-time AI to business challenges with the TIBCO Connected Intelligence Cloud. In this session, we show real-time AI in action; utilizing Amazon SageMaker, TIBCO Connected Intelligence Cloud, and open source-with at-scale, in-database compute; visual composition and notebooks; Slack-style collaboration among users; and model lifecycle deployment via low-code tooling such as TIBCO Live Apps. We include case studies in equipment surveillance, dynamic pricing, risk management, route optimization, and customer engagement. This session is brought to you by AWS partner, TIBCO Software Inc.
What does 'Cloud First' really mean for your processes, customers, and the future of your business? In this session, learn from Hub International's CISO and Head of Architecture how the insurance company is transforming their business by migrating to the cloud for greater scale, cost savings, performance, and security. With a goal to go all-in on AWS by end of 2018, Hub International needed real-time visibility across its cloud and hybrid environments to mitigate risks and ensure seamless migrations-thereby keeping customers happy and their data safe. See how Hub International is harnessing machine data and predictive analytics to secure applications and infrastructure, control costs and improve capacity planning. This session is brought to you by AWS partner, Splunk.
Amazon Elasticsearch Service (Amazon ES) makes it easy to deploy and use Elasticsearch in the AWS Cloud to search your data and analyze your logs. In this session, you get key insights into Elasticsearch, including information on how you can optimize your expenditure, minimize your index sizes to lower costs, as well as best practices for keeping your data secure. Also hear from youth sports technology company SportsEngine, about their experience engineering a member-management product of over 260 million documents on top of Elasticsearch. Relive their harrowing journey through tens of thousands of shards, crushed clusters, mountains of pending tasks, and never-ending snapshots. Hear how they went from disaster to delight with Amazon ES.
The mountain of data generated by your deployment can be a valuable source of insight for your security practice IF you take advantage of some key tools on the AWS Cloud. In this session, learn how to build an analytics process that uses security tools, whether from AWS or the community, to create a continuous feedback loop to maintain and improve the security of your deployments. Additionally, learn when to use AWS security services and other tools in your deployment and how to manage the output, thus creating an analytics workflow alongside a feedback loop in order to streamline the entire process. The end result is an automated feedback loop aimed at making sure that your deployment is doing what you intend ... and only what you intend. This session is brought to you by AWS partner, Trend Micro.
The Internet of Things (IoT) is creating massive amounts of data, but users can't easily access that data and quickly make decisions from it. Data projects are too often funded and delivered without enough consideration for how users will access and consume that data. Domo connects the people, data, and systems to give users the data they need to do their jobs, anytime and from anywhere. In this session, learn how LifeConEx, DHL's temperature management specialist, uses Domo to get insights from their IoT data to its users, and see the impressive results they have achieved. This session is brought to you by AWS partner, Domo.
Amazon EMR is one of the largest Spark and Hadoop service providers in the world, enabling customers to run ETL, machine learning, real-time processing, data science, and low-latency SQL at petabyte scale. In this session, we introduce design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long- and short-lived clusters, using notebooks, and other architectural best practices. We discuss lowering cost with Auto Scaling and Spot Instances, and security best practices for encryption and fine-grained access control. We showcase key improvements made to the service in 2018. We cover improvements to the Amazon EMR API, best practices for using Spot Instances and Spot Instances with Auto Scaling, improvements to Amazon S3 performance on Amazon EMR, and security, authorization, and authentication. We couple each of these with a demo or customer use case to illustrate the benefits. If you are an existing Amazon EMR user, you walk away with a thorough understanding of improvements made in 2018, and how they benefit you. If you are a new Amazon EMR user, get an understanding of common use cases and how other customers are using Amazon EMR.
Learn about the latest and hottest features of Amazon Redshift. We'll deep dive into the architecture and inner workings of Amazon Redshift and discuss how the recent availability, performance, and manageability improvements we've made can significantly enhance your user experience. We'll also share a glimpse of what we are working on and our plans for the future. McDonald's will join us to share how they leverage a data lake powered by Redshift, Redshift Spectrum, and Athena to get quick insights.
Setting up and managing data lakes today involves a lot of complicated and time-consuming tasks. AWS Lake Formation is a new service (coming soon) that will make it easy to set up a secure data lake in days. You will be able to ingest, catalog, cleanse, transform, and secure your data. Explore how AWS Lake Formation will make it easier to combine analytic tools, like Amazon EMR, Redshift, Athena, SageMaker, and QuickSight, on data in your data lake.
Discover the power of running Apache Kafka on a fully managed AWS service. In this session, we describe how Amazon Managed Streaming for Kafka (Amazon MSK) runs Apache Kafka clusters for you, demo Amazon MSK and a migration, show you how to get started, and walk through other important details about the new service.
In this session, we take an in-depth look at how modern data warehousing blends and analyzes all your data, inside and outside your data warehouse, without moving the data. This helps you gain deeper insights in running your business. We also cover best practices on how to design optimal schemas, load data efficiently, and optimize your queries to deliver high throughput and performance.
Learn how Fox and Discovery modernized their media processing workflows to positively impact operations and business results. In this session, we examine each company's production architecture and learn how they utilize AWS services such as AWS Elemental Media Services, AWS Lambda, AWS Step Functions, Amazon API Gateway, and container toolsets. You also get insights into new business capabilities enabled by their AWS serverless architecture, including automation of content assembly and quality control as well as increased customer engagement with personalization and improved processing performance.
Keeping track of state and orchestrating the components of a distributed application is complex. AWS Step Functions makes the job simpler, faster, and more intuitive. In this session, learn how to leverage AWS Step Functions to design and run workflows for your serverless, containerized, and instance-based architectures. We explore practical applications of orchestration spanning different industries and workloads. For each, we walk through the architecture, lessons learned, and business outcomes. Expect to leave this session with a practical understanding of how to use orchestration to express your application's business logic more productively while improving its resilience.
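For a concrete sense of what the orchestration described above looks like, here is a minimal Amazon States Language definition chaining two Lambda tasks, with a retry policy on the first. The workflow, state names, account ID, and function ARNs are placeholders for illustration, not a pattern from the session itself:

```python
import json

# A minimal Amazon States Language (ASL) definition: two Lambda tasks in
# sequence, with exponential-backoff retries on transient task failures.
definition = {
    "Comment": "Order-processing workflow (illustrative)",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 2,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge",
            "End": True,
        },
    },
}

# The JSON string is what you would pass to CreateStateMachine.
asl_json = json.dumps(definition)
```

Keeping retry and sequencing logic in the state machine, rather than inside each Lambda function, is a large part of the resilience benefit the session describes.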
In the cloud, modern apps are decoupled into independent building blocks, called microservices, which are easier to develop, deploy, and maintain. Messaging is a central tool used to connect and coordinate these microservices. AWS offers multiple messaging services, which address a variety of use cases. In this session, learn how to choose the service that's best for your use case as we present the key technical features of each. We pay special attention to integrating messaging services with serverless technology. We cover Amazon Kinesis, Amazon SQS, and Amazon SNS in detail with discussion of other services as appropriate.
In this session, we discuss best practices for building serverless applications that handle high throughput and bursty data using Amazon SQS, Amazon SNS, and AWS Lambda, and we cover in depth new features such as message filtering and SNS/SQS as event sources for Lambda. Hear from our customers Enel and Letgo as they share their experiences and deployment strategies. Enel is a multinational energy company that is present in 34 countries across 5 continents, and serves nearly 71 million end-users. The company uses AWS as its platform for IoT and energy management. Letgo is one of the largest and fastest-growing apps to buy and sell locally, with over 100 million downloads and hundreds of millions of listings. Letgo uses AWS to process their online marketplace transactions.
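To make the message-filtering feature concrete, here is a simplified model of how an SNS filter policy decides whether a message is delivered to a subscription. This sketch covers exact string matching only; real filter policies also support numeric, prefix, and anything-but matching. The policy and attribute names are made up:

```python
def matches_filter_policy(policy, message_attributes):
    """Simplified model of SNS message filtering: a message is delivered
    only if, for every attribute named in the policy, the message
    carries that attribute with one of the listed values. An empty
    policy matches everything."""
    return all(
        message_attributes.get(attr) in allowed
        for attr, allowed in policy.items()
    )

policy = {"event_type": ["order_placed", "order_cancelled"]}
matches_filter_policy(policy, {"event_type": "order_placed"})  # → True
matches_filter_policy(policy, {"event_type": "user_signup"})   # → False
```

Filtering on the topic lets each subscriber receive only the messages it cares about, instead of every subscriber discarding irrelevant messages in its own code.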
In this session, learn how Autodesk and FinancialForce have developed integrations between Salesforce and AWS applications, analytics, data lakes, and machine learning. Also learn what's new from Salesforce and AWS to help you build new customer experiences.
Learn how you can build, train, and deploy machine learning workflows for Amazon SageMaker using AWS Step Functions. We show how to stitch together services, such as AWS Glue, with your Amazon SageMaker model training to build feature-rich machine learning applications, and how to build serverless ML workflows with less code. Cox Automotive also shares how it combined Amazon SageMaker and Step Functions to improve collaboration between data scientists and software engineers. We also share some new features to build and manage ML workflows even faster.
Get a jump on traffic surges with Predictive Auto Scaling. AWS Auto Scaling now responds more quickly by analyzing past traffic trends. The new predictive capability looks at your incoming load and forecasts it into the future. Not only can you see ahead of time when and how your resources will scale, your resources are made available ahead of when they are needed to enable faster, more responsive applications. Come learn how Genesys uses Predictive Scaling to scale the infrastructure used to run their popular contact center solution, PureCloud, worldwide.
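To illustrate the forecasting idea behind provisioning capacity ahead of demand, here is a deliberately simple least-squares trend extrapolation over an equally spaced load history. The real Predictive Scaling feature uses machine learning models trained on historical CloudWatch metrics; this sketch only conveys the "forecast, then scale ahead of the load" intuition:

```python
def linear_forecast(history, steps_ahead):
    """Fit a least-squares line to an equally spaced load history and
    extrapolate steps_ahead points past the last observation. A toy
    stand-in for a real demand forecaster."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# Load growing 20 units per interval; two intervals ahead:
linear_forecast([100, 120, 140, 160], steps_ahead=2)  # → 200.0
```

Given such a forecast, a scaling policy can launch instances early enough that capacity is already in service when the predicted load arrives, rather than reacting after a metric breaches a threshold.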
Aimed at solutions architects and technical managers, this session focuses on the practical ways our customers achieve cost-efficient architectures through service selection and configuration. We start by discussing the building block services. We cover the main trends, such as containers and serverless, and we explore some of the specific services and configurations customers have used. We also take you through real-life examples that can be implemented to minimize costs while driving innovation and business output. After you attend this session, you will understand what is possible on AWS, and you will know ways in which you can deploy new workloads or modify existing workloads for optimization.
Cloud computing provides a number of advantages, such as the ability to scale your web application or website on demand. If you have a new web application and want to use cloud computing, you might be asking yourself, "Where do I start?" Join us in this session for best practices on scaling your resources from one to millions of users. We show you how to best combine different AWS services, how to make smarter decisions for architecting your application, and how to scale your infrastructure in the cloud.
Do you need your applications to extend across multiple regions? Whether for disaster recovery, data sovereignty, data locality, or extremely high availability, many AWS customers choose to deploy services across regions. Join us as we explore how to design and succeed with active-active multi-region architectures. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour. Complete Title: AWS re:Invent 2018: [REPEAT 2] Architecture Patterns for Multi-Region Active-Active Applications (ARC209-R2)
As industries digitally transform their existing business models to fend off competitors or disrupt new markets, they find their IT to be a limiting factor. In this session, we cover the trends of disruptions and opportunities of digital transformation, and the evolution of IT monoliths to microservices and now cloud native services. We also explore dependency management, or lock in, through a 'choosing, using, and losing' mental model. Finally, we explore chaos architecture as an evolving method for exposing weaknesses before they become real problems.
The unique global cloud infrastructure offered by AWS helps customers build reliable, available, secure, scalable, and fault-tolerant applications. AWS has more experience than anyone else operating global cloud infrastructure that enables customers to run business-critical workloads in the public cloud. In this session, learn how AWS is continuously enhancing and expanding the AWS global infrastructure through more Regions and Availability Zones, custom hardware, a purpose-built global network backbone, and innovative energy management systems to deliver to our customers lower latency, greater reliability, greater scalability, and operational efficiencies.
Bajaj Finserv Direct Limited (BFDL) serves millions of customers with its comprehensive portfolio and innovative offerings in financing, general insurance, life and health insurance, and retirement and savings. BFDL envisioned building a cloud-native digital platform to offer an unmatched experience to its customers. In this session, hear from BFDL how they built a robust digital backbone on AWS with a scalable microservices architecture deployed using Docker containers. The session also focuses on how a scalable microservices-based architecture can be developed using various AWS services. This session is brought to you by AWS partner, Cognizant Technology Solutions US Corp.
As serverless architectures become more popular, customers need a framework of patterns to help them identify how to leverage AWS to deploy their workloads without managing servers or operating systems. This session describes reusable serverless patterns while considering costs. For each pattern, we provide operational and security best practices and discuss potential pitfalls and nuances. We also discuss the considerations for moving an existing server-based workload to a serverless architecture. This session can help you recognize candidates for serverless architectures in your own organizations and understand areas of potential savings and increased agility.
In this session, Intuit presents how they prepared TurboTax to take the production load, and how they gained the confidence to run their 2017 peak activity entirely on AWS. They discuss resiliency testing, game days, operational run books, working with AWS Support, and how each of these activities impacted their confidence in their reliability and availability.
As more customers adopt Amazon VPC architectures, the features and flexibility of the service are encountering the obstacles of evolving design requirements. In this session, we follow the evolution of a single regional VPC to a multi-VPC, multi-region design with diverse connectivity into on-premises systems and infrastructure. Along the way, we investigate creative customer solutions for scaling and securing outbound VPC traffic, securing private access to Amazon S3, managing multi-tenant VPCs, integrating existing customer networks through AWS Direct Connect, and building a full VPC mesh network across global regions. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
In this talk, we consider the unique challenges of the biosphere that recently opened in downtown Seattle. We address two of those challenges using modern deep learning techniques: computer-vision-based plant health monitoring, and microclimate anomaly detection using autoencoders on time-series data extracted from multiple sensors. Our focus is on architecting the inference pipelines for solving these problems at scale. Specifically, we highlight the inference steps and TensorRT optimizations for AWS Greengrass ML inference.
The emergence of serverless infrastructure and services represents a fundamental shift in how developers approach architecting applications. This is especially relevant in the world of SaaS, where systems must efficiently and cost-effectively respond to continually shifting multi-tenant loads and profiles. We'll conduct an end-to-end review of all the elements of a serverless SaaS architecture that leverages a combination of AWS Lambda, Fargate, and Aurora Serverless. We'll look at how serverless influences the core elements of your architecture, including tenant isolation, service decomposition, management and monitoring, deployment, and identity. Complete Title: AWS re:Invent 2018: [REPEAT] Architecting Next Generation Serverless SaaS Solutions on AWS (ARC324-R)
The 2018 FIFA World Cup was the biggest production in Fox Sports' 20-year history in terms of personnel, hours, and scale. With 64 matches broadcast, the production totaled more than the four previous World Cups combined. To deliver this, they deployed a revolutionary joint solution from AWS, Aspera, and a portfolio of APN partners that enabled the live delivery of broadcast video from Russia to Los Angeles for remote production. In this session, learn the architectural patterns of large-scale data and content movement to the cloud over long distances with low-latency requirements, the high-availability requirements around broadcast workloads, and tradeoffs to consider. We cover how Amazon S3, Amazon EC2, AWS Lambda, Amazon SQS, and other services processed over 1.9 PB of original content over the 30-day tournament.
Netflix built Zuul Push, a massively scalable push messaging service that handles millions of always-on, persistent connections to proactively push time-sensitive data, like personalized movie recommendations, from the AWS Cloud to devices. This helped reduce Netflix's notification latency and the Amazon EC2 footprint by eliminating wasteful polling requests. It also powers Net
At AWS, we obsess over operational excellence. We have a deep understanding of system availability, informed by over a decade of experience operating the cloud and our roots of operating Amazon.com for nearly a quarter-century. One thing we've learned is that failures come in many forms, some expected, and some unexpected. It's vital to build from the ground up and embrace failure. A core consideration is how to minimize the "blast radius" of any failures. In this talk, we discuss a range of blast radius reduction design techniques that we employ, including cell-based architecture, shuffle-sharding, availability zone independence, and region isolation. We also discuss how blast radius reduction infuses our operational practices.
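Shuffle sharding, one of the blast-radius-reduction techniques named above, assigns each customer a small, deterministic subset of shards so that two customers rarely share their full shard set; a workload that poisons its shards takes down only a small fraction of any other customer's capacity. A minimal sketch under illustrative assumptions (the shard counts and hashing scheme here are made up, not AWS's implementation):

```python
import hashlib

def shuffle_shard(customer_id, total_shards=8, shard_size=2):
    """Deterministically pick shard_size distinct shards out of
    total_shards for a customer. With 8 shards taken 2 at a time there
    are 28 possible shard sets, so a full overlap between any two
    customers is far less likely than with plain one-shard hashing."""
    digest = hashlib.sha256(customer_id.encode()).digest()
    shards = list(range(total_shards))
    chosen = []
    for i in range(shard_size):
        # Use successive digest bytes to pick without replacement.
        idx = digest[i] % len(shards)
        chosen.append(shards.pop(idx))
    return sorted(chosen)

shuffle_shard("customer-a")  # always the same 2 shards for this customer
```

Because the assignment is a pure function of the customer ID, every router in the fleet agrees on each customer's shard set without coordination.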
Airbnb is going through tremendous growth internationally, evolving from a home sharing company to a global travel community with many product offerings. The growth driven by the business, increase in traffic, and aggressive hiring created a new challenge for the Production Infrastructure Team. The team has grown from a small team of 10 to a production platform organization with 100 engineers that builds foundational services that support homes, experiences, luxury, and China. We shifted our priority and focus to move away from putting out fires to building a platform that can grow with the company. In this session, we chronicle Airbnb's architectural evolution that aligns with organizational growth strategy, and review how we overcame different architectural challenges leveraging AWS technologies.
SaaS presents developers with a unique blend of architectural challenges. While the concepts of multi-tenancy are straightforward, the reality of making all the moving parts work together can be daunting. In this session, we move beyond the conceptual bits of SaaS and look under the hood of a SaaS application. The goal is to examine the fundamentals of identity, data partitioning, and tenant isolation through the lens of a working solution, and to highlight the challenges and strategies associated with building a next-generation SaaS application on AWS. We look at the full lifecycle of registering new tenants, applying security policies to prevent cross-tenant access, and leveraging tenant profiles to effectively distribute and partition tenant data, connecting many of the conceptual dots of a SaaS implementation and highlighting the tradeoffs and considerations that will shape your approach to SaaS architecture.
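The tenant-isolation idea described above can be sketched in a few lines. This is a toy, hypothetical store (all names are made up, not from the session): every item is keyed by tenant, and reads are refused unless the caller's tenant context matches the partition being accessed.

```python
class TenantScopedStore:
    """Toy illustration of pooled multi-tenant data partitioning:
    items are keyed by (tenant_id, item_id), so one tenant can never
    read another tenant's data by guessing an item ID."""

    def __init__(self):
        self._items = {}  # (tenant_id, item_id) -> value

    def put(self, tenant_id, item_id, value):
        self._items[(tenant_id, item_id)] = value

    def get(self, tenant_id, item_id):
        key = (tenant_id, item_id)
        if key not in self._items:
            # Same error for "missing" and "wrong tenant" avoids
            # leaking the existence of other tenants' items.
            raise KeyError("not found or belongs to another tenant")
        return self._items[key]

store = TenantScopedStore()
store.put("tenant-1", "order-9", {"total": 42})
```

In a real system the tenant ID would come from a verified identity token (for example, a JWT claim), not from a caller-supplied argument.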
In this talk, we review the challenges of adding a virtual character to AR/VR applications and highlight how Amazon Sumerian solves these challenges. We discuss leading use cases and demonstrate how customers are creating dynamic, interactive virtual concierges using Sumerian hosts integrated with various AWS technologies, such as Amazon Polly, Amazon Lex, Amazon Rekognition, and AWS Lambda.
In this session, Amazon Sumerian leaders provide an overview of the evolution of the augmented reality/virtual reality (AR/VR) industry. They discuss what has changed since Sumerian was announced at re:Invent 2017. They also talk about key market trends, and they highlight areas with the highest adoption and potential. This is an immersive session, with examples and demonstrations of immersive experiences created using Amazon Sumerian.
Anyone can create and publish augmented reality (AR), virtual reality (VR), and 3D applications quickly and easily with Amazon Sumerian. In this session, learn how to use Sumerian to build a scene that can be published and viewed on laptops, mobile phones, VR headsets, and digital signage. Take a tour of the Sumerian interface, and learn how to build a scene, add assets and hosts, and add behaviors to create dynamically animated objects and characters in an AR/VR experience. Also see how Sumerian integrates with AWS services such as Amazon Polly, Amazon Lex, AWS Lambda, Amazon S3, and Amazon DynamoDB.
In this session, learn how to provide your users with instant access to critical desktop applications through a browser on any computer. We present an overview of Amazon AppStream 2.0, a fully managed application streaming service, and walk through common application streaming use cases for enterprises, ISVs, 3D design and engineering firms, and educational institutions. We close with a simple walk through to help you get started.
Are you constantly challenged to find the right tools to meet your company's collaboration needs while keeping costs down? You might have decided to use chat and meeting solutions bundled with an enterprise collaboration suite that doesn't meet all your needs. Even worse, you might be overpaying for features that you don't even use. Join us in this session to learn how Connexity was able to roll out Amazon Chime in just a few steps. Costs were not only reduced, but using Chime's pay-as-you-go pricing resulted in collaboration costs dropping by 25% and administrative overhead dropping to zero.
Alexa transformed the smart home market segment and is now transforming how we interact with applications and technology at work. Alexa for Business is a service that helps you manage Alexa devices, users, and skills and helps you voice-enable your workplace. In this session, Collin Davis, GM, Alexa for Business, talks about the new features in Alexa for Business and how customers are adopting Alexa VUI in their workplace.
Tens of thousands of customers are moving their desktops and application infrastructure to the cloud to help IT be more efficient and help their employees be more productive. Join us to hear from AWS leaders and customers alike to learn about end user computing services from AWS. Get insights into how you can use these services. We discuss the most recent announcements from Amazon WorkSpaces and Amazon AppStream, and we explain how you can use AWS capabilities to deliver end user computing solutions for your organization.
Learn how Volkswagen Financial Services is reaching its ambition to have core financial products online, worldwide by 2022. In this session, Daniel Matthies, Head of the Digital Unit at Volkswagen Financial Services, discusses how to leverage AWS for development and create a compliant SaaS office solution. Discover where the areas of freedom are when working within local markets. Learn best practices for engaging subsidiary companies and a cross-location team to ensure everyone is truly enabled, while tackling the doubts some colleagues may have previously had about stepping into the world of cloud technology and solutions. This session is brought to you by AWS partner, Cloudreach Inc.
Amazon WorkDocs is a secure, fully managed content creation, file collaboration, and management service with an extensible SDK that runs on AWS. Amazon WorkDocs Drive is an economical and secure alternative to your on-premises network shares and complicated VPN setups that brings all your WorkDocs content into your Windows Explorer or Mac Finder. In this session, learn how you can have billions of files available on your desktop for collaboration and sharing with Amazon WorkDocs Drive, all without taking any hard drive space.
Alexa for Business brings a conversational UI to help you simplify your meeting room experience. In this session, learn how you can bring Alexa for Business to your meeting room and integrate it with your existing meeting room systems. In addition, the Amazon IT team discusses best practices, and they describe how they piloted and deployed Alexa for Business in over 800 conference rooms at Amazon. Complete Title: AWS re:Invent 2018: [REPEAT 1] Build the Next-Gen Meeting Room Experience Using Alexa for Business (BAP303-R1)
Learn why more customers than ever are leaving the complexity and costs of virtual desktop infrastructure (VDI) for cloud desktop solutions like Amazon WorkSpaces. In this session, we discuss how you can use Amazon WorkSpaces to give your employees a responsive, secure, and delightful desktop experience while simplifying your own processes. We demonstrate the flexibility of Amazon WorkSpaces and show how easy it is to get started. We also cover more advanced topics, including using Microsoft Active Directory for end-user management and authentication, and using Amazon WorkSpaces to implement a bring-your-own-device policy.
Hear from Amazon Connect customers on how they seamlessly integrated their business applications with Amazon Connect using a combination of development tools and available AWS services. axialHealthcare demonstrates their HIPAA-compliant phone platform, built around Amazon Connect, which enables its clinical care team to increase provider and patient outreach efficiency, enhance client reporting, and improve provider and patient satisfaction while ensuring the safety of protected health information. Rackspace shares how they used the Amazon Connect Streams API to build a customized agent experience through a native desktop application to accommodate diverse agent configurations and multiple platforms.
With Amazon Connect, a cloud-based contact center service, businesses can create dynamic contact flows that provide personalized caller experiences by taking history and past context into consideration to anticipate callers' needs. Join us to learn how customers are executing successful strategies using Amazon Lex to add NLU chatbots to their Amazon Connect customer experience workflows. Amazon Lex is an AI service that enables you to create intelligent conversational chatbots, turning your contact flows into natural conversations using the same technology that powers Amazon Alexa. Learn how to automate repeatable, routine tasks such as password resets, order status checks, and balance inquiries without the need for an agent.
IT organizations today need to support a modern, flexible, global workforce and ensure that their users can be productive anywhere. Moving desktops and applications to the AWS Cloud offers improved security, scale, and performance with cloud economics. In this session, we provide an overview of Amazon WorkSpaces and Amazon AppStream 2.0, and we discuss the use cases for each. Then, we dive deep into best practices for implementing Amazon WorkSpaces and AppStream 2.0, including how to integrate with your existing identity, security, networking, and storage solutions. Complete Title: AWS re:Invent 2018: Move Your Desktops & Applications to AWS with Amazon WorkSpaces & AppStream 2.0 (BAP323)
In this session, you will learn how Intuit and Hilton are migrating their large-scale contact centers to Amazon Connect, a self-service, cloud-based contact center offering based on the same technology used by over 70,000 Amazon Customer Service Associates. We begin the session with an overview of Amazon Connect, then hear from Intuit and Hilton about their experiences and best practices that will help prepare any large-scale business planning a migration to Amazon Connect.
Amazon EC2 Fleet makes it easier than ever to grow your compute capacity and enable new types of cloud computing applications while maintaining the lowest TCO by blending EC2 Spot, On-Demand, and RI purchase models. In this session, learn how to use the power of EC2 Fleet with AWS services such as Auto Scaling, ECS, EKS, EMR, Batch, Thinkbox Deadline, and OpsWorks to programmatically optimize costs while maintaining high performance and availability, all without breaking a sweat. We dive deep into cost optimization patterns for workloads like containers, web services, CI/CD, batch, big data, rendering, and more. Complete Title: AWS re:Invent 2018: [REPEAT 1] Better, Faster, Cheaper - Cost Optimizing Compute with Amazon EC2 Fleet #savinglikeaboss (CMP201-R1)
What if you could scale your rendering pipeline to near-limitless capacity? What would that mean for your studio? Learn how Amazon EC2 Spot and AWS Thinkbox Deadline can help scale your VFX and CG rendering pipeline, creating faster feedback cycles and keeping more artist time focused on creating content, and how you can optimize your compute costs along the way. This session focuses on rendering workloads that combine Deadline (an AWS rendering pipeline management tool) and Spot for scalable, cost-effective computing. Find out how real customers working on Hollywood productions are integrating their pipelines with AWS to realize the elasticity and scale provided by Amazon EC2, as well as how they intend to leverage AWS in the future to scale their superpowers.
In this session, learn about Amazon Linux 2, the next generation Amazon Linux operating system that now comes with five years of support. See what's new with Amazon Linux 2, how it's different from other distributions of Linux, and understand why it's rapidly becoming the go-to operating system for AWS customers. Complete Title: AWS re:Invent 2018: [REPEAT 1] Amazon Linux 2: A Stable, Secure, High-Performance Linux Environment (CMP203-R1)
Although the AWS Cloud provides a new level of durability and resiliency, no workload is immune to disasters, whether caused by accident or malicious intent. Even in the cloud, you have to ensure continuity. Traditional disaster recovery (DR) solutions are not optimized for the cloud and often result in higher costs, increased complexity, and operational challenges. To maintain compliance and business continuity service-level agreements, AWS DR planning requires a completely different approach to deal with cross-account, cross-region workload testing and failover. In this session, learn how you can set up an effective DR plan for your AWS environments. This session is brought to you by AWS partner, Druva.
Learn how DXC helped a Fortune 1000 client drive digital transformation with AWS. An APN Premier Consulting Partner, DXC illustrates through customer testimonials, use cases, and architectures how enterprise clients have transformed their business with the AWS Cloud. This session is brought to you by AWS partner, DXC Technology.
Matt Garman, VP of AWS Compute Services, introduces the latest innovations in the compute space. In this keynote address, we announce new compute capabilities, and we share insights into what makes the AWS compute business unique. We also announce new capabilities for Amazon EC2 instances, EC2 networking, EC2 Spot Instances, Amazon Lightsail, Containers, and Serverless. Matt is joined by executives from our customers and partners who share valuable success stories of how Amazon EC2 has helped their journey to digital transformation.
Amazon EC2 provides resizable compute capacity in the cloud and makes web-scale computing easier for customers. It offers a wide variety of compute instances well suited to every imaginable use case, from static websites to high-performance supercomputing on-demand, all available through highly flexible pricing options. This session covers the latest Amazon EC2 features and capabilities, including new instance families available in Amazon EC2, the differences among their hardware types and capabilities, and their optimal use cases. We also cover some best practices on how you can optimize your expenditure on Amazon EC2 to make the most of your EC2 instances, saving time and money.
In this session, you will learn how to utilize low-cost T2 and T3 instances while still having access to sustained high performance when needed. Designed for applications with variable CPU usage that experience occasional spikes in demand, T instances enable customers' applications to burst seamlessly to meet temporary traffic peaks and then scale back down to operate at typical traffic levels. The next-generation T3 instances provide up to 30% better price performance over T2 instances and include unlimited bursting by default, making them a cost-effective choice for general-purpose computing.
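The burst model behind T instances can be made concrete with back-of-the-envelope credit arithmetic. The sketch below uses t3.micro-like figures as published by AWS at the time (2 vCPUs, 12 CPU credits earned per hour, a maximum balance of 288 credits, one credit equal to one vCPU running at 100% for one minute); treat the numbers as illustrative rather than authoritative.

```python
def burst_minutes(balance_credits, vcpus, earn_rate_per_hour, burst_util=1.0):
    """How many minutes can an instance sustain a given burst level?

    While bursting, credits are spent at (vcpus * burst_util) per
    minute but are still earned at earn_rate_per_hour / 60 per minute.
    """
    spend_per_min = vcpus * burst_util
    earn_per_min = earn_rate_per_hour / 60
    net_drain = spend_per_min - earn_per_min
    if net_drain <= 0:
        return float("inf")  # at or below baseline: sustainable forever
    return balance_credits / net_drain

# t3.micro-like figures: full balance of 288 credits, both vCPUs at 100%.
minutes = burst_minutes(288, 2, 12)
# Credits drain at 2/min while 0.2/min are earned, so roughly
# 288 / 1.8 = 160 minutes of full-throttle burst.
```

The same function shows why low steady utilization is sustainable indefinitely: at or below the baseline, earn rate matches or exceeds spend.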
With Amazon EBS, you can easily create a simple point-in-time backup for your Amazon EC2 instances. In this deep dive session, you learn how to use Amazon EBS snapshots to back up your Amazon EC2 environment. We review how snapshots work, and we share best practices for tagging snapshots, cost management, and snapshot automation.
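A useful mental model for EBS snapshots is that they are incremental: each snapshot stores only the blocks changed since its parent, yet any snapshot can restore the full volume. The following is a toy sketch of that idea, not the EBS implementation.

```python
def take_snapshot(volume_blocks, parent_snapshot=None):
    """Store only the blocks that changed since the parent snapshot.

    volume_blocks: dict mapping block index -> bytes.
    Returns a snapshot dict holding a reference to its parent and
    the delta of changed blocks.
    """
    parent_view = restore(parent_snapshot) if parent_snapshot else {}
    delta = {i: data for i, data in volume_blocks.items()
             if parent_view.get(i) != data}
    return {"parent": parent_snapshot, "delta": delta}

def restore(snapshot):
    """Rebuild the full volume by replaying the snapshot chain."""
    if snapshot is None:
        return {}
    blocks = restore(snapshot["parent"])
    blocks.update(snapshot["delta"])
    return blocks

vol = {0: b"boot", 1: b"data-v1"}
snap1 = take_snapshot(vol)
vol[1] = b"data-v2"               # one block changes
snap2 = take_snapshot(vol, snap1)
# snap2 stores only the changed block, yet restores the full volume.
```

This is why frequent snapshots of a slowly changing volume stay cheap: unchanged blocks are stored once and shared down the chain.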
In today's environment of increasingly large data sets and resource-intensive algorithmic processing, the challenge of HPC includes keeping pace with the demands of researchers, scientists, engineers, and creative professionals so they can rapidly produce high-value answers to complex questions. AWS HPC solutions deliver significant leaps in compute performance, memory capacity and bandwidth, and I/O scalability. The highly customizable computing platform and robust partner ecosystem enable your staff to imagine new approaches so they can fail forward faster, delivering more answers to more questions without the need for costly on-premises upgrades. This session provides an overview of HPC capabilities on AWS, describes the newest generations of accelerated computing instances, and highlights customer and partner use cases across industries. Attendees also learn how the steadily increasing interest in running HPC workloads on the cloud can be combined with advances in AI/ML to catalyze sustained innovation in these industries. Complete Title: AWS re:Invent 2018: [REPEAT 1] High Performance Computing on AWS: Driving Innovation without Infrastructure Constraints (CMP302-R1)
The Nitro system is a rich collection of building block technologies that include hardware offload and security components built on AWS. It is powering the next generation of EC2 instances with an ever-broadening selection of compute, storage, memory, and networking options. In this session, we deep dive into the Nitro system, explore its design and architecture, discover how it enables innovative new EC2 instances, and understand how it has made the seemingly impossible, possible.
Dive deep into VMware Cloud on AWS, an integrated cloud offering jointly developed by AWS and VMware. It delivers a highly scalable, secure, and innovative service that enables organizations to seamlessly migrate and extend their on-premises VMware vSphere-based environments to the AWS Cloud. In this session, we cover the cloud challenges addressed by VMware Cloud on AWS, its architecture, its integration with AWS services, its unique benefits, and some popular use cases. We also explore consumption options, pricing, value-add services, total cost of ownership versus other alternatives, and customer success stories. Finally, we show some demonstrations of the service in action.
Amazon EC2 Spot Instances enable you to use spare EC2 computing capacity, often at up to 90% less than On-Demand prices. In this session, learn how to effectively harness Spot Instances for production workloads. We explore application requirements for using Spot Instances, best practices learned from thousands of customers, and the services that make them easy to use. Finally, we run through practical examples of how to use Spot for the most common production workloads, the common pitfalls customers run into, and how to avoid them.
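One of the core Spot best practices the session covers is diversifying across capacity pools (instance type × Availability Zone). Here is a hypothetical sketch of a lowest-price allocation strategy across pools; the pool names and prices are made up for illustration.

```python
def allocate(target_capacity, pools):
    """Fill target capacity from the cheapest Spot pools first.

    pools: list of dicts with 'name', 'price', and 'available' units.
    Returns a list of (pool name, units) allocations. Spreading across
    several pools limits the impact of any single pool's interruption.
    """
    allocations = []
    remaining = target_capacity
    for pool in sorted(pools, key=lambda p: p["price"]):
        if remaining == 0:
            break
        units = min(remaining, pool["available"])
        if units:
            allocations.append((pool["name"], units))
            remaining -= units
    if remaining:
        raise RuntimeError("insufficient capacity across pools")
    return allocations

pools = [
    {"name": "m5.large/us-east-1a", "price": 0.035, "available": 6},
    {"name": "m5.large/us-east-1b", "price": 0.031, "available": 4},
    {"name": "c5.large/us-east-1a", "price": 0.033, "available": 10},
]
plan = allocate(12, pools)
```

EC2 Fleet and Spot Fleet offer this kind of strategy (along with capacity-optimized variants) as a managed feature, so in practice you configure it rather than implement it.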
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances. Complete Title: AWS re:Invent 2018: [REPEAT 1] Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (CMP307-R1)
Many customers are using Amazon EC2 instances to run applications with high performance networking requirements. In this session, we provide an overview of Amazon EC2 network performance features, including enhanced networking, ENA, and placement groups, and discuss how we are innovating on behalf of our customers to improve networking performance in a scalable and cost-efficient manner. We share best practices and performance tips for getting the best networking performance out of your Amazon EC2 instances.
EC2 High Memory instances offer 6 TB, 9 TB, and 12 TB of memory in a single instance. These instances are purpose-built to run large in-memory databases, including production deployments of the SAP HANA in-memory database, in the cloud. Join this session for a detailed look into these high-memory instances, and learn how you can use these EC2 instances in your Amazon VPC together with Amazon EBS to run mission-critical SAP HANA workloads to realize greater speed and agility. Hear how Whirlpool was able to leverage the agility and flexibility of this platform to move quicker with its own SAP workload needs. Complete Title: AWS re:Invent 2018: Scale Your SAP HANA In-Memory Database on Amazon EC2 High Memory Instances with up to 12 TB of Memory (CMP309)
Application portability has long been a goal that operators aspire to. The deployment simplicity of Kubernetes has made this goal attainable, especially when using a Kubernetes-as-a-Service offering like VMware Kubernetes Engine (VKE), built natively on AWS. In this talk, using an application running in the data center, we show how easy it is to implement a hybrid cloud strategy and deploy the same application onto a VKE-managed Kubernetes cluster using Helm. We share best practices on how to implement end-to-end network configuration automation and application monitoring with open-source tooling like Prometheus and Grafana. This session is brought to you by AWS partner, VMware, Inc.
Cloud services built on compute-optimized EC2 instances can serve as your next-generation HPC platform. Learn how to utilize the Rescale platform on AWS to meet the ever-increasing demands on compute resources while avoiding costly capex investments. Custom Intel Xeon processors enable you to meet your HPC needs by taking advantage of the newest technologies in the cloud. This session is brought to you by AWS partner, Intel.
Netflix's container management platform, Titus, powers critical aspects of the Netflix business, including video streaming, recommendations, machine learning, big data, content encoding, studio technology, internal engineering tools, and other Netflix workloads. Titus offers a convenient model for managing compute resources, enables developers to maintain just their application artifacts, and provides a consistent developer experience from a developer's laptop to production by leveraging Netflix container-focused engineering tools.
Amazon EC2 Auto Scaling removes the complexity of capacity planning to help customers improve application availability and reduce costs. In this session, we will deep dive on how EC2 Auto Scaling works to simplify health checking, security patching, continuous deployments, and automatic scaling with changing load. Netflix is spending over $8 billion on programming this year, with shows like Lost In Space, Altered Carbon and Money Heist, and plenty more in the future. They will share how Auto Scaling allows their infrastructure to automatically adapt to changing traffic patterns in order to keep their audience entertained and their costs on target.
Amazon EC2 offers a comprehensive set of instances targeted at a variety of customer workloads optimized across compute, storage, memory, network, as well as accelerators, such as GPUs. For the first time, Amazon EC2 is introducing instances that are powered by CPUs custom built by Amazon on the Arm architecture. In this session, we will deep dive into Amazon EC2 A1 instances and learn about the key capabilities, target usages, benefits, and the overall ecosystem.
AWS License Manager is a new service that makes it easy to bring your existing licenses to the AWS Cloud and reduce licensing costs. This service offers a one-stop solution for managing licenses from a variety of software vendors such as Microsoft, Oracle, IBM, SAP, and others, and helps you benefit from your existing investments in enterprise agreements. In this session, we take you through a deep dive of the service capabilities and how to use them. We cover topics such as how to manage your operating system and database licenses using this service, and how to track and report usage in the cloud, on premises, and across your organizational accounts.
Kubernetes offers a powerful abstraction layer for managing containerized infrastructure. Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to run Kubernetes on AWS without having to manage master nodes or etcd. In this session, we cover what you need to know to get your application up and running with Kubernetes on AWS. We show how Amazon EKS makes deploying Kubernetes on AWS simple and scalable, including networking, security, monitoring, and logging.
Making AWS services accessible from within OpenShift is seamless. From a single platform, operations teams can administer AWS services, and developers can easily find and consume those services within their applications in a truly hybrid environment. In this session, we dive deep into deploying OpenShift on AWS and deploying the AWS Service Broker. We also share some very interesting use cases. Because this is a deep-dive session, we recommend you have an understanding of containers and native AWS services. This session is brought to you by AWS partner, Red Hat, Inc.
In modern, microservices-based applications, it's critical to have end-to-end observability of each microservice and the communications between them in order to quickly identify and debug issues. In this session, we cover the techniques and tools to achieve consistent, full-application observability, including monitoring, tracing, logging, and service mesh.
Kubernetes is taking off and being rapidly adopted both on-premises and in the AWS Cloud. Today, enterprises are struggling to build, deploy, and manage production-ready environments at scale. The Cisco Hybrid Solution for Kubernetes on AWS makes it easy for customers to run production-grade Kubernetes on-premises. This is achieved by configuring on-premises Kubernetes environments to be consistent with Amazon Elastic Container Service for Kubernetes (Amazon EKS) and by combining Cisco's networking, security, management, and monitoring software with the world-class cloud services of AWS. This enables customers to focus on building and using applications instead of being constrained by where they run.
Microservices are minimal function services that are deployed separately, but can interact together to function as a broader application. Microservices can be built, changed, and deployed quickly with a relatively small impact, empowering developers to speed up the rate of innovation. In this session, we show how containers help enable microservices-based application architectures, discuss best practices for building new microservices, and cover the AWS services that allow you to build performant microservices applications.
Containers make it easy to build and deploy applications by abstracting away the underlying operating system. But how do you build secure and compliant containerized applications in a distributed environment, and without direct access to the operating system your code is running on? In this session, hear how Amazon Elastic Container Service for Kubernetes (Amazon EKS) is integrated into a large-scale regulated enterprise in the areas of network, security, CI/CD, and monitoring to cater to the needs of various business units. We cover the basics in each of these areas in Amazon EKS, and we hear from Fidelity on how it is driving its cloud strategy with Amazon EKS in the heavily regulated finance sector. We also share best practices and common architectures for building containerized applications in highly regulated industries. Complete Title: AWS re:Invent 2018: [REPEAT 1] Building PaaS with Amazon EKS for the Large-Scale, Highly Regulated Enterprise (CON309-R1)
You may have heard of the buzzwords 'chaos engineering' and 'containers.' But what do they have to do with each other? In this session, we introduce chaos engineering and share a live demo of how to practice chaos engineering principles on AWS. We walk through chaos engineering practices, tools, and success metrics you can use to inject failures in order to make your systems more reliable.
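The failure-injection idea at the heart of chaos engineering can be demonstrated in miniature. The sketch below is a hypothetical, minimal primitive (not one of the tools the session demos): a decorator that makes a call fail with a configurable probability, so you can verify your retries and fallbacks actually work.

```python
import random

def inject_failure(rate, exc=ConnectionError, rng=random.random):
    """Decorator that raises `exc` with probability `rate` before the
    wrapped call runs: a minimal failure-injection primitive in the
    spirit of chaos engineering."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if rng() < rate:
                raise exc("injected fault")
            return fn(*args, **kwargs)
        return inner
    return wrap

@inject_failure(rate=1.0)   # always fail, to make the demo deterministic
def flaky_fetch():
    return "ok"

@inject_failure(rate=0.0)   # never fail
def steady_fetch():
    return "ok"
```

Real chaos experiments inject failures at the infrastructure level (killed containers, severed network links) against a defined steady-state metric, but the principle is the same: deliberately trigger the failure and observe whether the system stays healthy.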
AWS Fargate makes it easy to run containers by removing the need to provision, scale, or manage servers. In this session, learn the rationale behind some of the design decisions by the Fargate team and how that should influence your application design and best practices for building on Fargate. In addition, Turner Broadcasting System (TBS) dives deep into how it migrated to Fargate, the decisions that helped it along the way, and the tools it created in the process.
Ever wondered how you would get visibility into your application when you go serverless? In this session, we dive deep into various visibility aspects of your serverless applications on AWS Fargate. We cover best practices around logging, alerting, metric collection, and monitoring the health of your containers. We also explore several ways to troubleshoot container startup issues and application errors. Catalytic then shows how they're using Fargate to perform parallelized bioinformatics workflows and how they gain better visibility into their applications running on Fargate.
In this session, we detail how Thomson Reuters hosted its critical enterprise .NET Framework application on Amazon ECS using Windows containers. We also dive into the company's decision-making process in choosing the right hosting platform, technology, and so on. We describe the unique custom solution Thomson Reuters developed using AWS CodePipeline, AWS CodeBuild, and Amazon Elastic Container Registry (Amazon ECR) that helped it create an end-to-end CI/CD pipeline for its environment. Complete Title: AWS re:Invent 2018: [REPEAT] Thomson Reuters Shows How It Hosted a .NET App on Amazon ECS Using Windows Containers (CON314-R)
KPMG has built a customer due diligence solution for a high-profile banking client on AWS. The solution is made up of a number of microservices deployed to containers using AWS Fargate. This presentation dives into the details of the solution's architecture, how the infrastructure and applications are deployed using third-party tools such as HashiCorp's Terraform and Jenkins, and best practices for running containers in production workloads. It covers the AWS resources used in the solution, including DynamoDB, ECS, Fargate, and S3, along with CI/CD and automation, with a focus on security to meet banking regulatory requirements. We look at how KPMG configured canary deployments to ECS Fargate, how secrets and encryption are managed, and how service discovery between the microservices is handled using ECS Service Discovery and Route 53.
How do you ensure that a containerized system can handle the needs of your application? Designing and testing for performance is a critical aspect of operating containerized architectures at scale. In this session, we cover best practices for designing performant containerized applications on AWS using Kubernetes. We also show you how State Street deployed a high-performance database at scale using Amazon Elastic Container Service for Kubernetes (Amazon EKS). Complete Title: AWS re:Invent 2018: [REPEAT 1] Running a High-Performance Kubernetes Cluster with Amazon EKS (CON318-R1)
This session focuses on how leveraging Fargate and its serverless approach to deploying and managing containers can increase operational efficiency and reduce the time needed to ramp up your operations to run production containerized workloads. Datree shares their journey adopting containers and the steps they were able to accelerate or avoid by using Fargate, and gives a demo.
Applications built on a microservices-based architecture and packaged as containers bring several benefits to your organization. In this session, Duolingo, a popular language-learning platform and an Amazon ECS customer, describes its journey from a monolith to a microservices architecture. We highlight the hurdles you may encounter, discuss how to plan your migration to microservices, and explain how you can use Amazon ECS to manage this journey.
In this talk, we cover the configuration and networking details needed to run a production-ready Kubernetes cluster with Amazon Elastic Container Service for Kubernetes (Amazon EKS). We walk through the new features and updates for Amazon EKS in 2018.
Join Jess Frazelle, from GitHub, and Clare Liguori and Abby Fuller, from AWS, for a container power hour to kick off your re:Invent. In this session, learn how to use Git and GitHub to run your containers and build, test, and deploy processes. GitOps, GitHub Actions, and AWS Fargate, oh my! This session features a demo from Jess on using the new GitHub Actions to deploy to Fargate.
In this session, we introduce AWS Cloud Map, a new service that lets you build the map of your cloud. It allows you to define friendly names for any resource, such as S3 buckets, DynamoDB tables, SQS queues, or custom cloud services built on EC2, ECS, EKS, or Lambda. Your applications can then discover resource location, credentials, and metadata by friendly name using the AWS SDK and authenticated API queries. You can further filter resources discovered by custom attributes, such as deployment stage or version. AWS Cloud Map is a highly available service with rapid configuration change propagation.
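The discovery model described, friendly names plus custom-attribute filtering, can be sketched in a few lines. This is a toy in-memory registry for illustration only, not the AWS Cloud Map API; all names and endpoints are invented.

```python
class ServiceRegistry:
    """Toy service registry: resources register under friendly names
    with custom attributes, and discovery filters on those attributes."""

    def __init__(self):
        self._resources = {}  # name -> list of {"endpoint", "attributes"}

    def register(self, name, endpoint, **attributes):
        self._resources.setdefault(name, []).append(
            {"endpoint": endpoint, "attributes": attributes})

    def discover(self, name, **filters):
        """Return endpoints for `name` whose attributes match all filters."""
        return [r["endpoint"] for r in self._resources.get(name, [])
                if all(r["attributes"].get(k) == v
                       for k, v in filters.items())]

registry = ServiceRegistry()
registry.register("payments", "10.0.0.5:8080", stage="prod", version="1.2")
registry.register("payments", "10.0.1.9:8080", stage="beta", version="2.0")
prod_endpoints = registry.discover("payments", stage="prod")
```

Attribute filtering is what makes a single friendly name serve multiple deployment stages or versions, which is how a caller can ask for, say, only the "prod" instances of a service.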
AWS App Mesh is a service mesh that makes it easy to monitor and control communications for containerized microservices running on AWS. Join us to learn about how AWS can give you end-to-end visibility, and help manage traffic routing to ensure high availability for your services. We will cover the benefits of service mesh, capabilities provided by AWS App Mesh and how you can use AWS App Mesh with AWS, partner, and community tools.
This is a practical, demo-driven session where you will learn about best practices for protecting applications on AWS. We give an overview of the threats on AWS, discuss why perimeter defense helps with these threats, and cover key techniques that use services such as Amazon CloudFront, Amazon Route 53, and AWS WAF to protect your web applications. Lastly, you will learn best practices for protecting different types of applications: web/APIs, TCP-based, or gaming.
In this session, hear engineers from Amazon Prime Video and Amazon CloudFront discuss how they have architected and optimized their video delivery for scaled global audiences. Topics include optimizing the application and video pipeline for use with content delivery networks (CDN), optimizations in the CDN for efficient and performant video delivery, measuring quality, and effectively managing multi-CDN performance and policy. Learn how CloudFront delivers the performance that Prime Video demands, and hear best practices and lessons learned through scaling this fast-growing service.
You don't have to be a big media company to build your own OTT video workflow. Learn how Ravensbourne University created a live-streaming workflow using solutions from AWS Elemental to reach hundreds of schools with educational events featuring the Royal Shakespeare Company. Within months, students and staff from Ravensbourne were able to build, test, and successfully broadcast high-quality video streams to thousands of students.
Learn how TV New Zealand and France Televisions broadcast the Commonwealth Games and a major cycling event using AWS Media Services. Broadcasting live events is unpredictable. Reliability, agility, and scale all play important roles in ensuring a broadcast takes place without issues. Learn how to create channels on the fly for live events and how to seamlessly handle a fluctuation or influx of viewers using AWS Elemental MediaLive, AWS Elemental MediaPackage, and Amazon CloudFront. Complete Title: AWS re:Invent 2018: Broadcasting the World's Largest Sporting Events: AWS Media Services When It Matters Most (CTD206)
Disney Streaming Service is a Direct-To-Consumer (DTC) video streaming service and part of Disney; TrueCar is a digital automotive marketplace. You will learn about their different perspectives on how they built global applications for scale, performance, and availability. TrueCar will share how they moved their internet operations from on-premises data centers to AWS. Disney Streaming Service will dive deep into how they leverage Amazon CloudFront and Lambda@Edge to enable their content APIs to perform at scale through dynamic origin selection, latency reduction through edge caching, and guaranteed high availability.
AWS Lambda enables you to run code without provisioning or managing servers in an AWS Region. Lambda@Edge provides the same benefits, but runs closer to your end users, enabling you to assemble and deliver content, on-demand, to create low-latency web experiences. Come and join us for examples of how customers can move significant workloads they previously managed with server fleets to truly serverless website backends. Sentient Technologies, an artificial intelligence technology company, will share how they use Lambda@Edge for solving various use cases such as leveraging AI to improve customer engagement and uplift website conversions, and many more.
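The edge use case above can be illustrated with a small viewer-request handler that rewrites a URI based on a request header. This is a sketch only: the event shape follows the CloudFront event structure for Lambda@Edge, it is shown in Python for illustration, and the path `/fr/index.html` is hypothetical; verify current Lambda@Edge runtime support before using Python at the edge.

```python
def handler(event, context):
    """Viewer-request sketch: serve a language-specific index page
    based on the Accept-Language header, before the request reaches
    the origin. Returning the (possibly modified) request lets
    CloudFront continue processing it."""
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    accept = headers.get("accept-language", [{"value": ""}])[0]["value"]
    if request["uri"] == "/" and accept.lower().startswith("fr"):
        request["uri"] = "/fr/index.html"   # hypothetical localized page
    return request
```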
Learn how the BBC and Nine Network have enhanced the user experience and created new business models and opportunities, giving audiences more choices with more features, all while maintaining broadcast-grade service. When your business is providing live content to viewers, you know that there are no second chances. Audiences have come to expect a faultless live broadcast, and providers know they must ensure reliability with failover and redundancy plans. Learn how migrating to the cloud offers a new way to architect live-streaming workflows while maintaining the highest standards of resilience.
The way viewers expect to consume sports is completely changing, so Pac-12 Networks is completely changing the core of its video delivery infrastructure by shifting from on-premises to AWS. This move enables Pac-12 Networks to get more content than ever to more fans in more locations and on more devices. AWS is helping the network quickly set up new workflows at scale, such as live-to-VOD and OTT monetization, with thousands of hours of live content added annually. Learn how Pac-12 Networks is using AWS media and machine learning services to delight fans and accelerate innovation.
Reddit is one of the world's most trafficked websites, with over 330 million monthly active users. They've built the vast majority of their infrastructure on AWS to support this increasing growth. As the world shifts to video-first, Reddit has redesigned its site and launched a new video platform. In this talk, we'll go over design paradigms, tradeoffs made, and lessons learned from rearchitecting one of the internet's top sites. See how this infrastructure operates at scale and how the team leverages ETS, Serverless, and Compute to serve over one billion videos a month.
Increase your organization's agility by diving deep and discovering how Amazon CloudFront integrates with other services to accelerate your DevOps workflows. In this session, which is jointly presented with Realtor.com, we cover four main areas of DevOps with customer success stories. Build: Programmatically launch and configure your CloudFront distributions by using AWS CloudFormation or Terraform templates as infrastructure as code (IaC). Test: Confirm that your updates deliver the intended result with A/B testing before moving all your traffic by using CloudFront and Lambda@Edge. Release: Continuously manage and deploy your application to the Amazon CloudFront Global Edge Network with AWS CodeStar. Monitor: Uncover actionable insights hiding in your CloudFront logs by leveraging Amazon CloudWatch, Amazon Athena, or AWS Marketplace partners for intelligent monitoring and alerting.
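The "Build" step above treats a CloudFront distribution as infrastructure as code. A minimal sketch, assuming a hypothetical S3 origin and logical resource name, is a CloudFormation template built programmatically and rendered to JSON:

```python
# Sketch: a CloudFront distribution as a CloudFormation template,
# built in Python and rendered to JSON. The bucket domain and the
# logical ID "SiteDistribution" are illustrative.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SiteDistribution": {
            "Type": "AWS::CloudFront::Distribution",
            "Properties": {
                "DistributionConfig": {
                    "Enabled": True,
                    "Origins": [{
                        "Id": "s3-origin",
                        "DomainName": "example-bucket.s3.amazonaws.com",
                        "S3OriginConfig": {},
                    }],
                    "DefaultCacheBehavior": {
                        "TargetOriginId": "s3-origin",
                        "ViewerProtocolPolicy": "redirect-to-https",
                        "ForwardedValues": {"QueryString": False},
                    },
                }
            },
        }
    },
}

rendered = json.dumps(template, indent=2)
```

The rendered template could then be deployed through AWS CloudFormation as part of a pipeline.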
This is the general what's-new session for Amazon DynamoDB in which we cover newly announced features and provide an end-to-end view of recent innovations. We also share some customer success stories and use cases. Come to this session to learn all about what's new for DynamoDB.
This session covers the features and enhancements in our Redis-compatible service, Amazon ElastiCache for Redis. We cover key features, such as Redis 5, scalability and performance improvements, security and compliance, and much more. We also discuss upcoming features and customer case studies.
Amazon Relational Database Service (Amazon RDS) is a fully managed relational database service that enables you to launch an optimally configured, secure, and highly available database with just a few clicks. It manages time-consuming database administration tasks, freeing you to focus on your applications and business. We review the capabilities of the service and the latest available features.
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database with the speed, reliability, and availability of commercial databases at one-tenth the cost. This session provides an overview of Aurora, explores recently announced features, such as Serverless, Multi-Master, and Performance Insights, and helps you get started.
Shawn Bice, VP of Non-Relational Databases at AWS, discusses a purpose-built strategy for databases, where you choose the right tool for the job. Shawn explains why your application should drive the requirements of a database, not the other way around. We introduce AWS databases that are purpose-built for your application use cases. Learn why you should select different database services to solve different aspects of an application, and watch a demonstration in which application use cases lend themselves well to specific data services. If you're a developer building modern applications that require high performance, scale, and functional databases, and you're trying to determine which relational and non-relational data services to use, this session is for you.
We're witnessing an unprecedented growth in the amount of data collected and stored in the cloud. Getting insights from this data requires database and analytics services that scale and perform in ways not possible before. AWS offers the broadest set of database and analytics services to process, store, manage, and analyze all your data. In this session, we provide an overview of the database and analytics services at AWS, new services and features we launched this year, how customers are using these services, and our vision for continued innovation in this space.
Learn how to convert and migrate your relational databases, nonrelational databases, and data warehouses to the cloud. AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) can help with homogeneous migrations as well as migrations between different database engines, such as Oracle or SQL Server, to Amazon Aurora. Hear from Verizon about how they intend to migrate critical databases to Amazon Aurora with PostgreSQL compatibility from their current on-premises Oracle databases, and learn how they intend to deal with challenges such as conversion of legacy code and complex data types, supporting business resiliency, and maintaining data synchronization during the transition phase.
We have recently seen some convergence of different database technologies. Many customers are evaluating heterogeneous migrations as their database needs have evolved or changed. Evaluating the best database to use for a job isn't as clear as it was ten years ago. We'll discuss the ideal use cases for relational and nonrelational data services, including Amazon ElastiCache for Redis, Amazon DynamoDB, Amazon Aurora, Amazon Neptune, and Amazon Redshift. This session digs into how to evaluate a new workload for the best managed database option. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
In this session, we provide a behind the scenes peek to learn about the design and architecture of Amazon ElastiCache. See common design patterns with our Redis and Memcached offerings and how customers use them for in-memory data processing to reduce latency and improve application throughput. We review ElastiCache best practices, design patterns, and anti-patterns. Complete Title: AWS re:Invent 2018: [REPEAT 1] ElastiCache Deep Dive: Design Patterns for In-Memory Data Stores (DAT302-R1)
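One of the common in-memory design patterns the session covers is cache-aside: check the cache first, fall back to the source of truth on a miss, then populate the cache. A minimal sketch, with a plain dict standing in for a Redis client:

```python
# Cache-aside pattern sketch. In practice `cache` would be an
# ElastiCache for Redis client and the SET would carry a TTL
# (e.g. SET key value EX 300); a dict keeps the sketch runnable.
cache = {}

def get_user(user_id, db_lookup):
    """Return a user record, hitting the database only on a cache miss."""
    key = f"user:{user_id}"
    if key in cache:               # cache hit: fast in-memory path
        return cache[key]
    value = db_lookup(user_id)    # cache miss: read the source of truth
    cache[key] = value            # populate for subsequent reads
    return value
```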
In this session, learn about the security features built into Amazon DynamoDB and how you can best use them to protect your data. We show you how customers are using the available options for controlling access to their tables and the content stored within those tables. We also show you how customers are protecting the contents of their tables with encryption, and how they monitor access to their data.
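One option for controlling access to table contents is fine-grained access control via an IAM policy condition. The sketch below, with a hypothetical table name and account ID, restricts reads to items whose partition key matches the caller's identity using the documented `dynamodb:LeadingKeys` condition key:

```python
# Sketch of a fine-grained DynamoDB access policy. Table name,
# account ID, and the web-identity substitution variable are
# illustrative; the condition key dynamodb:LeadingKeys limits
# access to items whose partition key equals the caller's ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        },
    }],
}
```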
Amazon Aurora is a fully managed relational database service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. With Aurora, we've completely reimagined how databases are built for the cloud, providing you higher performance, availability, and durability than previously possible. In this session, we dive deep into the architectural details of Aurora with MySQL compatibility, and we review recent innovations, such as parallel query, backtrack, serverless, and multi-master. We also share best practices for utilizing the power of relational databases at cloud scale.
Amazon Aurora with PostgreSQL Compatibility is a relational database service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. We review the functionality in order to understand the architectural differences that contribute to improved scalability, availability, and durability. We also dive deep into the capabilities of the service and review the latest available features. Finally, we walk through the techniques that can be used to migrate to Amazon Aurora.
Build faster, more scalable database applications with Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud. We cover Aurora Serverless, which automatically scales your database up and down to meet demand; Fast Database Cloning, which makes data instantly available for application development; Backtrack, which rolls back your database between test runs; and Performance Insights, which helps assess the load on your database and optimize your SQL queries.
AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) can help migrate databases from many supported data sources to supported targets. In this session, we review how the combination of AWS DMS and AWS SCT can help migrate your NoSQL databases, such as MongoDB and Cassandra, to Amazon DynamoDB. We provide an overview of AWS DMS and AWS SCT, and we demonstrate migrating a sample Cassandra database into DynamoDB.
When do you need to use a graph database? What kinds of applications can benefit from using a graph-based approach? In this session, learn how customers are using graph databases to accomplish use cases from knowledge graphs to recommendations to network security. Hear how PricewaterhouseCoopers (PWC) is using graph-based approaches with Amazon Neptune and partners to build new applications. See how Tom Sawyer Software helps to visualize Amazon Neptune graphs.
Amazon Relational Database Service (Amazon RDS) continues to be a popular choice for Oracle DBAs moving new and legacy workloads to the cloud. In this session, we discuss how Amazon RDS for Oracle helps DBAs focus their time where it matters most. We cover recent RDS Oracle features, and we go deep on key functionality that enables license optimization, performance, and high availability for Oracle databases. We also hear directly from an AWS customer about their journey to Amazon RDS and the best practices that helped make their move successful.
Organizations today are looking to free themselves from the constraints of on-premises commercial databases and leverage the power of cloud-native and open-source systems. Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database that is built for the cloud, with the speed, reliability, and availability of commercial databases at one-tenth the cost. In this session, we provide an overview of Aurora and its features. We talk about the latest advances in migration tooling and automation, and we explain how many of the common legacy features of Oracle and SQL Server map to modern cloud variants. We also hear from Dow Jones about its migration journey to the cloud.
At Airbnb, we use Redis extensively as an in-memory data store to reduce latency and provide sub-millisecond response for our website, search, images, payments, and more. We migrated our self-managed Redis environment from EC2 classic to fully-managed Amazon ElastiCache for Redis to reduce operational overhead and improve availability. Now, all our Redis is in an AWS managed service that provides multi-Availability Zone support, automatic failover, and maintenance. Attend this session to learn how we migrated our Redis environment while ensuring data integrity and zero downtime.
Come to this session to learn how Amazon DynamoDB was built as the hyper-scale database for internet-scale applications. In January 2012, Amazon launched DynamoDB, a cloud-based NoSQL database service designed from the ground up to support extreme scale, with the security, availability, performance, and manageability needed to run mission-critical workloads. This session discloses for the first time the underpinnings of DynamoDB, and how we run a fully managed nonrelational database used by more than 100,000 customers. We cover the underlying technical aspects of how an application works with DynamoDB for authentication, metadata, storage nodes, streams, backup, and global replication.
In recent years, MySQL has become a top database choice for new application development and migration from overpriced, restrictive commercial databases. In this session, we provide an overview of the MySQL and MariaDB options available on AWS. We also do a deep dive on Amazon Relational Database Service (Amazon RDS), a fully managed MySQL service, and Amazon Aurora, a MySQL-compatible database with up to 5X the performance, and many additional innovations.
Amazon Relational Database Service (Amazon RDS) provides a managed service to run SQL Server databases in AWS. While Amazon RDS handles provisioning and maintaining the SQL Server instance, there are things you can do to ensure that the SQL Server instance is healthy. We'll review some best practices involved in configuring the Amazon RDS SQL Server instance, focusing on availability, security and migration. We'll also hear from our customer Allstate, sharing details about their use of Amazon RDS.
In this session, we provide an overview of the PostgreSQL options available on AWS, and do a deep dive on Amazon Relational Database Service (Amazon RDS) for PostgreSQL, a fully managed PostgreSQL service, and Amazon Aurora, a PostgreSQL-compatible database with up to 3x the performance of standard PostgreSQL. Learn about the features, functionality, and many innovations in Amazon RDS and Aurora, which give you the background to choose the right service to solve different technical challenges, and the knowledge to easily move between services as your requirements change over time.
As Oath, a Verizon company, prepared for GDPR compliance during integration of AOL and Yahoo! Internet properties, a global user consent and terms of service application was developed and deployed. The cloud-native service was deployed across multiple AWS Regions globally and leveraged Amazon DynamoDB global tables to enable data synchronization. The service was launched in April 2018, scaling to meet the needs of hundreds of millions of Oath users. Come to this session to learn more about how Oath was able to accomplish all of this. Complete Title: AWS re:Invent 2018: How Oath (a Verizon Company) Built a Multi-Region GDPR Application with Amazon DynamoDB (DAT325)
GE Aviation sells $1.7B annually in parts and services through a customer portal called myGEAviation. To enhance its customers' experience, the portal enables users to input specific variables and build custom reports for later viewing. These plotting and data-query applications were experiencing issues with cost, scalability, and performance. In this session, GE discusses how it rearchitected plotting and data-query using Amazon DynamoDB to resolve those issues.
Amazon Aurora Serverless is an on-demand, autoscaling configuration for Aurora (MySQL-compatible edition) where the database automatically starts up, shuts down, and scales up or down capacity based on your application's needs. It enables you to run your database in the cloud without managing any database instances. Aurora Serverless is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. In this session, we explore these use cases, take a look under the hood, and delve into the future of serverless databases. We also hear a case study from a customer building new functionality on top of Aurora Serverless.
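The autoscaling configuration described above can be sketched as the parameters for creating a serverless Aurora cluster. Identifier and credentials below are placeholders; the `ScalingConfiguration` keys follow the RDS `CreateDBCluster` API:

```python
# Sketch: parameters for an Aurora Serverless (MySQL-compatible)
# cluster. Capacity scales between min and max ACUs, and the cluster
# pauses after 5 minutes of inactivity. Identifier and credentials
# are placeholders.
cluster_params = {
    "DBClusterIdentifier": "demo-serverless-cluster",
    "Engine": "aurora",
    "EngineMode": "serverless",
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",
    "ScalingConfiguration": {
        "MinCapacity": 2,
        "MaxCapacity": 16,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,
    },
}
# Live call (requires credentials):
# boto3.client("rds").create_db_cluster(**cluster_params)
```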
Do you need a ledger database? Let's talk about the kinds of problems that Amazon Quantum Ledger Database (Amazon QLDB) can solve, and answer your questions about when and why you would use a ledger database. We'll present the benefits and use cases of Amazon QLDB.
DynamoDB transactions enable developers to maintain the correctness of their data at scale by adding atomicity and isolation guarantees for multi-item conditional updates. With transactions, you can perform a batch of conditional operations, including PutItem, UpdateItem, and DeleteItem, with all-or-nothing guarantees. Come to this session to learn how DynamoDB transactions work, the primary use cases they enable, and how to build modern applications that require transactions.
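A canonical use case for the conditional, all-or-nothing updates described above is a balance transfer. The sketch below builds a `TransactWriteItems` payload; table and attribute names are hypothetical, but the payload shape follows the DynamoDB transactions API:

```python
# Sketch: an atomic transfer of 100 units between two accounts using
# DynamoDB transactions. Either both updates succeed, or neither does;
# the ConditionExpression prevents overdrawing the source account.
transfer = [
    {"Update": {
        "TableName": "Accounts",
        "Key": {"AccountId": {"S": "alice"}},
        "UpdateExpression": "SET Balance = Balance - :amt",
        "ConditionExpression": "Balance >= :amt",
        "ExpressionAttributeValues": {":amt": {"N": "100"}},
    }},
    {"Update": {
        "TableName": "Accounts",
        "Key": {"AccountId": {"S": "bob"}},
        "UpdateExpression": "SET Balance = Balance + :amt",
        "ExpressionAttributeValues": {":amt": {"N": "100"}},
    }},
]
# Live call (requires credentials):
# boto3.client("dynamodb").transact_write_items(TransactItems=transfer)
```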
This session is for those who already have some familiarity with DynamoDB. The patterns and data models discussed in this session summarize a collection of implementations and best practices leveraged by Amazon.com to deliver highly scalable solutions for a wide variety of business problems. The session also covers strategies for global secondary index sharding and index overloading, scalable graph processing with materialized queries, relational modeling with composite keys, and executing transactional workflows on DynamoDB.
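The composite-key modeling the session covers can be sketched with a single table holding multiple entity types distinguished by key prefixes. The entity and attribute names below illustrate the pattern and are not a specific Amazon.com schema:

```python
# Sketch of composite-key ("overloaded") single-table modeling:
# a customer profile and their orders share one partition, so one
# Query retrieves the whole item collection or a prefix of it.
items = [
    {"PK": "CUSTOMER#42", "SK": "PROFILE", "name": "Ana"},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2018-11-26", "total": 129},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2018-11-30", "total": 59},
]

def orders_for(customer_id, items):
    """Local equivalent of Query(PK = CUSTOMER#id AND
    begins_with(SK, 'ORDER#'))."""
    pk = f"CUSTOMER#{customer_id}"
    return [i for i in items
            if i["PK"] == pk and i["SK"].startswith("ORDER#")]
```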
Despite the importance of cloud databases as a core foundation for applications, many businesses face challenges in identifying database performance issues. Visibility into database performance is difficult due to a wide range of incomplete tools that can be difficult to install, configure, and maintain. While these tools may provide a wide range of statistics, they lack a standard methodology for analyzing the statistics to identify performance problems. In this session, learn how Amazon Relational Database Service (Amazon RDS) changes this by providing database performance monitoring that is automatically configured, easy to use, and based on a clear actionable methodology.
What can you do with Apache TinkerPop and Gremlin or RDF and SPARQL? How does Neptune provide multi-Availability Zone high availability? Learn about the features and details of Amazon's fully managed graph database service.
While stateless services are suitable for many architectures, stateful services are also useful and sometimes overlooked. In this session, we hear from Netflix about the unique challenges of upgrading stateful services in the cloud, architectural advice to make iterating on stateful services easy, and concrete tools and infrastructure you can use on AWS to make upgrading easy.
Join this demo-driven session to learn about a simple way to move data rapidly into Amazon S3. Then, see how to access it on-premises with AWS Storage Gateway for local applications. You'll leave understanding how to improve data transfer performance without the manual labor of extensive scripting or the cost of third-party licensed software.
"aibo" is an autonomous pet robot series by Sony, coming soon to the US. The "aibo" cloud built on AWS uses various services, especially serverless ones such as AWS Lambda, Amazon API Gateway, and AWS IoT Core. In this session, we introduce how we use AWS services for "aibo", and we share some of our serverless practices. Complete Title: AWS re:Invent 2018: [REPEAT 1] How We Made "aibo" Smart: A Journey through Serverless & IoT on AWS (DEM107-R1)
Develop and test your applications more quickly with Amazon Aurora, a MySQL- and PostgreSQL-compatible database built for the cloud. We'll show you how Aurora helps you scale your applications, prepare data for testing, and optimize your database queries.
The days of the one-size-fits-all monolithic database are behind us, and developers are now building highly functional and performant applications using a multitude of purpose-built databases. In this demo, we'll use an ecommerce application and match use cases with the best database for the job.
In this demo, learn how Anki used the Amazon Lex chatbot capability to build interactive games that help students learn better.
In this demo, learn how AWS AppSync can help you develop data-driven mobile apps.
Join us to learn how AWS enables developers to build engaging experiences across the web and mobile devices.
With an increasingly software-based value chain, Sysco Foods has been aggressively moving its infrastructure to AWS in a bid to maintain competitive position against digitally native rivals. Critical to this transition has been a holistic agile and DevOps transformation, which has accelerated cloud adoption through an IT product, platform, and service team structure specifically designed to automate and consume AWS services. In this session, we share the key lessons learned on how agile transformation can accelerate AWS migrations, and the implications of cloud enablement on IT organizations, talent, and culture. This session is brought to you by AWS partner, Deloitte Consulting LLP.
For today's digital organizations, even a few minutes of downtime can mean millions of dollars lost and customers who go elsewhere. To keep up with customer expectations, organizations must handle and prioritize real-time operations at a scale that didn't exist before. However, developing this competency is easier said than done, especially without a solid understanding of the capabilities needed to drive real-time operations across cloud and on-premises environments. In this session, we explore how innovations around machine learning, automation, and analytics, when combined with modern incident management best practices, can improve operational performance, team productivity, and drive business results. This session is brought to you by AWS partner, PagerDuty, Inc.
Traditional disk-to-disk-to-tape backup strategies can't keep up with the massive growth of data, the importance of rapid restore, and data reuse. These backup environments store to disk, purpose-built backup appliances, or tape, creating issues with restore times, cost, silos, scalability, and data reuse. A modern flash-to-flash-to-cloud backup environment uses local flash storage for fast recovery, and tiers data seamlessly to Amazon Web Services (AWS) for cost-effective long-term retention. In this session, hear how this is possible with Pure Storage FlashBlade and StorReduce on AWS. Hear how IDT Corporation uses StorReduce to effortlessly modernize its backups. This session is brought to you by AWS partner, Pure Storage.
In the world of security monitoring and alerting, there is an increasing number of opportunities and advanced technologies. People look for better ways to gain insights from large datasets and are tasked with the responsibility of communicating that data throughout the entire organization. In this talk, we explore how to democratize the security of your next-gen infrastructure by building measurement directly into systems, factoring in security-related KPIs and OKRs. Attendees learn how everyone, from SMBs to enterprises, can securely scale their infrastructure while continuing to enable innovation at the speed of business. This session is brought to you by AWS partner, Threat Stack.
This session examines the key approaches and technologies required to obtain a unified view across server, network, code, database, container, and cloud. Learn the five core components for full-stack visibility and optimal application performance in AWS and hybrid cloud environments. We start with the cloud maturity journey and the typical behaviors of each stage. Next, we discuss application dependency mapping and the importance of knowing how every component is connected before migrating. We also explore the demands of serverless and container monitoring (Kubernetes, Amazon EKS, Amazon ECS). For post migration, we cover the importance of business-centric application performance metrics that compare on-premises and AWS cloud states. This session is brought to you by AWS partner, AppDynamics.
The allure of the cloud is compelling and offers greater agility, elasticity, and reduced capex. Businesses seek to reap these benefits by migrating to AWS, all while enforcing corporate governance and security policies to minimize risk. To accomplish this objective, businesses must continuously monitor the performance of complex applications, which is not practical with point solutions, such as bytecode instrumentation. In this session, learn how NETSCOUT's smart data platform enables continuous monitoring in hybrid cloud environments to minimize risk and control costs. Hear real-life examples of how businesses optimized their AWS migration, gaining visibility and deep insights into both the physical and virtual worlds, to maintain the continuity and security of their services throughout the migration process. This session is brought to you by AWS partner, NETSCOUT Systems.
Come join us as we take a deeper look at Amazon's approach to releasing mission-critical software. In this session, we take a journey through the release process of an AWS Tier 1 service on its way to production. We follow a single code change from idea to release, and we focus on how Amazon updates critical software quickly and safely for its global customers. Throughout the talk, we demonstrate how our internal software release processes map to AWS developer tools, and we highlight how you can leverage AWS CI/CD services to create your own robust release process.
DevOps is currently one of the most sought after engineering models. One reason is that it helps enterprise transformations. The Amazon transformation to DevOps was born out of the desire to be even more customer obsessed, more agile, and more innovative. Come and learn from our journey as we share the playbook that helped us successfully implement and adopt DevOps as well as the lessons we learned the hard way.
Developers increasingly rely on Salesforce Heroku and AWS services to support rapid, secure development and iteration. Join us in this session to learn how Heroku's visual, team-based continuous delivery workflow brings structure, insight, and simplicity to app development. We explore best practices for building applications composed of both Heroku and AWS data services. You also hear about customer examples and learnings you can use to supercharge the results and productivity of your application development teams. This session is brought to you by AWS partner, Salesforce.
In this session, learn how to architect a predictive and preventative remediation solution for your applications and infrastructure resources. We show you how to collect performance and operational intelligence, understand and predict patterns using AI & ML, and fix issues. We show you how to do all this by using AWS native solutions: Amazon SageMaker and Amazon CloudWatch.
Come learn what's new with Amazon CloudWatch, and watch as we leverage new capabilities to better monitor our systems and resources. We also walk you through the journey that BBC took in monitoring its custom off-cloud infrastructure alongside its AWS cloud resources.
AWS CloudFormation, in combination with other tools for continuous integration and delivery pipelines, can help automate and standardize frequent deployments for many types of applications, from traditional compute and autoscaling groups to serverless applications. In this session, we will present several use cases combining CloudFormation with build and pipeline automation tools to achieve repeatable, consistent and compliant deployments without sacrificing agility. Complete Title: AWS re:Invent 2018: [REPEAT 1] Earn Your DevOps Black Belt: Deployment Scenarios with AWS CloudFormation (DEV308-R1)
To get the most out of the agility afforded by serverless and containers, it is essential to build CI/CD pipelines that help teams iterate on code and quickly release features. In this talk, we demonstrate how developers can build effective CI/CD release workflows to manage their serverless or containerized deployments on AWS. We cover infrastructure-as-code (IaC) application models, such as AWS Serverless Application Model (AWS SAM) and new imperative IaC tools. We also demonstrate how to set up CI/CD release pipelines with AWS CodePipeline and AWS CodeBuild, and we show you how to automate safer deployments with AWS CodeDeploy.
Companies that are building and deploying modern applications need observability across metrics, logs, and traces to gain operational visibility of systems and resources, debug and analyze applications, and optimize the customer experience. In this session, we leverage Amazon CloudWatch and AWS X-Ray to highlight best practices for addressing the monitoring challenges that most customers face. We showcase an IoT application built using common AWS services, create multiple monitoring challenges, and demonstrate best practices. After the session, we will make the demonstration available for you to test using your own AWS account. Complete Title: AWS re:Invent 2018: Breaking Observability Chaos: Best Practices to Monitor AWS Cloud Native Apps (DEV311)
The service mesh is becoming the most critical component of the cloud-native stack, with users ranging from small startups to Internet giants and traditional enterprises. While still early in terms of adoption, this new infrastructure layer has massive implications for the way companies build and operate distributed systems. In this session, SignalFx provides an overview of distributed systems and service instrumentation, then examines how a service mesh approach addresses inter-service communication, testing, and other fundamental challenges of adopting microservices architecture. Learn about the considerations for monitoring and observability, as well as the trade-offs, of implementing service mesh. This session is brought to you by AWS partner, SignalFx.
Even the best continuous delivery and DevOps practices cannot guarantee that there will be no issues in production. The rise of Site Reliability Engineering (SRE) has promoted new ways to automate resilience into your systems and applications to circumvent potential problems, but it's time to 'shift-left' this effort into engineering. In this session, learn to leverage AWS Lambda functions as 'remediation as code.' We show how to make it part of your continuous delivery process and orchestrate the invocation of self-healing Lambda functions in case of unexpected situations impacting the reliability of your system. Gone are the days of traditional operations teams; it's the rise of 'shift-lefters'! This session is brought to you by AWS partner, Dynatrace.
Coinbase is a secure online platform for buying, selling, transferring, and storing digital currency. This talk covers its journey from a small band of engineers working on reliability to a centralized SRE organization, and the lessons learned along the way. We dive into the processes that we created, both technical and organizational, that enabled us to quickly build a world-class reliability engineering group. We also cover what reliability really means, and more importantly, how we measure it. This session is brought to you by AWS partner, Datadog.
As a global media organization, Reuters delivers a wide array of event-specific content and applications tied to the news of the day. While the size and scale of each event may vary from a national election to regional breaking news, one thing they all have in common is that they are short-lived, topical, and time-critical. As a result, there is only one chance to get it right. The AWS Cloud is a perfect enabler for that, with a wide range of services. In this session, Reuters shares its approach to building, managing, and monitoring robust systems for live events. This session is brought to you by AWS partner, Datadog.
Continuous delivery (CD) enables teams to be more agile and quickens the pace of innovation. Too often, however, teams adopt CD without putting the right safety mechanisms in place. In this talk, we discuss opportunities for you to transform your software release process into a safer one. We explore various DevOps best practices, showcasing sample applications and code with AWS CodePipeline and AWS CodeDeploy. We discuss how to set up delivery pipelines with nonproduction testing stages, failure cases, rollbacks, redundancy, canary testing and blue/green deployments, and monitoring. We discuss continuous delivery practices for deploying to Amazon EC2, AWS Lambda, and containers such as Amazon ECS or AWS Fargate.
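The canary and linear strategies mentioned above follow simple traffic-shifting math. Here is a minimal sketch, with the step size as a free parameter, in the spirit of CodeDeploy's linear deployment configurations (not their actual implementation):

```python
# Sketch of "linear" traffic shifting: move a fixed percentage of traffic
# to the new version at each interval until it carries all traffic.
def linear_shift_steps(percent_per_step: int):
    """Return the new version's traffic weight after each interval."""
    steps, weight = [], 0
    while weight < 100:
        weight = min(100, weight + percent_per_step)
        steps.append(weight)
    return steps

print(linear_shift_steps(10))  # ten intervals: 10, 20, ..., 100
```

Between each step, an automated rollback check (alarms, canary test results) decides whether to continue shifting or revert to the old version.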
In this demonstration-heavy session, we illustrate our latest techniques, tools, and libraries for developing end-to-end applications with .NET Core. We focus on serverless applications, but the techniques are broadly relevant. We start by showing you some useful features and best practices for authoring your serverless application, including debugging locally from the IDE and in production. From there, we demonstrate some helpful tools that make it easy to set up your CI/CD workflow from the start. Finally, we deploy our application with AWS Lambda.
Today, more teams are adopting continuous integration (CI) techniques to enable collaboration, increase agility, and deliver a high-quality product faster. Cloud-based development tools such as AWS CodeCommit and AWS CodeBuild can enable teams to easily adopt CI practices without the need to manage infrastructure. In this session, we showcase best practices for code reviews and continuous integration, drawing on practices used by Amazon engineering teams. We incorporate demos to not just explain the practices but to show you how they work.
In this session, learn how AWS is helping enterprises adopt the DevOps model and automation in their journey to the cloud. By using the DevOps principle to treat your infrastructure environments as code, you can automate and easily scale your lifecycle environments. Learn how services like AWS CloudFormation and AWS OpsWorks enable you to automate your instance provisioning and configurations as code, ensuring consistent, compliant, and scalable cloud infrastructure. Understand how OpsWorks is enabling enterprises to accelerate their migration to the cloud, leveraging popular tools like Chef or Puppet and their respective communities. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
AWS CloudFormation is one of the most widely used tools in the AWS ecosystem, enabling infrastructure as code, deployment automation, repeatability, compliance and standardization. In this session, we cover the latest improvements and best practices for AWS CloudFormation customers in particular, and for seasoned infrastructure engineers in general. We cover new features and improvements that span many use cases, including programmability options, cross region and cross account automation, operational safety, and additional integration with many other AWS services.
We're working on a new major version of the AWS Command Line Interface (AWS CLI), a command-line tool for interacting with AWS services and managing your AWS resources. AWS CLI v2 will include features to improve workflows and make it even easier to manage AWS resources through the AWS CLI. Come hear from the core developers of the AWS CLI as we highlight some of the new features and major improvements in AWS CLI v2. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Come learn how Elastic Beanstalk can help you go from code to running application in a matter of minutes, without the need to provision or manage any of the underlying Amazon Web Services (AWS) resources. Hear how Qualcomm is able to migrate applications to AWS faster than before through Forge, an internally built application platform that leverages Elastic Beanstalk to simplify the development and deployment of applications to AWS with security and organizational best practices out of the box.
Are you spending hours trying to understand how customers are impacted by performance issues and faults in your service-oriented applications? In this session, we show you how customers are using AWS X-Ray to reduce mean time to resolution, get to the root cause faster, and determine customer impact. In addition, one of our X-Ray customers, ConnectWise, presents a case study and best practices on how it is leveraging X-Ray in its production environment. We also show you how to use X-Ray with applications built using AWS services, such as Amazon Elastic Container Service for Kubernetes (Amazon EKS), AWS Fargate, Amazon Elastic Container Service (Amazon ECS), and AWS Lambda to achieve the above.
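For readers unfamiliar with what X-Ray actually ingests: traces are assembled from segment documents, each carrying a trace ID in the documented "1-&lt;8 hex epoch seconds&gt;-&lt;24 hex random&gt;" format. A minimal sketch, with an invented service name:

```python
import json
import os
import time

# Build an X-Ray-style trace ID: version "1", epoch seconds in hex,
# then 96 random bits in hex.
def new_trace_id() -> str:
    return "1-{:08x}-{}".format(int(time.time()), os.urandom(12).hex())

# The core fields of a segment document (service name is illustrative).
segment = {
    "name": "checkout-service",
    "id": os.urandom(8).hex(),   # 16-hex-digit segment ID
    "trace_id": new_trace_id(),
    "start_time": time.time(),
    "end_time": time.time() + 0.042,
}
print(json.dumps(segment)[:30])
```

Instrumented services emit documents like this (usually via the X-Ray SDK and daemon), and the service graph and latency analysis are derived from them.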
The AWS SDK for Java 2.0 includes a number of new features and performance improvements. Using real code examples, we'll build a serverless application that makes use of the SDK's new HTTP/2-based event-streaming APIs and deploy it using AWS Java tooling introduced in 2018. You'll learn what's new in 2.0 and the benefits of upgrading, as well as how to take advantage of new tooling in AWS's already rich Java ecosystem.
In addition to the basic infrastructure as code capabilities provided by AWS CloudFormation, AWS now offers various programmability constructs to power complex provisioning use cases. In this talk, we present several advanced use cases of declarative, imperative, and mixed coding scenarios that cloud infrastructure developers can leverage. Examples include demonstrating how to create custom resources and leveraging transforms, like the AWS Serverless Application Model (AWS SAM), to create both simple and complex macros with AWS CloudFormation. Complete Title: AWS re:Invent 2018: [REPEAT 1] Beyond the Basics: Advanced Infrastructure as Code Programming on AWS (DEV327-R1)
Discover how AWS Service Catalog is enabling Deloitte ConvergeHEALTH to streamline application deployment for its family of Life Sciences analytics software products. In this session, learn how Deloitte has reduced deployment time from days to minutes by leveraging an automated, consistent, and compliant approach to application deployment. This session is designed for application developers, applications owners, and anyone who wants to improve their time to value and business agility using the automated deployment capabilities of AWS Service Catalog.
Cloud engineering teams at Corteva Agriscience, Agriculture Division of DowDuPont, have a challenge: how to support a global business of research scientists and software developers in building a world-class innovation organization. Modern agriculture produces larger and more varied data types, so their approach must be scalable and flexible while committing to operational excellence and remaining easy to adopt. This session walks through how Corteva Agriscience builds container-based infrastructures with CI/CD pipelines that remove undifferentiated heavy lifting and empower developers. Members of the cloud engineering team discuss problems they face and solutions they implement, and they show an example of how they leverage AWS services (AWS CodeCommit, AWS CodePipeline, AWS CloudFormation, AWS Fargate) to deploy a novel machine learning algorithm for scoring genetic markers.
As companies employ DevOps practices to push applications faster into production through better collaboration and automated testing, security is often seen as an inhibitor to speed. The challenge for many organizations is getting applications delivered at a fast pace while embedding security at the speed of DevOps. In this session, learn how AWS Marketplace products and customers help make DevSecOps a well-orchestrated methodology to ensure the speed, stability, and security of your applications.
You've migrated your business to the cloud. You've embraced DevOps. All your engineering teams operate the systems they write. You don't need central teams any longer ... or do you? In this talk, we discuss how Netflix balances the need for product teams to stay loosely coupled yet how it maximizes the leverage for productivity and velocity that healthy central teams provide.
The AWS Cloud Development Kit (AWS CDK) is a new open-source framework from AWS that enables developers to harness the full power of modern programming languages to define reusable cloud components and applications and provision them through AWS CloudFormation. The AWS CDK is shipped with a rich class library that encapsulates the details of defining infrastructure on AWS and enables you to focus on your application. In this session, we discuss why we decided to build the AWS CDK; we describe some of the high-level concepts; and we write some code on stage to demonstrate why we think the AWS CDK is going to be your best friend.
In this session, we will talk about many of the challenges with managing application log data. We will walk through how to ingest, manage, and analyze large volumes of log data using Amazon CloudWatch Logs. This enables you to solve operational problems faster and debug your applications more easily. Complete Title: AWS re:Invent 2018: Managing & Analyzing Large Volumes of Logs Data in Amazon CloudWatch Logs (DEV375)
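As a rough illustration of the metric-filter idea behind CloudWatch Logs analysis (log lines and timestamps are invented), counting pattern matches per minute looks like this:

```python
from collections import Counter

# Invented log events: (epoch-seconds timestamp, message)
events = [
    (1640995200, "INFO request ok"),
    (1640995210, "ERROR timeout calling payments"),
    (1640995260, "ERROR db connection refused"),
]

def count_matches(events, pattern):
    """Count events whose message contains `pattern`, bucketed by minute,
    which is roughly what a metric filter turns into a CloudWatch metric."""
    buckets = Counter()
    for ts, message in events:
        if pattern in message:
            buckets[ts - ts % 60] += 1
    return dict(buckets)

print(count_matches(events, "ERROR"))
```

In CloudWatch Logs itself you declare the pattern once and the service maintains the metric continuously, so you can alarm on it instead of scanning logs by hand.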
Digital User Engagement
In this session, we describe how AWS applies Amazon's customer-centric culture of innovation, key technology building blocks, and a user engagement platform to help companies better engage their users. You also learn how Disney Streaming Services is utilizing the Amazon approach to engage its users. The intended audience is developers and business professionals who are responsible for digitally transforming their company.
In this session, we demonstrate how to easily deploy an AWS solution that ingests all Tweets from any Twitter handle, uses Amazon Comprehend to generate a sentiment score, and then automatically engages customers with a personalized message. The intended audience includes developers and marketers who want to leverage AWS to create powerful user engagement scenarios. We highlight how quickly a machine-learning marketing solution can be deployed. We cover the AWS services Amazon Pinpoint, a digital user engagement service, and Amazon Comprehend, a natural language processing service that uses artificial intelligence and machine learning to find insights and relationships in text.
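The engagement step of a pipeline like this boils down to routing on the sentiment label and confidence score that Comprehend returns. A local stand-in, with invented thresholds and message copy:

```python
# Route a follow-up message based on a sentiment label and its confidence
# score. The labels match Comprehend's (POSITIVE, NEGATIVE, NEUTRAL, MIXED);
# thresholds and message text are illustrative, not from the session.
def pick_message(sentiment: str, score: float) -> str:
    if sentiment == "NEGATIVE" and score > 0.8:
        return "We're sorry, a support agent will reach out."
    if sentiment == "POSITIVE" and score > 0.8:
        return "Thanks for the love! Here's a promo code."
    return "Thanks for your feedback."

print(pick_message("NEGATIVE", 0.95))
```

In the full solution, a function like this sits between the Comprehend call and the Pinpoint send-message step, so low-confidence results fall through to a neutral reply.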
At technical community gatherings, Meetups, or events, the majority of attendees and speakers tend to be men. Women often feel uncomfortable attending such events, leading women in technology to start technical communities only for women. Although the purpose of these women-in-tech communities is to help more women feel welcomed and create equality between men and women in the industry, these communities might be inadvertently doing the exact opposite. In this talk, we share perspectives on gender diversity in technology, and we discuss the value of participating in mixed-gender Meetups. We also share insights from AWS user communities worldwide, and we discuss steps we take to make our AWS community in Israel, one of the largest in the world with over 6,000 members, more inclusive. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
Did you know that there are over 300 AWS User Groups worldwide? In this session, join a panel discussion featuring AWS community leaders from around the world, and learn the value of attending community-led AWS Meetups in your region. Community leaders share their experiences, talk through how local communities help developers solve problems and achieve their goals, and discuss the benefits of participating in peer-to-peer AWS knowledge sharing and networking activities. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
Learn the tips, techniques, and tricks for accelerating your team's cloud transformation with an education framework that scales. As director of cloud engineering at Capital One, Drew Firment founded a cloud engineering college that was integrated within a Cloud Center of Excellence. As the Dean of Cloud Computing, Drew earned a patent for measuring cloud maturity and demonstrated how a cloud education program can accelerate adoption. Come to this session to hear key lessons from his experience, and learn how to apply the framework to your organization's cloud transformation journey. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
When analyzing information for fraud detection, tasks must be run periodically. When building a fraud detection system, start by preparing the data, then work with small chunks of data and run parallel jobs so your machine learning (ML) models can predict fraudulent activity. For that, you must schedule compute resources and, of course, the script itself. With AWS Batch, you worry only about your application job and run it at scale. With containers, you think in small processes and let AWS Batch run them concurrently. In this session, learn to build a fraud detection system and integrate it with other AWS services. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
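The "small chunks, parallel jobs" idea maps naturally onto AWS Batch array jobs, where each child job reads its AWS_BATCH_JOB_ARRAY_INDEX environment variable and processes one slice of the data. A sketch with illustrative record counts:

```python
# Split a dataset across the children of an AWS Batch array job. Each child
# computes its own slice from the array index Batch assigns it.
def chunk_bounds(total_records: int, array_size: int, index: int):
    """Half-open [start, end) slice for one array-job child."""
    per_job = -(-total_records // array_size)  # ceiling division
    start = index * per_job
    return start, min(total_records, start + per_job)

# e.g., 1,000 transactions scored by 4 parallel jobs
print([chunk_bounds(1000, 4, i) for i in range(4)])
```

Inside the container, `index` would come from `int(os.environ["AWS_BATCH_JOB_ARRAY_INDEX"])`; the record count and array size here are invented for illustration.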
Amazon SageMaker is a powerful tool that enables us to build, train, and deploy our machine learning-based workloads at scale. With help from AWS CI/CD tools, we can speed up this pipeline process. In this talk, we discuss how to integrate Amazon SageMaker into a CI/CD pipeline as well as how to orchestrate with other serverless components. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
Red teamers, penetration testers, and attackers can leverage the same tools used by developers to attack AWS accounts. In this session, two technical security experts demonstrate how an attacker can perform reconnaissance and pivoting on AWS, leverage network, AWS Lambda functions, and implementation weaknesses to steal credentials and data. They then show you how to defend your environment from these threats. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
Chaos engineering focuses on improving system resilience through controlled experiments, exposing the inherent chaos and failure modes in our system before they manifest in production and impact users. However, much of the publicized tools and articles focus on killing Amazon EC2 instances, and the efforts in the serverless community have been largely limited to moving those tools into Lambda functions. How can we apply the same principles of chaos to a serverless architecture built around AWS Lambda functions? Can we adapt existing practices to expose the inherent chaos in these systems? What are the limitations and new challenges that we need to consider? Come to this session and find out. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
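One way to adapt chaos experiments to Lambda, as the session asks, is to wrap handlers with fault injection. This toy decorator (not a real chaos library) shows the principle:

```python
import random

# Toy serverless chaos injection: wrap a Lambda-style handler so it fails
# for a configurable fraction of invocations. Real experiments would also
# inject latency and flip configuration, and would be toggled per environment.
def inject_failure(rate: float, rng=random.random):
    def wrap(handler):
        def chaotic_handler(event, context=None):
            if rng() < rate:
                raise RuntimeError("chaos: injected fault")
            return handler(event, context)
        return chaotic_handler
    return wrap

@inject_failure(rate=0.0)  # 0.0 disables injection; raise it during experiments
def handler(event, context=None):
    return {"statusCode": 200}

print(handler({}))
```

Because the injection rate is data, it can be driven from an environment variable or parameter store entry, which is how such experiments are usually switched on only in controlled windows.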
This talk dives into Trustpilot's journey to serverless compute. The journey starts at re:Invent 2016 and follows how the company fast-tracked its adoption within its engineering organization using a "serverless first" engineering principle. A representative from Trustpilot shares lessons learned and insights gained from running over 200 AWS Lambda functions with 12M invocations/day in production. Also covered are fun stories of what helped the company adopt serverless, how to make those stories actionable, a review of architectural patterns, and a discussion of why they choose serverless over traditional compute every day. This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share firsthand technical insights on trending topics.
The tsunami of technology disruption is far from over. The public cloud is disrupting the global IT industry, similar to how Uber and Airbnb re-invented the taxi and hotel industries. The disruption brings new norms of doing things and a paradigm shift towards automation of everything. This paradigm shift can drastically reduce and, in some cases, eliminate long-standing job functions in IT. While certain job functions will be eliminated, the people performing these functions still hold great value to the enterprise. Learn how you can take control of your career so you are not left behind in the journey. This session is brought to you by AWS partner, HPE.
Have you ever had sleepless nights because you couldn't meet your Recovery Point and Time Objectives? What about recovering data in the event of a disaster? If you're a backup or storage architect, the answer is most likely "yes." Come to this session to learn how Cohesity can help you build an enterprise-grade solution for long-term retention, development and testing, and disaster recovery. Hear how Airbud Entertainment is using the Cohesity DataPlatform and AWS storage services, such as Amazon S3 and Amazon Glacier, to simplify its backup and long-term retention strategy and architecture. This session is brought to you by AWS partner, Cohesity, Inc.
Consolidating the data center continues to be imperative for most enterprises. There is a good chance that you've been asked to use the cloud as a disaster recovery (DR) solution and eliminate use of secondary on-premises sites. How should you think about this strategy? What are the key requirements you should consider? In this session, learn how Cohesity can help you build a bulletproof plan for disaster recovery on AWS. Hear how a joint customer is using the Cohesity DataPlatform on the AWS Cloud to meet audit requirements for DR and at the same time enable the archiving of backup data. This session is brought to you by AWS partner, Cohesity, Inc.
In this session, we outline the five levels of cloud operations automation, providing a clear path and maturity model for achieving security, compliance, and architecture best practices. Using real-world case studies from Fortune 100 enterprises, we demonstrate how secure AWS Landing Zones and policy-based, automated guardrails accelerate the safe migration and ongoing operation of hundreds of enterprise applications, putting your team on the road to DevSecOps maturity. This session is brought to you by AWS partner, Turbot HQ, Inc.
Enterprises are leveraging hybrid cloud architectures to accelerate their cloud migration journey and to adopt modern application strategies. VMware Cloud on AWS provides an easy cloud migration path for enterprises and empowers modern workloads with consistent infrastructure, operations, and reduced costs. In this session, learn how customers are using VMware Cloud on AWS for cloud migrations, data center extension, disaster recovery, next-generation applications, and app modernization. Complete Title: AWS re:Invent 2018: [REPEAT 1] Top Strategic Priorities You Can Tackle with VMware Cloud on AWS (ENT215-R1)
Many customers lack the experience and skills to optimize performance and cost in the cloud. The breadth of AWS offerings meets any application need at any time. However, realizing full elasticity while adapting to dynamic application demands is a challenge that is beyond human scale. AI-powered software is the answer. Turbonomic workload automation lets applications self-manage cloud resources, anywhere, in real time, making intelligent sizing and placement decisions for cloud migrations. In this session, we discuss how an oil and gas company used the Turbonomic AI-powered decision engine to make the right continuous tradeoffs, maximizing efficiency without sacrificing elasticity. This session is brought to you by AWS partner, Turbonomic.
DevOps is a powerful movement that can help enterprises speed up their rate of innovation. But many customers think DevOps can work only with their cloud-native applications. Enterprise DevOps is a set of best practices anchored by real-life customer experiences that enable large organizations to apply the speed and agility of DevOps to all of their applications without sacrificing security and compliance. And it all begins with production-ready migration. In this session, you learn 1) how to execute your migration with successful ongoing operations in mind, 2) how to integrate existing operational models (e.g., ITIL) with modern cloud best practices (e.g., DevOps), and 3) how enterprises like National Australia Bank are leveraging the Enterprise DevOps framework to run their business.
Corteva Agriscience, the agricultural division of DowDuPont, produces as much DNA sequence data every six hours as existed in the entire public sphere in 2008. On-premises processing and storage could not scale to meet the business demand. Partnering with Sogeti (part of Capgemini), Corteva replatformed their existing Hadoop-based genome processing systems into AWS using a serverless, cloud-native architecture. In this session, learn how Corteva Agriscience met current and future data processing demands without maintaining any long-running servers by using AWS Lambda, Amazon S3, Amazon API Gateway, Amazon EMR, AWS Glue, AWS Batch, and more. This session is brought to you by AWS partner, Capgemini America. Complete Title: AWS re:Invent 2018: Petabytes of Data & No Servers: Corteva Scales DNA Analysis to Meet Increasing Business Demand (ENT218-S)
In order to increase business agility, drive transformation, and reduce costs, enterprise customers are moving their entire SAP landscapes to AWS. Examples of enterprise customers running their core businesses on AWS include AIG, BP, BMS, Brooks Brothers, and Zappos. In this session, hear why thousands of customers are running SAP workloads on AWS, and understand how AWS helps Fortune 500 companies migrate and transform their environments and accelerate innovation with AWS.
The cloud is enabling the transformation of IT in the enterprise at an unprecedented rate. No longer is IT viewed just as a provider of services, but increasingly it is viewed as a strategically important team that plays a central role in the creation of new business value. How are enterprise IT leaders responding, and what are the people, process and technology shifts needed to build agile and innovative organizations?
As customers migrate to the cloud, IT needs to maintain structured compliance and governance while providing developers with the flexibility to manage cloud resources at scale. In this session, learn how AWS management tools provide a set of services to track changes to resources, audit actions, manage change, and gain insights. We also show how you can use built-in safety controls to automatically perform actions and remediation across multiple regions and accounts. This session is beneficial to IT and system administrators who are interested in using native AWS tools to operate secure and compliant infrastructure on AWS.
In this session, Verizon shares how it uses AWS Systems Manager for inventory, compliance, and patch management solutions. Learn about the challenges that large enterprises face when they attempt to retrofit legacy solutions for cloud environments, and discover best practices for using AWS Systems Manager for minimal access policies, custom Amazon Machine Images, tagging policies, encryption, and more.
The cloud offers a first-in-a-career opportunity to constantly optimize your costs as you grow and stay on the leading edge of innovation. By developing a cost-conscious culture and assigning the responsibility for efficiency to the appropriate business owners, you can deliver innovation efficiently and cost effectively. In this session, we share The Vanguard Group's real-world experience of optimizing their costs, and we review a wide range of cost planning, monitoring, and optimization strategies.
Application modernization projects with AWS start with creating an AWS Landing Zone. Based on AWS best practices, AWS Landing Zones help ensure a secure, performant, highly available, and cost-efficient AWS environment. Common hybrid cloud use cases, such as cloud migration, data center extension, disaster recovery, cloud bursting, and edge computing, require data integration, operations management and monitoring, security, and networking as the foundational components of a hybrid cloud architecture. In this session, we dive deep on the networking, security, account management structure, operating management, and monitoring best practices to build your own AWS Landing Zone that can be extended into your data center. AWS partner, GreenPages, demonstrates a repeatable hybrid cloud architecture to secure, manage, and integrate your network across on-premises and multiple AWS regions using an AWS Landing Zone. AWS customer, Finch Therapeutics, then discusses how the company utilized the GreenPages hybrid cloud reference implementation to deploy, secure, and manage its hybrid cloud environment.
Many organizations that embark on a journey to the cloud view this effort as an opportunity to transform their legacy operations and development practices. DevOps, Agile software development, and Design Thinking are the popular methodologies for successfully speeding the delivery of new products and features and developing a more customer-centric mindset. In this session, we break down the essential components of each method and provide tips on navigating challenges that are commonly encountered when adopting these methods during a cloud migration.
In this session, learn how to accelerate your journey to the cloud while implementing a cloud-first strategy and without sacrificing the controls and standards required in a large, publicly-traded enterprise. Benefit from the insights developed from working with some of the most recognized brands in the world. Discover how these household names leverage automation, CI/CD, and a modular approach to workload design to ensure the consistent application of their security and governance requirements. Learn which approaches to use when transforming workloads to cloud-native technologies, including serverless and containers. With this approach, business users can finally receive properly governed resources without delaying or disrupting their need for agility, flexibility, and cloud scale. This session is brought to you by AWS partner, 2nd Watch.
When it comes to doing the cloud right, no one size fits all. Yet sometimes organizations become distracted by the day-to-day management of cost, security, and overall operations. They can lose sight of the reasons they chose to embrace the cloud in the first place. How can you possibly manage it all and stay focused on the business outcomes that are most important? In this session, learn how forming a Cloud Center of Excellence (CCoE) has become an increasingly common way to address many of these challenges. When implemented well, the CCoE acts as a bridge, connecting all departments that use, measure, or fund your cloud operation. This session is brought to you by AWS partner, CloudHealth Technologies.
A successful transition to a modern elastic, containerized, microservice architecture requires automating all things, including your monitoring and alerting infrastructure. In this talk, we share some of the techniques and best practices we learned at New Relic for applying "infrastructure as code" (IaC) techniques to monitoring and alerting during our 10-year journey from a single-region monolithic application to a global multi-region deployment of hundreds of microservices. This session is brought to you by AWS partner, New Relic.
DevOps is a powerful movement that can help enterprises speed up their rate of innovation. But many large organizations struggle to implement DevOps at scale due to conflicts (real and perceived) with existing IT processes. Enterprise DevOps is the convergence of the speed and agility from modern development processes with the governance, security, and compliance control from traditional IT operations processes. In this session, learn how to implement enterprise DevOps in your organization through building a culture of inclusion, common sense, and continuous improvement. Also learn how to incorporate the knowledge from subject matter experts across your business into your automated DevOps guardrails to create the positive feedback loop we call "patterns of efficiency." This session contains actionable advice for leaders from IT as well as finance, compliance, and security departments.
Are you an expert data center operations engineer looking to sharpen your AWS skills? Are you an IT operations manager looking to speed up your team's cloud learning curve for operating in a hybrid cloud environment? Are you a DevOps engineer looking to grow your operations experience? This session follows two AWS operations experts throughout their day as they solve real-life problems in complex, enterprise hybrid cloud AWS environments. Expect to learn actionable hacks and tricks that you won't get in standard training classes, practical advice for solving common and not-so-common issues, and insights into the top things our experts wish they knew when they were getting started with AWS.
VMware Cloud on AWS is the best path to move enterprise workloads into AWS. In this technical session, we walk through the VMware Cloud on AWS platform and demonstrate how you can quickly move production workloads to AWS. VMware Cloud on AWS is jointly engineered by VMware and AWS to bring the best aspects of VMware and AWS together into one unified service. Join our product team and be prepared to dive deep into how the product works. This session is brought to you by AWS partner, VMware, Inc.
At re:Invent 2014, we announced AWS Lambda and ushered in a whole new world of application design, one without the need to manage or think about traditional server infrastructure. Since then, serverless has become one of the hottest topics in the industry. Customers like Capital One and Coca-Cola talk about how serverless saved them time and money, helped them reduce their operational burden, and drove developer agility and innovation. What is serverless, and what are the key trends you should be aware of? Where does one start on the journey of building serverless applications? We cover all of this and more in this session.
In this session, we explore landing zone considerations as they apply to compliance and auditing. We include such topics as a repeatable approach to SCP and IAM policy creation, internal separation of duty & "need to know", compliance scope ringfencing, Region scoping, scope of impact limitation, and mandatory access control. We review approaches for log and event analytics and log record lifecycle management (including redaction where necessary) and alerting. We also discuss how compliance assessment tools can be deployed in multi-account environments and their output sensibly interpreted. We encourage you to attend the full AWS Landing Zone track, including SEC303. Search for #awslandingzone in the session catalog.
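The repeatable approach to SCP creation mentioned above can be sketched programmatically. Below is a hypothetical Region-scoping service control policy built in code; the Region list, `Sid`, and function name are illustrative assumptions, not material from the session, and a real deployment would also exempt global services before applying such a policy.

```python
import json

def region_scoped_scp(allowed_regions):
    """Build an SCP document that denies all requests outside allowed_regions.

    Illustrative sketch only: production policies typically add exemptions for
    global services (IAM, CloudFront, etc.) before enforcement.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideApprovedRegions",  # hypothetical Sid
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
                },
            }
        ],
    }

policy = region_scoped_scp(["eu-west-1", "eu-west-2"])
print(json.dumps(policy, indent=2))
```

Generating the document from a function like this, rather than hand-editing JSON per account, is one way to keep Region scoping consistent across a multi-account landing zone.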
Lifion is ADP's next generation platform, born in the cloud and built on an ecosystem of containerized microservices. Initially developed entirely on EC2, Lifion is undergoing a cloud native transformation and embracing AWS managed services. We discuss our strategic architectural objectives, our transformational journey, and our lessons learned in adopting Kinesis, Aurora, DynamoDB, and ElastiCache at scale.
Thomson Reuters is the world's leading source of news and information for professional markets. Our customers rely on us to deliver the intelligence, technology, and expertise they need to find trusted answers. We recently had the opportunity to take part in the divestiture of one of our largest business and technology segments. In this session, learn about the process of separating out and duplicating the fundamental systems that allow technology teams to operate in the cloud. From monitoring, DNS, and networking to service management and automation of standard processes, we discuss the technology that was in place and how this was accomplished with a small team in under three months.
Customers migrating to AWS can use AWS Migration Hub to obtain a single view of all migrations into AWS. In this session, we provide an in-depth walkthrough of migration execution best practices and automated migration tracking. Learn how to use the Migration Hub migration dashboard to quickly understand the current state and velocity of your application migrations and effortlessly provide your CEO, CIO, and other key stakeholders an up-to-date status of migrating your portfolio.
In this session, learn how GoDaddy achieved self-service, standardization, and governance through AWS Service Catalog in the first 100 days of their cloud migration journey. We walk through GoDaddy's use case of how they migrated to AWS with AWS Landing Zone and AWS Service Catalog, and how they used the initial 100 days to establish their Cloud Center of Excellence to increase their speed of delivery and improve performance and reliability, without sacrificing security and financial controls. Complete Title: AWS re:Invent 2018: Drive Self-Service & Standardization in the First 100 Days of Your Cloud Migration Journey (ENT320)
Do you want to have a strong understanding of governance across all of your AWS accounts? Are you struggling to get centralized visibility across your entire organization? Join us in this session as we explore AWS Config, a service that enables centralized governance and resource monitoring. Learn best practices for enabling governance policies through a central account across multiple accounts in your organization, and monitor their compliance status using the multi-account, multi-region data aggregation capability. Also learn about recent launches and how customers are using AWS Config in their enterprises today.
Learn how Cox Automotive started its journey with GitHub Enterprise. Hear how the company improved its processes around managing GitHub Enterprise on AWS and its plans to streamline operations even further in the future. Millions of developers and thousands of businesses rely on GitHub to collaborate on code and build better software faster. GitHub Enterprise is the self-hosted solution for businesses that you can deploy and manage in your own secure environment, and what better place to do that than on AWS. This session is brought to you by AWS partner, GitHub.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run HPC applications requiring high levels of inter-instance communications, like computational fluid dynamics, weather modeling, and reservoir simulation, at scale on AWS. It uses a custom-built operating system bypass technique to enhance the performance of inter-instance communications, which is critical to scaling HPC applications. With EFA, HPC applications using popular HPC technologies like Message Passing Interface (MPI) can scale to thousands of CPU cores. Get a deep dive on EFA and learn how to use EFA to enhance application performance for your HPC workloads.
Learn how Wellington Management, a global investment management firm that manages more than 1 trillion USD on behalf of its clients, is executing an all-in strategy to exit all of its physical data centers by 2019. The migration includes both commercial applications and a large number of custom-developed analytical, portfolio management, and trading applications. We share the lessons learned, both positive and constructive, by a team that has been on this journey for over five years. We also discuss usage of many key AWS services, including Amazon Virtual Private Cloud, AWS Direct Connect, Amazon EC2, Amazon ECS, AWS Lambda, Amazon Redshift, Amazon Relational Database Service, and others.
Customer demands for higher levels of service and value, constantly evolving technology capabilities, and stringent regulatory requirements are all powerful forces reshaping retail banking. Built exclusively on AWS, Starling Bank's 100% cloud-based, mobile-only banking solution satisfies regulators in terms of its resilience, security, and reliability. It also satisfies consumers by giving them greater control over their data, streamlining the account opening process, accelerating payments, and providing access to innovative new services developed from scratch with open APIs, a developer platform, integrations with Apple Pay, Google Pay, and Fitbit Pay, and a custom backend ledger and payments integration. Starling Bank is leading the open banking revolution. In this session, learn how Starling Bank delivers value to their customers and innovates at a very fast pace in a sector that can be slow to evolve.
How can Financial Services companies meet growing customer demands for personalized, high-quality service while satisfying regulatory obligations? In this session, learn how Amazon Connect, a self-service, cloud-based contact center, can integrate with machine learning services on AWS such as Amazon Transcribe, Amazon Comprehend, and Amazon Lex to enable financial institutions to deliver transformational omni-channel experiences to their customers while complying with regulations like MiFID II, the GDPR, and the SEC's data retention rules. Learn how to use Amazon Connect to easily set up a cloud-based contact center solution that scales to support businesses of any size. Then learn how to integrate Amazon Connect with machine learning services on AWS to make contact center content available for search and analysis by natural language processing tools, which can yield valuable insights into customer sentiment, customer preferences, and the most common issues customers raise during service interactions. Complete Title: AWS re:Invent 2018: Financial Svcs: Build Customer-Centric Contact Centers with Amazon Connect & Machine Learning (FSV301)
As financial institutions look to accelerate and scale their use of machine learning, they need to address questions related to specific results, such as the version of the code and the data that lead to a particular inference. The use of disparate and increasingly non-traditional data sources for activities such as targeted marketing, fraud detection, and improved returns is driving a need for structured development of machine learning models. In this session, we'll discuss how we can use a combination of AWS services including Amazon SageMaker, AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline to create a workflow that will help financial institutions meet their requirements and drive business results.
Late in 2017, Mutual of Omaha began a cloud journey to modernize its legacy contact centers. Using Amazon Connect, supported by Amazon Lex, Amazon Polly, AWS Lambda, and Kibana, Accenture helped Mutual of Omaha improve customer engagement, develop self-service features using leading-edge speech recognition, and build powerful analytics to continuously drive positive change. Mutual of Omaha plans to reduce TCO annually with Amazon Connect compared with its legacy solution. As of August 2018, three contact centers are live in Amazon Connect, with several more scheduled to go live in 2018. This session is brought to you by AWS partner, Accenture.
With over $5 trillion in assets under management, Vanguard requires a secure, flexible, and fast data and analytics platform. By deploying Tableau on AWS, Vanguard was able to move from their private cloud to the AWS Cloud, and significantly reduce the administrative workload of Vanguard's IT team, allowing them to focus on innovating. In this session, learn how mission-critical processes such as configuring deployments, adding nodes, and creating backups are automated via scripts. Expect to leave with a clear picture of how a global deployment of Tableau on AWS supports peak surges and avoids the sunk costs of on-premises hardware. This session is brought to you by AWS partner, Tableau Software.
Curious to understand how operating model changes are key to cloud adoption? Last year, Vanguard presented its journey to the cloud. CTO Jeff Dowds anticipated that there was a lot more work to do organizationally, especially around operating models. In this session, Jeff discusses the lessons learned from implementing new organizational structures tailored toward delivering business value in the cloud. You have an opportunity to learn how to run and optimize cloud operations efficiently. One of the core tenets of DevOps is continuous improvement and continuous learning. As enterprises mature in their cloud journey, their focus shifts from 'how do we build in the cloud' to 'how do we optimize for the cloud'. Vanguard discusses its journey into the adoption of chaos engineering in a highly regulated environment where testing in production is typically not allowed. This session is brought to you by AWS partner, Deloitte Consulting LLP.
In this session, learn how Supercell architected its analytics pipeline on AWS. We dive deep into how Supercell leverages Amazon Elastic Compute Cloud (Amazon EC2), Amazon Kinesis, Amazon Simple Storage Service (Amazon S3), Amazon EMR, and Spark to ingest, process, store, and query petabytes of data. We also dive deep into how Supercell's games are architected to accommodate scaling and failure recovery. We explain how Supercell's teams are organized into small and independent cells and how this affects the technology choices they make to produce value and agility in the development process.
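One building block of an ingestion pipeline like the one described above is partition-key design: Amazon Kinesis routes each record to a shard by the MD5 hash of its partition key, so keying events by player ID keeps one player's events ordered within a single shard. The sketch below mimics that routing locally; it is an illustration under stated assumptions, not Supercell's actual code, and real Kinesis splits a 128-bit keyspace by hash range rather than by modulo.

```python
import hashlib

def shard_for_key(partition_key, shard_count):
    """Mimic Kinesis-style routing: hash the partition key into a shard index.

    Simplified: real Kinesis assigns each shard a contiguous range of the
    128-bit MD5 keyspace instead of taking a modulo.
    """
    digest = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return digest % shard_count

# Hypothetical game events keyed by player ID, so each player's events
# land on (and stay ordered within) one shard.
events = [("player-42", "battle_start"), ("player-42", "battle_end"),
          ("player-7", "login")]
for player_id, event in events:
    print(player_id, event, "-> shard", shard_for_key(player_id, 4))
```

The key design choice is that the partition key encodes the ordering unit (here, a player), while overall throughput scales by adding shards.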
Activision Blizzard provides a real-time, immersive, second screen Alexa experience for Call of Duty WWII players in a first-to-market major title integration. In this session, we illustrate how Activision Blizzard leverages AWS services to power the Call of Duty Alexa skill, providing real-time, 1:1 personalized interactive answers and coaching. Participants gain an understanding of how AWS Lambda, Amazon CloudFront, Amazon S3, Amazon Polly, and Alexa skills and management are used to deliver AI-generated, customized responses to user requests at scale, giving Call of Duty Alexa users a competitive and fun advantage. Complete Title: AWS re:Invent 2018: Activision Blizzard: Giving Call of Duty Gamers an Edge with Alexa & AWS (GAM302)
Understanding gamer behavior is critical in acquiring, retaining, and monetizing users effectively. The cycle requires constant fine-tuning through expensive and complex live operations to offer fresh, fun, and challenging experiences. In this session, learn how Rovio uses machine learning (ML) to make this process faster, more efficient, and more accurate. Learn how to leverage AWS compute, analytics, and database services, such as Amazon S3, Amazon EC2, Amazon EMR, Amazon Athena, Amazon DynamoDB, and Amazon Redshift, to build an ML workflow that predicts the future interests and behaviors of gamers to better serve your needs and theirs.
Greater China Region
In this session, we provide an update on the AWS China Region, and we discuss the business and technical best practices occurring there. Whether you're new to the AWS China Region or you have familiarity with it, you will find useful information. For those who are new, learn how to register an AWS China account, complete China's ICP filing, optimize network performance between China and the globe, and more. For those who already have experience with the AWS China Region, learn about the new services launched there in the past year.
Global Partner Summit
In this session, we describe the steps for listing your product on AWS Marketplace. Learn how leading ISVs are reaching new customers and decreasing the time it takes to close transactions using AWS Marketplace.
In this session, learn about the new, seller-specific features in AWS Marketplace that make it easy for sellers to close transactions with over 190K active customers. Learn how to publish and update products as part of your feature release process with Self Service Listings. We review how you can target specific customers with special or customized pricing using Seller Private Offers and how you can accelerate your contract negotiations with the Enterprise Contract for AWS Marketplace. Lastly, we discuss how to combine these and other AWS Marketplace features to grow your business by reaching new buyers and converting perpetual licensed customers to a subscription model.
There is no single approach to building SaaS applications on AWS. Domain, compliance, performance, legacy considerations, and business forces all play a big role in shaping the architecture of your solutions. While there are many strategies for implementing SaaS on AWS, there are some common architectural patterns that are used to address the varying needs of SaaS providers. In this session, we review in detail a collection of SaaS reference architectures that represent a spectrum of approaches to addressing identity, onboarding, storage partitioning, tenant isolation, billing, deployment, regional distribution, and operational models. Our goal is to provide a menu of concrete solutions that can provide insights into how AWS constructs are leveraged to realize SaaS best practices on AWS.
Come to this session to learn a new approach to reducing risk and costs while increasing productivity, organizational agility, and customer experience, resulting in a competitive advantage and associated revenue growth. We share how a de-identified data lake on AWS can help you comply with General Data Protection Regulation (GDPR) and California Consumer Protection Act requirements by solving the issue at its causal element. Complete Title: AWS re:Invent 2018: Data Privacy & Governance in the Age of Big Data: Deploy a De-Identified Data Lake (GPSTEC303)
Customers have compelling business reasons to modernize and migrate mainframe workloads to AWS. Mainframes typically process complex and critical applications. Fortunately, we have accumulated experience and learned lessons based on the many successful customer modernization projects to AWS. In this session, we present patterns and best practices that facilitate successful mainframe to AWS initiatives.
In this session, we review how technology and consulting partners can utilize AWS PrivateLink, a networking service that allows for a service behind a load balancer to be privately placed into other VPCs as well as on-premises. You can use PrivateLink to help scale a SaaS service, simplify microservices, simplify the network connectivity of managed service providers, and create a more secure environment for partner products inside customer VPCs. In this session, we focus on the design and service architecture requirements as well as the business considerations for implementing PrivateLink for your product or service. We also hear from APN Partner, Snowflake, and its customer, ARC, about how they deployed PrivateLink.
Everywhere there is talk of modernization, containers, Kubernetes, and 900K containerized applications on Docker Hub. Yet only 35% of container workloads are in production. In this session, we explore why so many modernization journeys fail to move from the proof of concept or evaluation phase into production, and how application platforms are helping customers succeed. We explore how application platforms are mitigating the complexities and pitfalls, and how they assist enterprise customers not only in modernizing workloads but building hybrid application workloads and accelerating cloud adoption.
You've spent the time designing, architecting, setting up, and configuring your Kubernetes cluster. Now, it's on to day two. "Day two" refers to the functions of scaling, optimizing, monitoring, securing, and in general keeping the lights on. In this talk, we discuss the tools that you have available to help you build a reliable and resilient Kubernetes cluster and run workloads in production. We discuss how to control the network, secure your environment using threat detection, scan your containers for vulnerabilities, use monitoring tools, and create scalable containers and clusters.
In this session, we cover architecture opportunities available through the partner network, with solutions such as CloudBees Jenkins, BlazeMeter, Runscope, and others, along with AWS services such as AWS CodeBuild to leverage capabilities included with Amazon EC2 Spot instances. We walk through development, build, and deployment opportunities to leverage different architectural choices best suited to customer designs and requirements.
When you consider migrating your on-premises storage workloads to AWS, it's important to consider both performance and features. In this session, you learn how to use I/O profiling before you move your workload to AWS in order to understand your performance needs. Learn to translate your performance and feature requirements into solutions that might include AWS services and partner solutions. In addition, we show you how to keep monitoring your storage workload once you're running on AWS.
There is a lot of interest these days in migrating data from commercial relational databases to open-source relational databases. PostgreSQL is a great choice for migration, offering advanced features, high performance, rock-solid data integrity, and a flexible open-source license. PostgreSQL is compliant with ANSI SQL. It supports drivers for nearly all development languages, and it has a strong community of active committers and companies to provide support. In this talk, we demonstrate an overall approach for migrating an application from your current Oracle database to an Amazon Aurora PostgreSQL database.
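Part of any Oracle-to-PostgreSQL migration is rewriting dialect-specific SQL, which the AWS Schema Conversion Tool automates for most cases. As a hedged, toy illustration of the kind of rewriting involved (not a complete or production converter, and not the tool's actual implementation), the sketch below maps two common Oracle expressions to their PostgreSQL equivalents:

```python
import re

def convert(sql):
    """Rewrite a couple of common Oracle idioms into PostgreSQL syntax.

    Illustrative only: NVL() becomes COALESCE(), and SYSDATE becomes
    CURRENT_TIMESTAMP. Real conversions must also handle PL/SQL, sequences,
    hierarchical queries, and many other dialect differences.
    """
    sql = re.sub(r"\bNVL\(", "COALESCE(", sql)
    sql = re.sub(r"\bSYSDATE\b", "CURRENT_TIMESTAMP", sql)
    return sql

print(convert("SELECT NVL(name, 'n/a'), SYSDATE FROM employees"))
```

Seeing the mechanical nature of such rewrites helps explain why much of a migration's effort goes into the remaining non-mechanical pieces, such as stored procedures and application-side query logic.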
In this session, we dive deep on best practices and design considerations for running Microsoft SQL Server on AWS. We cover how to choose between running SQL Server on Amazon EC2 and Amazon RDS, how to optimize the performance of SQL Server on AWS, how to leverage the new Optimize CPU feature, and how to deploy SQL Server on Linux. We also review best practices for storage, monitoring, availability, security, and backup and recovery for SQL Server.
Blockchain continues to be called the next generation of technology, so why does it mystify so many? In this session, we discuss how AWS and its ecosystem will help deliver value beyond just infrastructure for blockchain. We include the blockchain competency announcement, the blockchain value proposition broken down, a customer story involving Intel and T-Mobile, and a blockchain delivery kit featuring Accenture and AWS.
In this session, learn how Peak's Artificial Intelligence System (AIS) embeds Amazon SageMaker to solve business problems with outstanding results. We show you how Peak worked backwards from two customer problems to create a machine learning (ML) solution that used multiple models trained and then deployed on Amazon SageMaker. We highlight the challenges of classifying PII data and integrating data from multiple sources. Next, we walk through the ML model training phase for each customer, showing you how new data sources were used to improve the accuracy of the ML models. Finally, the results: Regit and Footasylum were able to use the intelligent predictions provided by Peak.AI to deliver a personalized service to their customers, resulting in a 30% increase in revenue.
Blockchain is a distributed ledger-based technology that enables you to uniquely solve inefficiencies in business networks. However, like any exchange of value, it makes sense only when used by multiple parties. With the myriad of existing protocols and proposed applications, it can be difficult to decide on the right approach to implement a blockchain solution that best fits a given use case. In this talk, we dissect use cases and blockchain architectures built for multi-party consortiums in energy and financial services sectors. Our partners GuildOne and Kaleido highlight the architectural approaches to shared IT and consortium building using Corda and Ethereum protocols.
Demand for new products, lower prices, and higher quality drives manufacturers to invest in tools that can help them compete in a global marketplace. The availability of IoT data and edge computing is helping manufacturers create new business models, add new functionality into existing products, create more accurate maintenance predictions, and optimize product design. In this session, we explore how AWS IoT, AWS Greengrass, and machine learning (ML) impact product design and production. Topics include: IoT, secondary sensing, IT vs. OT networks, ML at the edge, and digital twin. We discuss AWS services like AWS IoT, AWS IoT Analytics, AWS Greengrass, and Amazon SageMaker, and we describe both AWS Partner Network (APN) Technology Partner solutions and industry-specific AWS reference architectures.
Many businesses need to quickly and reliably produce a transcript from audio- or video-based content. Up until now, businesses had to incur both the expense and the lengthy process of hiring staff or a service to transcribe this content. Moreover, to produce a multi-language version of the content required translating it into the target language and possibly over-dubbing the original content with a new audio track. In this session, we walk you through the capabilities, process, and coding approach for creating subtitled and translated videos using Amazon Transcribe, Amazon Translate, and Amazon Polly.
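After Amazon Transcribe returns word-level timestamps, a subtitling pipeline like the one described above groups them into cues in a subtitle format such as SRT. The sketch below shows that final formatting step; the cue text and timings are made-up examples, and the upstream Transcribe/Translate calls are omitted.

```python
def srt_timestamp(seconds):
    """Format a time in seconds as the HH:MM:SS,mmm form that SRT requires."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_cue(index, start, end, text):
    """Render one numbered SRT cue: index, time range, then the subtitle text."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

# Hypothetical cue built from Transcribe-style word timings.
print(srt_cue(1, 0.0, 2.5, "Hello, and welcome to re:Invent."))
```

For a translated subtitle track, the same cue timings are reused with text returned by Amazon Translate, which is what keeps the subtitles synchronized with the original audio.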
Modernization involves implementing business processes and technology that provide your business applications with high availability, agility, and elasticity. Nowhere is this more important than in breaking apart the monolith. Modernizing an application as part of a migration can be extremely successful if you follow the AWS migration methodology of 'discover, plan, migrate, and optimize' as you move that application to the cloud. In this session, we share what we learned from over 400 successful migrations. We also show you how to virtually break a monolith into a modernized architecture as part of the planning phase and accelerate your migration using container technologies and application discovery tools.
Take a journey with Preston, an Amazon Sumerian host, to diagnose and explore a 3D printed jet engine in virtual reality. Preston follows your commands to control and explore different parts of the engine. Preston can also show you the future state of your machine in virtual reality and provide recommendations by analyzing data collected from a physical jet turbine engine using IoT sensors. Learn how to build your own virtual assistant with Amazon AI services, AWS IoT services, and Amazon Sumerian virtual reality.
Enterprises are seeing big business benefits by moving their SAP workloads to AWS. However, the migration to AWS is often just the first step in their innovation journeys. In this session, we share how enterprises are realizing their business transformations in four major areas: big data & analytics, IoT, apps & APIs, and DevOps, supported by a solid foundation of machine learning and compute services on AWS. We provide demonstrations so you can learn firsthand how to help your customers innovate using AWS solutions with SAP applications. Expect to leave this session with reference architectures and best practices for implementing these innovations in their SAP landscapes.
IoT devices often reside in environments where many people can access them. Join us to learn what you can do to protect the data and credentials on IoT devices when they are in the field and understand common attack vectors for IoT devices. Hear from our APN partner Zymbit on how their hardware-based security components can be integrated with AWS Greengrass and AWS IoT.
Complex applications generally require a way to provision resources at scale to enable an organization to onboard customers in a frictionless way while remaining operationally efficient. In this session, we describe how you can architect a control plane built on AWS that is responsible for provisioning and maintaining infrastructure and application resources for multiple customers across a number of AWS accounts, VPCs, and AWS services.
In this hands-on session, we crack open the IDE and transform a SaaS web app comprised of several monolithic single-tenant environments into an efficient, scalable, and secure multi-tenant SaaS platform using ReactJS and NodeJS serverless microservices. We use Amazon API Gateway and Amazon Cognito to simplify the operation and security of the service's API and identity functionality. We enforce tenant isolation and data partitioning with OIDC's JWT tokens. We leverage AWS SAM and AWS Amplify to simplify authoring, testing, debugging, and deploying serverless microservices, keeping operational burden to a minimum, maximizing developer productivity, and maintaining a great developer experience.
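The tenant-isolation step described above hinges on extracting a tenant claim from the JWT that Amazon Cognito issues and scoping every data access by it. The sketch below shows only that claim-extraction and key-scoping idea in Python rather than NodeJS; the token is fabricated and unsigned, the `custom:tenant_id` claim name is an assumption, and production code must verify the token's signature against the issuer's JWKS before trusting any claim.

```python
import base64
import json

def tenant_from_jwt(token):
    """Extract a tenant claim from a JWT payload (signature NOT verified here).

    A JWT is three base64url segments: header.payload.signature. This sketch
    decodes only the payload; real code must validate the signature first.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["custom:tenant_id"]  # hypothetical custom claim name

# Build a fabricated, unsigned token for illustration.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).decode().rstrip("=")
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "user-1", "custom:tenant_id": "acme"}).encode()
).decode().rstrip("=")
token = f"{header}.{payload}."

tenant = tenant_from_jwt(token)
print("partition key prefix:", f"TENANT#{tenant}")
```

Prefixing every partition key with the tenant identifier taken from the verified token, rather than from client input, is what enforces data partitioning even when tenants share tables.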
In this Healthcare Leadership Session, learn our vision for continued innovation and key cloud computing trends for healthcare providers and payers. Come hear from our AWS Healthcare Business Development leads about applications that healthcare companies like GE, Cerner, and Philips have built on top of AWS. Learn how you can use AWS services to support clinical messaging standards such as HL7 and FHIR, and Amazon SageMaker to help with clinical diagnostics and reimbursement. Also learn how the tools that AWS provides can help you meet your regulatory needs to support local data privacy laws.
In this session, learn how to architect on AWS for healthcare compliance. Join Pat Combes, AWS Healthcare Technical Lead, and hear the latest on AWS HIPAA-Eligible Services, the European General Data Protection Regulation (GDPR), and ISO 13485. Learn about some general patterns and common architectures that you can use to decouple protected data from processing and orchestration. Understand how to track where data flows through automation, and learn how to have logical boundaries between protected and general workflows.
Maintaining a compliant environment is critical for regulated industries such as healthcare, but with the advent of GDPR and other regional data privacy frameworks, compliance is becoming just another cost of doing business. In this session, we dive deep into how Cloudticity built the Cloudticity Oxygen managed services framework as an example of what a compliance framework looks like, and how maintaining a compliant posture, and being able to prove that to auditors and regulators, can and should be a native part of your infrastructure. Compliance doesn't need to be something you add on. It should be deeply ingrained in your environment. Learn specific AWS services and techniques to track and maintain compliance in a fully automated manner directly from Cloudticity's founder. This session is brought to you by AWS partner, Cloudticity.
In this session, learn how to better analyze your data for patterns and inform decisions by pairing relational databases with a number of AWS services, including the graph database service, Amazon Neptune. Additionally, hear about the use of AWS Glue and Apache Ranger for data cataloging and as a baseline for query and dataset resolution. Learn about the use of AWS Fargate and AWS Lambda for serverless provisioning of complex data and how to do data rights management at scale on an enterprise data lake. As a case study, hear how Change Healthcare is building an Intelligent Health Platform (IHP) using these services to help standardize and simplify a number of healthcare workflows, including payment processing, which have traditionally been both complex and disconnected from healthcare event data. Complete Title: AWS re:Invent 2018: Data Patterns & Analysis with Amazon Neptune: A Case Study in Healthcare Billing (HLC303)
Patterns of IoT project success are starting to emerge across industries and project types. In this session, we identify and review high-level challenges, and we describe the most common solutions to those challenges. Leave this session with an understanding of the common phases and personae necessary for your project as well as general guidelines for orienting your project and organization toward success. A representative from Pentair discusses the company's IoT project as an example case study.
The intersection of AI and IoT presents new opportunities to create value for your business, capturing new insights from the vast amounts of IoT data available, which results in stronger customer relationships and new efficiencies. In this session, we discuss the future of operations and product development when AI and IoT meet to make autonomous decisions faster and better.
Helping you manage the security of your IoT fleet is a top priority for AWS. You can use AWS IoT Device Defender to audit device fleets for best practices and drift in security settings, detect abnormal device behavior, and receive alerts to investigate issues. In this session, we show you how you can use AWS IoT Device Defender to implement and maintain secure policies and controls to keep data and devices secure. Come away understanding how to spot insecure device configurations and how to set up metrics that can be used to spot DDoS and botnet attacks. We also look at how AWS IoT Device Defender works with AWS IoT Core and AWS IoT Device Management to respond to security alerts.
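As a loose illustration of the behavior-metric idea in the abstract above, the sketch below checks hypothetical device metrics against hypothetical thresholds in plain Python. The metric names and limits are invented for illustration; this is not the actual AWS IoT Device Defender API or its security-profile syntax.

```python
# Hypothetical sketch of Device Defender-style behavior checks: compare
# reported device metrics against a security-profile-like set of thresholds.
# Names and limits are illustrative, not the AWS IoT Device Defender API.

PROFILE = {
    "messages_sent_per_5min": 100,   # sudden spikes can indicate botnet traffic
    "destination_ip_count": 5,       # many new destinations can indicate DDoS participation
}

def violations(metrics: dict) -> list:
    """Return the names of metrics that exceed the profile's thresholds."""
    return [name for name, limit in PROFILE.items()
            if metrics.get(name, 0) > limit]

report = {"messages_sent_per_5min": 4200, "destination_ip_count": 37}
print(violations(report))  # both metrics exceed their thresholds
```

In the managed service, crossing a behavior threshold raises an alert for investigation rather than returning a list, but the underlying comparison is the same.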
Edge computing is all about moving compute power to the source of the data instead of having to bring it to the cloud. The edge is a fundamental part of IoT, and it is not only about connecting things to the internet. In this session, we discuss how AWS Greengrass, which is IoT edge software, can power devices small and large, from a sensor all the way to a wind turbine. With AWS Greengrass, these IoT devices can securely gather data, keep device data in sync, and communicate with each other while still using the cloud for management, analytics, and durable storage. Join us to learn more about the edge of IoT.
IoT deployments often consist of thousands to millions of devices deployed across multiple locations. It is essential to have a solution to track, monitor, and manage your connected device fleets at scale. In this session, learn how AWS IoT Device Management can help you onboard, remotely manage, and monitor your connected devices. We discuss key features of AWS IoT Device Management, including fleet indexing and search, remote commands, over-the-air (OTA) updates, and device monitoring. Learn how AWS IoT Device Management can help you scale your fleets and reduce the cost and effort of managing IoT device deployments. Also, representatives from Hudl, the world leader in sports video analysis, discuss how the company is changing the recording and upload process for high schools across the country with their newest product, Hudl Focus.
In this presentation, we take a deeper look at Amazon FreeRTOS. As OEMs work to squeeze more functionality onto cheaper and smaller IoT devices, they face a series of challenges in development and operations that results in security vulnerabilities, inefficient code, compatibility issues, and unclear licensing. With Amazon FreeRTOS, it is now easier to build, deploy, and update connected microcontroller-based devices quickly and economically, while retaining confidence that the devices are secure. Also, learn how Pentair, a leading water treatment company, is developing an IoT solution with the help of Amazon FreeRTOS and Espressif Systems, a hardware partner.
Edge computing is all about moving compute power to the source of the data instead of having to bring it to the cloud. The edge is a fundamental part of IoT, and it is not only about connecting things to the internet. In this session, we discuss how AWS Greengrass, which is IoT edge software, can power devices small and large, from a sensor all the way to a wind turbine. With AWS Greengrass, these IoT devices can securely gather data, keep device data in sync, and communicate with each other while still using the cloud for management, analytics, and durable storage. Join us to learn more about the edge of IoT.
Whether it's connected cars, smart home devices, or industrial applications, IoT applications are rapidly becoming more intelligent. Edge computing is helping lead this transformation as IoT devices not only collect and transmit data, but also perform predictive analytics and respond to local events, even without cloud connectivity. In this session, learn about ML inference at the edge, why it matters, and how to use it to build intelligent IoT applications. Through customer use cases, we demonstrate how to use AWS Greengrass to locate cloud-trained ML models, deploy them to your AWS Greengrass devices, enable access to on-device computing power, and apply the models to locally generated data without connection to the cloud.
In this session delivered by the VP of AWS IoT, we cover how AWS IoT is being deployed across consumer, commercial, and industrial applications. See how customers are securely connecting and managing devices and creating analytics and machine learning (ML) based on IoT data. AWS IoT applications run in the cloud to enable massive scalability or at the edge to enable real-time local action. Come away with an understanding of how IoT is transforming business and what's new from AWS IoT.
See how our customers are using AWS IoT Analytics.
In this session, we explain how combining IoT and AI technologies, such as computer vision, enables you to increase the productivity of a manufacturing process. Using AWS IoT services and analytics, we show you how to sense and control environmental conditions. Finally, we show you how to quickly transition from a patrol-based model to a notification-based model for replenishment scenarios.
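The patrol-to-notification shift mentioned above can be reduced to a tiny rule sketch: instead of periodically walking the floor, a rule fires when a sensed level crosses a threshold. The bin names and threshold below are made up for illustration.

```python
# Illustrative sketch of notification-based replenishment: a rule flags
# bins whose sensed fill level drops below a threshold, replacing the
# patrol-based model of checking every bin on a schedule. Values invented.

THRESHOLD = 0.2   # notify when a bin falls below 20% full

def needs_replenishment(readings: dict) -> list:
    """Return the bin IDs whose level is below the threshold, sorted."""
    return sorted(bin_id for bin_id, level in readings.items()
                  if level < THRESHOLD)

print(needs_replenishment({"bin-a": 0.85, "bin-b": 0.12, "bin-c": 0.05}))
```

In a real deployment the sensed readings would arrive as IoT telemetry and the flagged bins would trigger a notification action rather than a printed list.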
Edge computing is defined by taking the specific timing-sensitive parts of your application and moving them closer to where they are needed-whether that need is a user or a source of interesting data. In this session, learn how to take advantage of cloud computing at the edge with New Relic. This session is brought to you by AWS partner, New Relic.
Connecting with more people and learning about their challenges so you can inform them of your offerings is vital to fueling the growth of your business. Understanding and tracking all your touchpoints to find the right prospects requires a valuable customer and marketing toolset. In this session, learn how a leading automation company is using NetApp Cloud Volumes and AWS to break free from their data center and reach more clients. They are now able to move their heavily file-reliant database environment to AWS to reach clients that were otherwise unreachable and grow their business. This session is brought to you by AWS partner, NetApp.
Voice is a natural interface to interact not just with the world around us, but also with physical assets and things, such as connected home devices, including lights, thermostats, or TVs. In this session, we discuss how you can connect and control devices in your home using the AWS IoT platform and Alexa Skills Kit. We also hear from customer, VIZIO, on how they are using AWS IoT and Alexa to bring a voice-controlled television experience to their hundreds of thousands of customers.
The overwhelming majority of today's industrial machines are not internet-ready. This is because many industrial machines lack connectivity or have proprietary or industry-specific interfaces. The value of these machines is in their access to data and metadata that can be used to unlock new business opportunities. In this session, we provide a detailed view into how to use AWS Greengrass and AWS Lambda to connect machines with industrial interfaces, such as OPC UA, Modbus, Ethernet/IP, and EuroMap63. We also walk you through a demonstration and show you how to connect to industrial machines and provide data and metadata at the edge and in the cloud.
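Bridging an industrial interface at the edge usually comes down to decoding raw protocol payloads before publishing them. As a minimal sketch of what a Greengrass-hosted function might do for Modbus, the code below packs two 16-bit holding registers into one 32-bit big-endian IEEE 754 float, a common (but device-specific) register convention; the register values are illustrative.

```python
import struct

# Illustrative sketch: a function bridging a Modbus device typically
# receives raw 16-bit holding registers and must decode them before
# publishing. Here, two registers hold one 32-bit big-endian IEEE 754
# float -- a common, but device-specific, convention.

def registers_to_float(high: int, low: int) -> float:
    """Combine two 16-bit registers into one big-endian 32-bit float."""
    return struct.unpack(">f", struct.pack(">HH", high, low))[0]

# Registers 0x42F6, 0xE979 decode to approximately 123.456
print(round(registers_to_float(0x42F6, 0xE979), 3))
```

The same pattern (pack raw words, unpack as the documented type) extends to 32-bit integers, scaled values, and other register layouts found in industrial device manuals.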
The IoT transformation begins at home. Today, device manufacturers and network providers are building groundbreaking connected devices and applications to make your home smarter. In this session, we dive into the best practices and common patterns for building connected devices and applications for a Smart Home. This session focuses on securely onboarding your devices to a consumer's account, leveraging IoT, using ML at the edge, and video streaming for home security. By the end of the session, you'll have a set of best practices for how to build IoT products in the Smart Home. Also hear from Vestel about their smart home application.
AWS IoT Analytics is a fully managed service that makes it easy to run and operationalize sophisticated analytics on massive volumes of IoT data without having to worry about the cost and complexity typically required to build your own IoT analytics platform. It collects and prepares data for analysis and also lets you explore and visualize your IoT data so you can make better and more accurate business decisions for IoT applications and machine learning use cases. Models built and trained in AWS IoT Analytics can be run on connected devices. Join us for a deep dive and demo on how to operationalize your analytical workflows with AWS IoT Analytics.
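The "collects and prepares data for analysis" step can be pictured as messages flowing from a channel through filter and transform activities into a datastore. The sketch below runs that flow locally in plain Python; the messages, field names, and activity logic are invented for illustration and do not use the service's actual activity syntax.

```python
# Conceptual sketch of an IoT Analytics-style pipeline, run locally:
# messages flow from a channel through filter and transform activities
# into a datastore. All values and field names here are illustrative.

channel = [
    {"device": "sensor-1", "temp_f": 68.0},
    {"device": "sensor-2", "temp_f": -999.0},   # sentinel for a bad reading
    {"device": "sensor-3", "temp_f": 75.2},
]

def filter_activity(msg):
    return msg["temp_f"] > -100                  # drop obviously invalid readings

def transform_activity(msg):
    return {**msg, "temp_c": round((msg["temp_f"] - 32) * 5 / 9, 1)}

datastore = [transform_activity(m) for m in channel if filter_activity(m)]
print([m["temp_c"] for m in datastore])  # [20.0, 24.0]
```

In the managed service, the equivalent filter and math activities are declared on the pipeline, and datasets are then queried from the datastore instead of held in a Python list.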
With AWS Greengrass, you can bring local compute, messaging, data caching, sync, and machine-learning inference capabilities to edge devices. Join us in this session to learn about new features that extend the capabilities of AWS Greengrass devices.
AWS IoT services help you connect devices to AWS services and other devices, secure data and interactions, process and act on device data, and enable applications to interact with devices, even when they are offline. In this session, we provide tips for using the portfolio to create applications faster and deploy applications at the edge. We show you how to save time when adding new kinds of devices to your applications.
In this session, we discuss the different ways to understand the state of your operations, how to use AWS IoT services, and how to take appropriate action using AWS IoT services, like the AWS IoT Rules Engine, to improve operational efficiency.
Industrial IoT (IIoT) bridges the gap between legacy industrial equipment and infrastructure and new technologies, such as machine learning, cloud, mobile, and edge computing. In this session, we focus on how you can extract data from your industrial data sources and build operational insights using AWS IoT services. We cover how to bridge traditional on-premises applications and data stores with new cloud-based IoT applications.
Your devices are being shipped across the globe. You have consumers who use their hardware across different countries. How can you build an IoT application that reflects the geographic reach of your devices? In this session, we walk you through the stages of going multi-region with AWS IoT. We first tackle common challenges around setting up your accounts and permissions for AWS IoT. We then dive into different modes of multi-region deployments using multiple AWS services. We also cover the nuances of moving devices across locations and how you can plan, monitor, and execute on your IoT application. Throughout this session, we dive into code and architectures that show the good, the bad, and the ugly of multi-region deployments in IoT, and we share how best to tackle them on day 1 as you take your applications global. We also highlight a customer example from Analog Devices.
In this session, learn our vision for continued innovation in the life sciences industry and how AWS services can help. Gain insight into the key cloud computing trends for biotechnology, pharmaceutical, and medical device companies. Hear how you can use the AWS cloud to accelerate research, modernize clinical trials, build smarter manufacturing processes, and create deeper partnerships with healthcare providers and payers through real world evidence and digital therapeutics.
Pfizer needed the ability to perform rapid analysis on its set of real-world evidence (RWE) data to improve patient outcomes, but its existing platform could not scale to meet its objectives. Pfizer collaborated with Deloitte to transform its real-world data and analytics capabilities, maximizing insights and avoiding duplicative investments by migrating its existing RWE data and analytics environment to the AWS Cloud. Learn how strategies for planning, executing, and validating this migration helped position Pfizer to use the AWS Cloud as the cornerstone of its patient-centric analytics and to expand into new AI/ML capabilities with services such as Amazon SageMaker. This session is brought to you by AWS partner, Deloitte Consulting LLP.
Informatics systems used by research scientists today have significant limitations, since they come from many vendors, use different data formats, and were developed with various UI standards. These limitations create barriers to accessing and integrating heterogeneous, siloed research data in a meaningful way to facilitate innovation and collaboration. In this session, Accenture and Merck discuss the expanded capabilities and benefits-for drug discovery organizations and software providers-of a newly launched research platform that gives the research science world a highly elastic, cloud infrastructure with a single UI and advanced computing power that accelerates drug discovery activities and enables competitive differentiation. This session is brought to you by AWS partner, Accenture.
In this session, learn how to use AWS IoT services to build devices that can be used in regulated industries, like healthcare and pharma manufacturing. Come hear from AWS solutions architects about how you can use AWS IoT Core to connect your devices; AWS Greengrass to build devices with local compute, messaging, data caching, sync, and machine learning inference capabilities; and AWS IoT Analytics to run sophisticated analytics on massive volumes of IoT data without having to worry about the cost and complexity typically required to build your own IoT analytics platform. You also hear how to set up the fine-grained access control, auditability, and automated guardrails necessary for creating and maintaining regulated workloads that follow Good Laboratory, Clinical, and Manufacturing Practices (GxP) and other industry standards and ISOs.
Media & Entertainment
In this wide-ranging keynote session, first hear from AWS VP Carla Stratfold on the major forces affecting the industry, then learn from AWS Global M&E Tech Lead Usman Shakeel about the latest and most exciting releases coming out of re:Invent relevant to the M&E industry. And finally, hear how technical leaders at the forefront of the industry are responding to accelerating changes in the media landscape.
Content lake architecture can evolve the media workflow by providing efficiency from content security all the way to value-added services, such as machine learning and content monetization. In this session, technical leaders from 21st Century Fox, Warner Bros., and Astro Malaysia discuss the migration of their petabyte-scale video libraries (production and distribution archives) to the cloud in order to increase the customer reach and value of their media archives. Discover lessons learned, the TCO analysis around the various storage tiers, and the challenges and best practices from tens of petabytes of ingest, storage, and value-added compute at scale.
Learn how the world's third-largest ticketing company uses AWS Service Catalog to automate its entire PCI-compliant platform to better manage peak demand during major concert ticket sales for some of the world's largest venues, including the 100,000-seat Melbourne Cricket Ground in Australia. In this session, Deloitte's Zack Levy and Ticketek CTO Matt Cudworth discuss taking automation to another level-from manually managing 'hot shows' to using AWS Service Catalog to automate multiple AWS services (Amazon EC2, Amazon Route 53, Amazon VPC, Amazon ELB, and AWS CloudFormation), enabling Ticketek to scale and run multiple hot shows concurrently across multiple jurisdictions. This session is brought to you by AWS partner, Deloitte Consulting LLP.
Join us as we describe the vision and possibilities for platform businesses, including an in-depth look at OpenAP, TV's first open platform for cross-publisher audience targeting. Open platforms are breaking down barriers, enabling companies to connect best-of-breed cloud services to solve problems, respond faster, and create a competitive advantage. The TV industry open platform, built by OpenAP and Accenture, is highly available, with end-to-end security and massive scalability for both advertisers and publishers. By building the application from the ground up and leveraging AWS services, the Accenture team released the product ahead of schedule-only five months from kickoff to launch. This session is brought to you by AWS partner, Accenture.
The world's leading content creators, broadcasters, OTT providers, and distributors rely on Deluxe's experience and expertise to globally create, transform, localize, and distribute hundreds of thousands of assets per month. This scale presents a unique set of challenges and opportunities, further multiplied by the benefits that cloud orchestration, automation, and self-service bring to technical teams. In this session, we dive into how we are applying machine learning to workflows such as fingerprinting and conformance, and specifically how we are operationalizing these as a set of primitives that can be consumed by both customers and services alike. We share our rationale behind our selection of machine learning technologies for workflows such as supply chain, creative and visual effects, where they make sense, and we discuss how we make them available to internal development and data science teams through our MLOps/DevOps pipelines.
Semiconductor design companies, electronic design automation (EDA) vendors, and foundries remain competitive by innovating and reducing time to market. AWS is deeply invested in semiconductor use cases, including EDA, emulation, and smart manufacturing with data lakes and IoT/AI. We care about this because Amazon depends on faster semiconductor innovation from our suppliers and in our own silicon teams. We have a wide breadth of services that directly benefit the entire industry. In this session, learn how to achieve the maximum possible performance and throughput from design and engineering workloads running on AWS. We demonstrate specific optimization techniques and share architectures to accelerate batch and interactive workloads on AWS. We also demonstrate how to extend and migrate on-premises high performance compute workloads with AWS, and how to use a combination of On-Demand Instances, Reserved Instances, and Spot Instances to minimize costs. Finally, hear how semiconductor customers address security as they move to the cloud, including the AWS capabilities and controls available to secure sensitive design IP and strategies for data classification, management, and transfer to third parties.
Manufacturing companies collect a large amount of process data, but common issues, such as disparate data sources, stranded data, and ownership, make it difficult to identify insights. In this session, learn how to build a data lake on AWS using services such as Amazon EC2, Amazon S3, AWS Lambda, and IAM. Also, we review a reference architecture supporting data ingestion, event rules, analytics, and the use of machine learning (ML) for manufacturing analytics. We also discuss how a large printer manufacturer created a data lake by combining streaming manufacturing plant data with batch data from their SAP system and suppliers to create a single source of truth.
AWS global infrastructure continues to innovate and scale. To sustain innovation and growth, Amazon uses AWS to design the next generation of cloud infrastructure. Accelerating the RTL to GDSII workflow, Amazon uses AWS for semiconductor design and Electronic Design Automation (EDA) tools. In this session, we discuss the infrastructure and architectures that our own silicon teams use to design the next generation of cloud computing infrastructure. From switch technology to specialized hardware, the immense capacity, elasticity, and agility that AWS provides is powered by Amazon processors. Through partnerships and collaborations with many EDA vendors and semiconductor customers, Amazon continues to quickly advance technology at an unprecedented pace.
Do you have an idea for an app but don't know where to start? In this session, we walk you through the process of taking your idea to reality, and we show you all the infrastructure you need to understand along the way. We also show you how AWS platform services and SDKs can help you get to a quality release faster and then scale for success with serverless technologies. In addition, we demonstrate how you can build a scalable production-ready app quickly with GraphQL and machine learning capabilities. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Do you wonder what AWS thinks about mobile development? In this session, learn the very latest about the many AWS services that web and mobile developers can leverage to make cloud-enabled development possible, and hear what might be in store for the future.
Building a scalable API for your mobile app can be daunting. Between authentication, authorization, database, real-time and offline considerations and the soft requirements of serverless scalability, expandability, security, and availability, the API can become complicated. In this session, learn how to easily build a serverless GraphQL API for your next app without needing to be an API expert, then integrate it into your app with just a few lines of code. Accelerate your app development by simplifying your backend design.
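One of the "offline considerations" named above is conflict handling: an offline client may sync a write long after the server object has changed. A common resolution strategy, used by AppSync-style optimistic concurrency among others, is version checking. The sketch below is a hypothetical, purely local illustration; the store, keys, and return values are invented.

```python
# Hypothetical sketch of version-based conflict detection, the pattern
# offline-capable sync layers commonly use: a write succeeds only if the
# client saw the latest version of the object. All names are illustrative.

store = {"note-1": {"text": "hello", "version": 3}}

def update(key: str, text: str, expected_version: int):
    """Apply a write only if the caller's version matches the stored one."""
    current = store[key]
    if current["version"] != expected_version:
        return ("conflict", current)            # client must merge and retry
    store[key] = {"text": text, "version": expected_version + 1}
    return ("ok", store[key])

print(update("note-1", "hello world", expected_version=3)[0])  # ok
print(update("note-1", "stale write", expected_version=3)[0])  # conflict
```

A client that receives the conflict response gets the current server object back, merges its change, and retries with the new version number.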
The architecture of modern native mobile apps is an ever-changing field. In this session, we review the latest Android architecture blueprints for native apps and how you can use these architecture components to easily authenticate and engage users, access data, and build responsive mobile apps.
In this fast-moving world of mobile app development, cloud services complement well architected front-end techniques to improve user experience. In this session, you learn how to build a scalable and secure backend for your iOS native app and how to apply the best client-side development techniques to access your backend while providing real-time and offline data, authentication, and analytics that you can leverage to make your app a success.
At Hulu, notifying our viewers when their favorite teams are playing helps us drive growth and improve viewer engagement. However, building this feature was a complex process. Managing our live TV metadata, while generating audiences in real time in high-scalability scenarios, posed unique challenges for the engineering team. In this session, we discuss the challenges in building our real-time notification platform, how Amazon Pinpoint helped us with our goals, and how we architected the solution for global scale and deliverability.
Building real-time collaboration applications can be difficult, and adding intelligence to an app to make it stand out remains a challenge. In this session, learn how to build real-time chat serverless apps infused with AWS machine learning (ML) services. We dive into enhancing a real-time chat application with search capabilities, chatroom bots providing automated responses, and on-demand message translation using Amazon AI/ML services. Complete Title: AWS re:Invent 2018: Bridging the Gap Between Real Time/Offline & AI/ML Capabilities in Modern Serverless Apps (MOB310)
ALDO wanted to improve in-store customer experience by offering rapid personalized assistance. In this session, follow ALDO's journey in adopting GraphQL and serverless technologies for their in-store modern apps. Learn how this global fashion brand is offering elevated real-time, personalized customer experiences while optimizing in-store retail operations. Hear how they integrated with their existing infrastructure, along with other challenges. They also share best practices that they are carrying forward for future apps.
As you move your modern app to production, you need to consider how to scale, secure, and maintain your backend APIs. In this session, we provide some of the tips, tricks, and best practices for running serverless GraphQL APIs reliably on AWS. We cover topics such as versioning, multiple environments, CI/CD, advanced schema design, monitoring, alerting, and advanced search scenarios.
Modern apps require special consideration for the security and privacy of user data, especially in today's compliance-driven world. In this session, we provide some of the common use cases and design patterns to secure user data in a globally available GraphQL API, and discuss best practices for authentication and authorization in AWS AppSync.
In this session, we walk through the fundamentals of Amazon VPC. First, we cover build-out and design fundamentals for VPCs, including picking your IP space, subnetting, routing, security, NAT, and much more. We then transition to different approaches and use cases for optionally connecting your VPC to your physical data center with VPN or AWS Direct Connect. This mid-level architecture discussion is aimed at architects, network administrators, and technology decision makers interested in understanding the building blocks that AWS makes available with Amazon VPC. Learn how you can connect VPCs with your offices and current data center footprint.
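The subnetting step described above (pick an IP space, then carve it up per Availability Zone) can be sketched with the standard library alone. The VPC CIDR and AZ names below are illustrative assumptions, not a recommendation for any particular network.

```python
import ipaddress

# Sketch of VPC subnet planning: carve one VPC CIDR into equal subnets,
# one per Availability Zone. CIDR and AZ names are illustrative.

vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"]

# new_prefix=18 adds two bits, yielding exactly four equal /18 subnets
subnets = dict(zip(azs, vpc.subnets(new_prefix=18)))

for az, subnet in subnets.items():
    print(az, subnet)

# AWS reserves 5 addresses in every subnet, so usable hosts = total - 5
print(subnets["us-east-1a"].num_addresses - 5)  # 16379
```

Planning this arithmetic up front matters because a VPC's CIDR and subnet layout are hard to change once instances, route tables, and peering relationships depend on them.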
In this introductory session, we cover how to secure your resources in the cloud for common AWS workloads such as Amazon EC2 computing, database, and serverless. We cover security best practices recommended by AWS for each workload using simple and effective identity and networking techniques. Learn how and why these controls do what they do, and come away with the ability to interpret and apply AWS identity and network access controls.
Join Dave Brown, VP of EC2 Networking at AWS, to learn about the new services and features we launched this year. Dave also shares our vision for the future of connectivity in the cloud and the ongoing evolution of networking capabilities. Dave covers the entire suite of networking services, including Amazon Virtual Private Cloud (Amazon VPC), Elastic Load Balancing, AWS PrivateLink, VPN, and AWS Direct Connect. In addition, Dave reviews some real-world customer scenarios and how AWS networking solves those in a secure, reliable, flexible, and highly performant way.
AWS PrivateLink is a networking service that allows you to increase the security, scale, and resiliency of your services. In this session, we review the way AWS PrivateLink works, best practices, and how to increase availability and security. We review how to set up both the consumer and provider sides of PrivateLink, use cases, and interoperability with other AWS services. Whether you want to consume services in a more scalable and private way or you have services you want to share with others, we help you understand best practices for AWS PrivateLink.
Amazon Virtual Private Cloud (Amazon VPC) enables you to have complete control over your AWS virtual networking environment. Given this control, have you ever wondered how new Amazon VPC features might affect the way you design your AWS networking infrastructure, or even change existing architectures that you use today? In this session, we explore the new design and capabilities of Amazon VPC and how you might use them. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Many enterprises on their journey to the cloud require consistent and highly secure connectivity among their existing data center, their staff, and AWS environments. In this session, we walk through the different architecture options for establishing this connectivity using AWS VPN solutions. With each option, we evaluate the considerations and discuss risk, performance, high availability, encryption, and cost.
The AWS Global Network provides a secure, highly available, and high-performance infrastructure for customers. In this session, we walk through the architecture of various parts of the AWS network, such as Availability Zones, AWS Regions, our Global Network connecting AWS Regions to each other, and our Edge Network, which provides Internet connectivity. We explain how AWS services such as AWS Direct Connect and Amazon CloudFront integrate with our Global Network to provide the best experience for our customers. We also dive into how the AWS Global Network connects to the rest of the Internet through peering at a global scale. If you are curious about how AWS network infrastructure can support large-scale cat photo distribution or how Internet routing works, this session answers those questions. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
As customers put more workloads into AWS, the number of Virtual Private Clouds (VPCs) a customer needs to manage grows. Scaling out an AWS environment can create challenges in manageability, workload segmentation, and security. SD-WAN solutions offered by AWS Partners can enable organizations to scale up the number of VPCs as needed while segmenting and isolating workloads for easier management, application quality monitoring, and security. In this session, we walk through a customer example of how an SD-WAN implementation simplified the management of a multi-VPC footprint while also improving application performance to WAN-connected branch offices. Complete Title: AWS re:Invent 2018: [REPEAT 2] Use SD-WAN to Manage Your AWS Environment & Branch Office Connectivity (NET311-R2)
Making decisions today for tomorrow's technology-from DNS to AWS Direct Connect, ELBs to ENIs, VPCs to VPNs, the Cloud Network Engineering team at Netflix are resident subject matter experts for a myriad of AWS resources. Learn how a cross-functional team automates and manages an infrastructure that services over 125 million customers while evaluating new features that enable us to continue to grow through our next 100 million customers and beyond.
With Amazon Virtual Private Cloud (Amazon VPC) you can build your own virtual data center networks in seconds. Every VPC is free, but it comes with enterprise-grade capabilities that would cost millions of dollars in a traditional data center. How is this possible? Come hear how Amazon VPC works under the hood. We uncover how we use Amazon-designed hardware to deliver high-assurance security and ultra-fast performance that makes the speed of light feel slow. Leave with insights and tips for how to optimize your own applications, and even whole organizations, to deliver faster than ever.
Everyone appreciates the power of the global Internet while still understanding its performance and latency limitations as a best-effort system. Although many organizations understand that direct connectivity into the public cloud helps ensure network consistency, it's not always clear how integrated combinations of backbone, routing, and orchestration can optimize cloud applications. In this session, CenturyLink presents a number of innovative approaches to optimizing application deployment through control of the underlying network infrastructure. Join CenturyLink for a demonstration of emerging workloads, and the difference your cloud connection can make when milliseconds matter. This session is brought to you by AWS partner, CenturyLink.
VMware Cloud on AWS enables customers to have a hybrid cloud platform by running their VMware workloads in the cloud while having seamless connectivity to on-premises and AWS native services. In this session, we do a technical deep dive on SDDC networking and NSX-T's recent announcement of full routing over AWS Direct Connect to enable optimized migrations and cloud extension use cases. We also demonstrate a live vMotion of an on-premises workload to a VMware SDDC cluster on AWS with minimal to no network disruption over AWS Direct Connect.
As Vanguard and Bloomberg moved from a small number of large accounts to a large number of small accounts, their use of AWS PrivateLink reduced blast radius at the management plane but introduced significant complexity at the network layer. In this session, we introduce the type of network segmentation required to implement a zero-trust network for a highly regulated financial investment company like Vanguard, an approach that adds further complexity of its own.
This session introduces AWS Global Accelerator, a new global service that enables you to optimally route traffic to your multi-regional endpoints via static Anycast IP addresses announced from the expansive AWS edge network. We walk through the features and customer use cases for Global Accelerator, with several examples demonstrating how you can achieve near-zero application downtime and reduce latency for your global applications. We also walk you through the architecture and include a demo of the workflow. Attend this session if you are looking for ways to accelerate the performance of your global applications, achieve high availability for your mission-critical applications, or easily manage multiple IP addresses through a static Anycast IP that fronts your applications.
To deliver your applications to millions of users you need to scale your network across thousands of VPCs. AWS Transit Gateway helps scale your workloads and vastly simplifies how you connect your AWS networks. AWS Transit Gateway also makes it easier to connect your on-premises networks across those VPCs. Using secure operational controls, you can implement and maintain centralized policies to connect Amazon VPCs with each other and with your on-premises networks. This session will enable you to get started quickly and get an insight into the various capabilities that AWS Transit Gateway introduces.
[NEW LAUNCH!] AWS Transit Gateway and Transit VPCs, Reference Architectures for Many VPCs (NET402) In this session, we review the new AWS Transit Gateway and new networking features. We compare AWS Transit Gateway with Transit VPCs and discuss how to architect your accounts and VPCs. This session will be helpful if the developers have been let loose and you are planning lots of VPCs or accounts. How should you connect them, what limits do you need to be aware of, and how does routing work with many VPCs? We dive into the details of recent launches and how to work with concepts like Transit VPCs, account strategies, scaling services, firewalls, and Direct Connect gateways to solve the problems of many VPCs.
AWS Direct Connect provides a more consistent network experience for accessing your AWS resources, typically with greater bandwidth and reduced network costs. This session dives deep into the features of AWS Direct Connect, including public and private virtual interfaces, Direct Connect gateway, global access, local preference communities, and more.
Elastic Load Balancing (ALB & NLB) automatically distributes incoming application traffic across multiple Amazon EC2 instances for fault tolerance and load distribution. In this session, we go into detail on ELB configuration and day-to-day management. We also discuss its use with Auto Scaling, and we explain how to make decisions about the service and share best practices and useful tips for success. Finally, Netflix joins this session to share how it leveraged the authentication functionality on Application Load Balancer to help solve its workforce identity management at scale.
Oil & Gas
BP is a global energy company with a wide reach across the world's energy system. Its network spans 75 countries, providing connectivity to 400 offices, thousands of retail sites, production facilities, remote exploration locations, and data centers. To become a cloud-first company supporting thousands of remote sites, BP had to re-architect and evolve its operating model for delivering network services. In this session, representatives from BP share best practices for delivering high-bandwidth, low-latency interconnectivity between BP and AWS. They outline the benefits of using native AWS networking and security features, and they share the lessons they learned around security segmentation, access policies, trust boundaries, and connectivity to untrusted networks. Join BP to learn how to prepare for mass migration to the cloud and enable at-scale cloud-native application development.
Power & Utilities
In this session, we introduce you to the application of Intel and Amazon technologies in building smart home solutions benefiting both consumers and power utilities. We cover the opportunities and challenges associated with balancing savvy consumers and energy supply and demand amidst the increasing deployment of distributed energy resources (DERs). Learn about valuable use cases and a multi-tenant services architecture that supports distributed analytic workloads and services from the edge to the cloud. Come and learn how to start applying these joint technologies to create new possibilities with advanced energy management, control, and monetization. This session is brought to you by AWS partner, Intel.
In this session, we discuss how to use natural language processing (NLP) to analyze data sources such as user sentiment, conversational intent, and social media. Machine learning solutions surface deeper insights and relationships in text, reducing analysis time from weeks to days. We highlight how quickly a machine learning-based solution can be deployed. We dive deep into AWS services, such as AWS Lambda, Amazon SageMaker, Amazon Comprehend (classification and topic modeling), and Amazon Transcribe, to rapidly develop a natural language search and analysis application that meets such requirements. We also demonstrate how to ingest social media tweets and generate a sentiment score to engage with customers more effectively. Complete Title: AWS re:Invent 2018: Improving Customer Experience: Enhanced Customer Insights Using Natural Language Processing (PUT301-i)
Discover how AWS IoT services, Amazon Alexa, and the AWS Cloud help businesses make millions of homes safer, more efficient, and healthier for their occupants. Initial pilots with a low-income customer base saw 6-20% reductions in electricity bills using a minimal customer configuration of just Alexa and very basic smart device control. Discover how companies are using Amazon Experts to deploy Alexa and AWS IoT into millions of smart homes across wide geographies. Learn how AWS IoT Core, digital twins, AWS Lambda, and data analytics are used to execute real-time coordination logic, implement scheduled actions, and drive event-based bidirectional control architected to manage millions of customers simultaneously. Learn how coupling this in-home IoT capability with cloud-based big data and ML algorithms enables new customer-centric outcomes, including appliance performance detection, improving the value of solar assets, security monitoring, and better ways to schedule daily energy-intensive tasks such as EV charging and water heating.
The AWS suite of managed services for IoT enables companies to quickly and easily deploy devices to the edge and synchronize their industrial time-series data from multiple sites to the AWS Cloud, where advanced analytics and machine learning can generate valuable insights about their business. In this session, learn how EDF Renewables used AWS Greengrass, AWS IoT Core, AWS IoT Analytics, and AWS Lambda to facilitate the collection, aggregation, and quality assurance of operational data from solar installations. Hear how working with AWS Professional Services transformed its approach to product development, and learn about the challenges and solutions that came with choosing leading-edge services from AWS.
Retail & Wholesale
In this session, learn how Bonobos, an online retailer for men's clothing and accessories, powers their personalized customer experiences on top of AWS. We start by exploring the foundational elements required to build an effective retail data platform as well as the building blocks provided by AWS to deliver these experiences. Learn how Bonobos leverages Segment in their architecture, and hear from Bonobos and Segment on the objectives, challenges, and outcomes realized by Bonobos through their journey in constructing and deploying their personalization platform.
In this session, learn how data scientists in the retail industry, from companies like Tapestry, Coach, and Kate Spade, are finding new, counterintuitive consumer insights using AWS artificial intelligence services in a data lake. By leveraging data from various retail systems, including CRM, marketing, e-commerce, point of sale, order management, merchandising, and customer care, we show you how these consumer insights might influence new and interesting retail use cases while establishing a data-driven culture within the organization. Services referenced include Amazon S3, Amazon Machine Learning, Amazon QuickSight, Amazon SageMaker, among others.
In this session, we review the most modern e-commerce architectures, patterns, and technologies that are being used by digital innovators. We examine micro-services as well as event-driven and event-sourced architectures. We provide examples of how these patterns are implemented on AWS, and we review the benefits of each one. Learn how to select a pattern for your workload and which combination of AWS services can be used to build them. We use real-world customer examples so you can see the practical applications of these patterns.
Robots are no longer just the subject of sci-fi movies. They're now prevalent in our lives, helping us carry out tedious housework, distribute warehouse inventory, automate manufacturing, and research lunar landscapes. Until now, developing, testing, and deploying intelligent robotics applications was difficult and time-consuming. We announced AWS RoboMaker, a new service that makes it easy for developers to develop, test, and deploy robotics applications, as well as build intelligent robotics functions using cloud services. We'll invite our launch customer, Robot Care Systems, a company enabling elderly and disabled people to live independently, up for a demonstration.
Security, Identity, and Compliance
As with everything in life, there is an easy way and a hard way to adopt security framework recommendations. Featuring the AWS Well-Architected and Cloud Adoption Frameworks, we walk you through a complete security journey. We start with identifying requirements, then move through a series of how-tos, from classifying your data and automating controls to running fun incident response game days. There will be code giveaways and more!
In this session, we cover the most common cloud security questions that we hear from customers. We provide detailed answers for each question, distilled from our practical experience working with organizations around the world. This session is for everyone who is curious about the cloud, cautious about the cloud, or excited about the cloud.
New to AWS? Given the number of AWS services there are, you may think that it's going to take a lot of work to get your security house in order in the cloud. In fact, across AWS, there are only a few simple patterns you need to know to be effective at security in the cloud. In this session, we'll focus on the permissions controls offered by Identity and Access Management (IAM) and the network security controls offered by Virtual Private Cloud (VPC). You'll walk away having seen concrete examples that illustrate the patterns that enable you to properly secure any workload in AWS. Complete Title: AWS re:Invent 2018: [REPEAT 1] A Practitioner's Guide to Securing Your Cloud (Like an Expert) (SEC203-R1)
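One of the simple permissions patterns this session covers can be sketched as a minimal least-privilege IAM policy. The bucket name below is a hypothetical example, and the matcher is a toy for illustration only; the real IAM engine also evaluates Deny statements, conditions, resource policies, and permissions boundaries:

```python
from fnmatch import fnmatch

# A minimal identity policy granting read-only access to one bucket
# (the bucket name is a hypothetical example).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-reports",
                     "arn:aws:s3:::example-reports/*"],
    }],
}

def is_allowed(policy, action, resource):
    """Toy wildcard matcher, not the real IAM evaluation logic."""
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        if any(fnmatch(action, a) for a in stmt["Action"]) and \
           any(fnmatch(resource, r) for r in stmt["Resource"]):
            return True
    return False
```

Under this sketch, `is_allowed(policy, "s3:GetObject", "arn:aws:s3:::example-reports/q3.csv")` is granted, while any `s3:PutObject` request falls through to the default deny.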
Data privacy and security are top concerns for customers in the cloud. In this session, the AWS Automated Reasoning group shares the advanced technologies, rooted in mathematical proof, that help provide the highest levels of security assurance in today's data-driven world. The Automated Reasoning group co-presents with Bridgewater, a customer that has leveraged these technologies to help confirm that security requirements are being met, an assurance not previously available from conventional tools.
In this session, learn how LogMeIn moves quickly and stays secure through the power of automation on AWS. We walk through core AWS security building blocks, such as IAM, AWS CloudTrail, AWS Config, and Amazon CloudWatch. We dive deep into LogMeIn's approach for empowering developers on AWS while also meeting required security controls.
Whether it is per business unit or per application, many AWS customers use multiple accounts to meet their infrastructure isolation, separation of duties, and billing requirements to establish their AWS Landing Zone. In this session, formerly called "Architecting Security & Governance across a Multi-Account Strategy," we discuss the latest updates around establishing your AWS Landing Zone. We cover considerations, limitations, and security patterns when building a multi-account strategy. We explore topics such as thought pattern, identity federation, cross-account roles, consolidated logging, and account governance. In addition, BP shares its journey and approach to establishing its AWS Landing Zone. At the end of the session, we present an enterprise-ready landing zone framework and provide the background needed to implement an AWS Landing Zone. We encourage you to attend the full AWS Landing Zone track; search for #awslandingzone in the session catalog. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour. Complete Title: AWS re:Invent 2018: [REPEAT 1] Architecting Security & Governance across your AWS Landing Zone (SEC303-R1)
In this session, learn how to use AWS Secrets Manager to simplify secrets management and empower your developers to move quickly while raising the security bar in your organization. Also, learn how you can use these changes to more easily meet your compliance requirements. Finally, learn how the service enables you to control access to secrets using fine-grained permissions and centrally audit secret rotation for resources in the AWS Cloud, third-party services, and on-premises.
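As a rough sketch of the developer side of this workflow: with boto3 the secret would come from `client.get_secret_value(SecretId=...)`; here the response dict is simulated so the example runs without AWS credentials, and the field names inside `SecretString` are assumptions:

```python
import json

def parse_db_secret(response):
    """Extract credentials from a GetSecretValue-style response.
    A real response would come from boto3's Secrets Manager client;
    this one is simulated."""
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]

# Simulated response; a real one also carries ARN, VersionId, and more.
fake_response = {
    "SecretString": json.dumps({"username": "app", "password": "s3cr3t"})
}
```

Keeping parsing in one helper like this makes it easy to swap in rotated secrets, since callers never cache the raw credential strings.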
Join us, and learn how we made AWS our backbone, modularized our software for the cloud, and gained an immediate surge in velocity. In this session, we walk you through some of the unexpected security challenges we faced and hopefully save you a few headaches. Discover what security issues you need to address, how to avoid costly unused instances in your deployments, and why your current security tools won't help. We show you how a major transformation landed us on AWS, and we share how we overcame challenges and advanced our business while innovating in a new direction. This session is brought to you by AWS partner, Barracuda Networks Inc.
According to Gartner, the IaaS market grew at a blistering 42.8% in 2017, twice as fast as SaaS. And with last year's high-profile data exposures, the focus on bolstering IaaS security practices has increased. We've worked with AWS and hundreds of IaaS security professionals to develop a list of security practices specifically designed to protect AWS environments and the applications and data within them. In this session, you'll discover: common yet preventable scenarios that can result in the loss of corporate data; security best practices for user and admin behavior monitoring, secure auditable configuration, and Amazon S3 data loss and threat prevention; blueprints for how a solution-based approach (including bridging to your on-premises best practices) can provide IaaS visibility and control; step-by-step guidance on how to gain visibility across all workloads, protect against advanced threats, and discover insights into lateral threat movements; and recommendations for creating a successful DevOps workflow that integrates security.
The race is on. Development teams are moving fast while security teams play catch-up to protect the business. Security is often the department of 'No', slowing DevOps, but imagine what transpires when security says 'Yes' and collaborates. In this session, Marnie Wilking, CISO, and Gavin Martin, VP of Operational Engineering at Orion Health, share their global AWS deployment and the steps taken to facilitate cross-team collaboration. Team alignment around security and automation enabled them to deliver a faster, more secure solution, achieve automation benefits, and meet HIPAA, HITRUST, and GDPR compliance. Learn how this was achieved without slowing development and operations teams. This session is brought to you by AWS partner, Trend Micro.
As Moody's AWS presence continues to grow, automation becomes a critical tool for rapidly onboarding new applications, VPCs, and acquisitions while ensuring they are secured appropriately. Moody's has chosen Terraform as its tool of choice to define and deploy its application and security infrastructure for a range of different use cases on AWS. In this session, we dive deep into a range of automation use cases, including AWS infrastructure creation, deployment of a shared-service environment, onboarding of new and existing lines of business, and integration with threat intelligence services such as Amazon GuardDuty. This session is brought to you by AWS partner, Palo Alto Networks.
Security is an optimization problem. A security team's goal is to enable the delivery of maximum business value at minimum risk and minimum cost. Overinvestment in security is inefficient and slows the business down. Underinvestment in security can lead to expensive breaches and also slows the business down. Striking the right balance requires careful risk management decisions and judgment calls. There are rarely absolutes when dealing with security, yet we've found a few situations where it's useful to apply absolutes to a field filled with shades of grey. Join this session to learn what they are. Complete Title: AWS re:Invent 2018: 0x32 Shades of #7f7f7f: The Tension Between Absolutes & Ambiguity in Security (SEC310)
Learn how Symantec uses AWS to provide complete, integrated security solutions that monitor and protect companies and governments from hackers. Hear about lessons learned from how Symantec scaled up its infrastructure to analyze billions of logs every day to detect the world's most sophisticated cyber attacks, and you'll see how Symantec integrates with native AWS services, like Amazon GuardDuty, AWS Lambda, and AWS Systems Manager, into its own security solutions to provide even better security in the cloud. This session is brought to you by AWS partner, Symantec Corporation.
Traditional data center environments have regarded the network boundary as a stable perimeter of defense, using gateway firewalls for effective protection. The public cloud, however, exposes a plethora of hosted services directly to its users, bypassing traditional network filtering technologies and effectively creating new perimeters around the various services and data elements. Examples of these new perimeters include Amazon S3 buckets, Amazon EBS snapshots, and AWS Lambda functions. This session is brought to you by AWS partner, Dome9 Security Inc.
Users are increasingly adopting the AWS Cloud in their IT strategy to drive digital transformation. Securing the cloud is a shared responsibility. In this session, learn about the inherent threats and the solutions needed to secure your entire cloud stack, from infrastructure to applications. Learn the importance of total visibility across your public clouds, and how to secure workloads against both internal and perimeter threats. Avoid issues such as data leaks and crypto-mining attacks on your cloud infrastructure with continuous security monitoring. Learn best practices from real-world examples of customers transparently orchestrating security into their practices and DevOps pipelines. This session is brought to you by AWS partner, Qualys.
Hybrid cloud architecture creates security, governance, user experience, and performance challenges. In this session, learn to use Citrix Workspace to create a secure digital perimeter, enhancing security policy controls. Provide instant access for users to manage SaaS and virtualized Microsoft apps and unique data loss prevention features with on-premises data centers and AWS. Use Citrix to deploy machine learning (ML)-enhanced, user behavior analytics (UBA) with new security insights. Learn how your network can deeply inspect and optimize traffic, increasing the resiliency, performance, and security of hybrid application stacks. Build and move workloads onto AWS, improve legacy approaches to securing data, and create a world-class user experience. This session is brought to you by AWS partner, Citrix.
Are you interested in becoming an IAM policy master and learning about powerful techniques for controlling access to AWS resources? If so, this session is for you. Join us as we cover the different types of policies and describe how they work together to control access to resources in your account and across your AWS organization. We walk through use cases that help you delegate permission management to developers by demonstrating IAM permissions boundaries. We take an in-depth look at controlling access to specific AWS Regions using condition keys. Finally, we explain how to use tags to scale permissions management in your account. This session requires you to know the basics of IAM policies.
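The region-restriction technique mentioned above uses the `aws:RequestedRegion` condition key. A hedged sketch, with a toy condition check (the approved-region list is an invented example, and real IAM evaluation handles many more operators and policy types):

```python
# Deny any action outside an approved set of Regions, using the
# aws:RequestedRegion condition key.
region_guard = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {
            "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]}},
    }],
}

def denied_by_region(policy, region):
    """Toy evaluation of the StringNotEquals region condition only."""
    for stmt in policy["Statement"]:
        cond = stmt.get("Condition", {}).get("StringNotEquals", {})
        approved = cond.get("aws:RequestedRegion", [])
        if stmt["Effect"] == "Deny" and approved and region not in approved:
            return True
    return False
```

Attached as a service control policy or permissions boundary, a statement like this blocks requests to unapproved Regions regardless of what the identity's Allow statements grant.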
Many enterprises want to drive faster cloud adoption and time to market (TTM) by allowing their developers, who have various skill sets, to onboard onto AWS quickly using self-service. However, enabling security, governance, and compliance while not compromising user experience can be a challenge. In this session, Verizon demonstrates how they use the AWS Service Catalog Connector for ServiceNow to create a robust self-service computing environment while meeting Verizon's governance and security controls and achieving their goal of migrating 30% of their applications onto AWS in a short timeframe.
Operating a security practice on AWS brings many new challenges and opportunities that have not been addressed in data center environments. The dynamic nature of infrastructure, the relationship between development team members and their applications, and the architecture paradigms have all changed as a result of building software on top of AWS. In this session, learn how your security team can leverage AWS Lambda as a tool to monitor, audit, and enforce your security policies within an AWS environment.
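A minimal sketch of the enforcement idea: a Lambda handler that inspects an API-call event and flags a policy violation. The event shape below is a simplified stand-in for a real CloudTrail event (real `ipPermissions` payloads are more deeply nested), so treat the field names as assumptions:

```python
def handler(event, context=None):
    """Flag security-group ingress that opens SSH (port 22) to the world.
    The event shape is a simplified stand-in for a CloudTrail event."""
    detail = event.get("detail", {})
    if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
        return {"violation": False}
    params = detail.get("requestParameters", {})
    for perm in params.get("ipPermissions", []):
        open_to_world = any(r.get("cidrIp") == "0.0.0.0/0"
                            for r in perm.get("ipRanges", []))
        if perm.get("fromPort") == 22 and open_to_world:
            # A real enforcer might revoke the rule or page the on-call here.
            return {"violation": True, "groupId": params.get("groupId")}
    return {"violation": False}

# Invented sample event for local experimentation.
sample_event = {"detail": {
    "eventName": "AuthorizeSecurityGroupIngress",
    "requestParameters": {
        "groupId": "sg-123",
        "ipPermissions": [{"fromPort": 22,
                           "ipRanges": [{"cidrIp": "0.0.0.0/0"}]}],
    },
}}
```

Wired to an event rule on CloudTrail activity, the same handler pattern turns a written policy into a continuously enforced one.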
Enabling AWS CloudTrail for auditing purposes is often a corporate mandate, but do you know how to use CloudTrail events to improve your security and operational posture? Come learn how CloudTrail can help improve your operational monitoring and troubleshooting, security analysis, and compliance auditing processes. Discover best practices for setting up and using CloudTrail; explore use cases for data mining CloudTrail event data; learn how to set up alerts based on activity in your account; and learn about advanced use cases. Also learn how to implement data plane governance automation using data events from Amazon S3 and AWS Lambda. Complete Title: AWS re:Invent 2018: [REPEAT 1] Augmenting Security Posture & Improving Operational Health with AWS CloudTrail (SEC323-R1)
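A small sketch of the kind of data mining described here: scanning CloudTrail records for root-credential API activity. The field names (`userIdentity.type`, `eventName`, `sourceIPAddress`) follow the CloudTrail record format, while the sample records are invented:

```python
def root_activity(records):
    """Return (eventName, sourceIPAddress) pairs for API calls
    made with root credentials -- a common alerting trigger."""
    return [(r["eventName"], r.get("sourceIPAddress", "?"))
            for r in records
            if r.get("userIdentity", {}).get("type") == "Root"]

# Invented sample records shaped like CloudTrail log entries.
sample_records = [
    {"eventName": "ConsoleLogin",
     "userIdentity": {"type": "Root"},
     "sourceIPAddress": "203.0.113.10"},
    {"eventName": "PutObject",
     "userIdentity": {"type": "IAMUser"}},
]
```

The same filter-and-extract pattern extends naturally to other signals, such as `DeleteTrail` calls or activity from unexpected Regions.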
In this session, learn how Vanguard has matured their IAM controls and automation to support a micro-account strategy, providing further agility to developers while reducing blast radius and improving governance. You learn how Vanguard uses STS Federation at the OU level, builds common roles across all micro accounts, implements AWS Organizations SCPs, and uses different network control zones for admin vs. non-admin functions. Vanguard also shares how they are using AWS Lambda to block escalation of privilege.
Protecting data means ensuring confidentiality, integrity, and availability. In this session, we discuss the full range of data protection capabilities provided by AWS along with a deep dive into AWS Key Management Service (AWS KMS). Learn about data protection strategies for ensuring data integrity and availability using AWS native services that provide durability, recoverability, and resiliency for customer data on AWS. In addition, learn how to define an encryption strategy to protect data cryptographically, including managing KMS permissions, defining key rotation, and best practices for using the AWS Encryption SDK with KMS for custom software development.
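The pattern underlying KMS and the AWS Encryption SDK is envelope encryption: a data key encrypts the payload, and the master key encrypts (wraps) only the data key. The sketch below illustrates the structure locally with an insecure XOR stand-in cipher; real code would call KMS `GenerateDataKey` and use an authenticated cipher such as AES-GCM:

```python
import os
from itertools import cycle

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in "cipher" for illustration only -- NOT secure.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def envelope_encrypt(plaintext: bytes, master_key: bytes):
    data_key = os.urandom(16)                # KMS: GenerateDataKey
    ciphertext = xor(plaintext, data_key)    # real code: AES-GCM
    wrapped_key = xor(data_key, master_key)  # KMS: Encrypt(data_key)
    return ciphertext, wrapped_key           # store both; drop plaintext key

def envelope_decrypt(ciphertext, wrapped_key, master_key: bytes) -> bytes:
    data_key = xor(wrapped_key, master_key)  # KMS: Decrypt(wrapped_key)
    return xor(ciphertext, data_key)
```

The payoff of the envelope shape is that key rotation only re-wraps small data keys; bulk data never needs to be re-encrypted when the master key rotates.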
Whether you are part of a large organization moving your applications to the cloud, or a new application owner just getting started, you always need a baseline security for your web applications. In addition, large organizations with common security requirements frequently need to standardize their security posture across many applications. With compliance initiatives, such as PCI, OFAC, and GDPR, there is a need to effectively manage this posture with minimal error. In this session, learn how to use services like AWS WAF, AWS Shield, and AWS Firewall Manager to deploy and manage rules and protections uniformly across many accounts and resources. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
Credential compromise in the cloud is not a threat that a single company faces. Rather, it is a widespread concern as more and more companies operate in the cloud. Credential compromise can lead to many different outcomes, depending on the motive of the attacker. In certain cases, this has led to erroneous AWS service usage for bitcoin mining or other nondestructive yet costly abuse. In other cases, it has led to companies shutting down due to the loss of data and infrastructure.
Learn about AWS Security Hub, and how it gives you a comprehensive view of your high-priority security alerts and your compliance status across AWS accounts. See how Security Hub aggregates, organizes, and prioritizes your alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. We will demonstrate how you can continuously monitor your environment using compliance checks based on the AWS best practices and industry standards your organization follows.
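A rough sketch of the aggregate-and-prioritize step: findings from multiple products ordered by normalized severity. The field names loosely follow the AWS Security Finding Format, and the sample findings are invented:

```python
def prioritize(findings):
    """Sort findings by normalized severity, highest first."""
    return sorted(findings,
                  key=lambda f: f["Severity"]["Normalized"],
                  reverse=True)

# Invented findings shaped loosely like the AWS Security Finding Format.
sample_findings = [
    {"Title": "S3 bucket public", "ProductName": "Macie",
     "Severity": {"Normalized": 70}},
    {"Title": "Port probe", "ProductName": "GuardDuty",
     "Severity": {"Normalized": 40}},
    {"Title": "CVE on instance", "ProductName": "Inspector",
     "Severity": {"Normalized": 90}},
]
```

Because every product reports into one normalized severity scale, a single sort like this gives a cross-service triage queue.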
Most workloads on AWS resemble a finely crafted cake, with delight at every layer. In this session, we help you master identity at each layer of deliciousness: from platform, to infrastructure, to applications, using services like AWS Identity and Access Management (IAM), AWS Directory Service, Amazon Cognito, and many more. Leave with a firm mental model for how identity works both harmoniously and independently throughout these layers, and with ready-to-use reference architectures and sample code. We keep things fun and lively along the way with lots of demos, which will hopefully make up for our decided lack of anything resembling the sweet confections we'll be talking so much about!
Join us for this advanced-level talk to learn about Pokemon's journey defending against DDoS attacks and bad bots with AWS WAF, AWS Shield, and other AWS services. We go through their initial challenges and the evolution of their bot mitigation solution, which includes offline log analysis and dynamic updates of bad-bot IPs along with rate-based rules. This is an advanced talk and assumes some knowledge of Amazon DynamoDB, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, AWS Firewall Manager, AWS Shield, and AWS WAF.
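The rate-based idea behind both WAF rate rules and offline log analysis can be sketched as a sliding-window counter over request logs; the window and threshold values here are invented for illustration:

```python
from collections import defaultdict

def flag_bad_ips(requests, window=300, threshold=100):
    """requests: iterable of (timestamp_seconds, ip). Flag any IP that
    exceeds `threshold` hits inside some `window`-second span,
    using a simple sort-and-scan sliding window per IP."""
    by_ip = defaultdict(list)
    for ts, ip in requests:
        by_ip[ip].append(ts)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            while t - times[start] > window:
                start += 1            # shrink window from the left
            if end - start + 1 > threshold:
                flagged.add(ip)
                break
    return flagged
```

In a pipeline like the one described above, the flagged set would feed dynamic updates to a WAF IP block list rather than being consumed directly.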
In this session, we dive deep into the actual code behind various security automation and remediation functions. We demonstrate each script, describe the use cases, and perform a code review explaining the various challenges and solutions. All use cases are based on customer and C-level feedback and challenges. We look at things like IAM policy scope reduction, alert and ticket integration for security events, forensics and research on AWS resources, secure pipelines, and more. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour. Complete Title: AWS re:Invent 2018: Five New Security Automations Using AWS Security Services & Open Source (SEC403)
Modern application development is not a buzzword-it's an innovation strategy that organizations of all sizes can use to increase revenue, lower costs, and outpace the competition. In this session, learn how you can unblock digital product and service innovation for your own organization. Putting technology details aside, we explain what modern application development really is, why it matters to the business, what success metrics you should expect, and how to navigate your own transition.
In today's tech-driven world, an organization's architecture is a competitive differentiator. A key piece of this advantage lies in the ability to move-fast. In this session, we dive into how serverless is changing the way businesses think about speed and cost of innovation. We hear from Comcast on why they made the decision to reinvent with serverless, and the learnings and benefits they've gained along their journey to modern application development. Complete Title: AWS re:Invent 2018: [REPEAT 1] Accelerate Innovation & Maximize Business Value with Serverless Applications (SRV212-R1)
Serverless brings many advantages to software development, but it introduces new monitoring challenges as well. Isolated telemetry on individual functions might not provide enough visibility, and instrumentation in a world where 100 ms of extra execution time could cost thousands of dollars might prove prohibitive. In this session, we explore how New Relic enables full observability of the serverless stack, including its executing context, with minimal impact in performance. Learn from customer case studies and real-world examples. This session is brought to you by AWS partner, New Relic.
Tracing is always a challenge, no matter what your architecture is. Creating an application with serverless functions, such as with AWS Lambda, provides agility and scalability to your application, but it also creates an added challenge for code tracing. In this session, we review Datadog's distributed tracing capabilities and how Trek10 uses those capabilities to improve its customers' applications. Learn how to use AWS X-Ray in a serverless environment. Also, learn strategies for working with traces and logs that explain application errors. Finally, learn how Trek10 uses AWS X-Ray with Datadog to measure and improve its applications' performance. This session is brought to you by AWS partner, Datadog. Complete Title: AWS re:Invent 2018: How Trek10 Uses Datadog's Distributed Tracing to Improve AWS Lambda Projects (SRV304-S)
AWS offers a wide range of cloud computing services and technologies, but we rarely state opinions about which services and technologies customers should choose. When it comes to building our own services, our engineering groups have strong opinions, and they express them in the technologies they pick. Join Tim Bray, Senior Principal Engineer, to hear about the high-level choices that developers at AWS and our customers have to make. Here are a few: Are microservices always the way to go? Serverless, containers, or serverless containers? Is relational over? Is Java over? The talk is technical and based on our experience in building AWS services and working with customers on their cloud-native apps.
Data has traditionally been analyzed in batch in DWH/Hadoop environments. Common use cases include data lakes, data science, and machine learning (ML). Creating serverless data-driven architecture and serverless streaming solutions with services like Amazon Kinesis, AWS Lambda, and Amazon Athena can solve real-time ingestion, storage, and analytics challenges, and help you focus on application logic without managing infrastructure. In this session, we introduce design patterns and best practices, and we share customer journeys from batch processing to real-time insights in building modern serverless data-driven applications. Hear how Intel built the Intel Pharma Analytics Platform using a serverless architecture. This AI cloud-based offering enables remote monitoring of patients using an array of sensors, wearable devices, and ML algorithms to objectively quantify the impact of interventions and power clinical studies in various therapeutic conditions.
Serverless architecture and a microservices approach have changed the way we develop applications. Increased composability doesn't have to mean decreased auditability or security. In this talk, we discuss the security model for applications based on AWS Lambda functions and Amazon API Gateway. Learn about the security and compliance that comes with Lambda right out of the box, with no extra charge or management. We also cover services available on the platform to help manage application security, such as AWS Config, AWS Identity and Access Management (IAM), Amazon Cognito, and AWS Secrets Manager.
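The per-function permission scoping this abstract alludes to boils down to attaching a narrowly scoped IAM policy document to each Lambda function's execution role. A minimal sketch follows; the policy grammar (`Version`, `Statement`, `Effect`/`Action`/`Resource`) is standard IAM, but the account ID, table name, and function name are hypothetical placeholders, not anything from the session.

```python
import json

def lambda_least_privilege_policy(table_arn):
    """Build a minimal IAM policy document granting one Lambda function
    read/write access to a single DynamoDB table, and nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
                "Resource": table_arn,
            }
        ],
    }

# The Region, account ID, and table name below are placeholders.
policy = lambda_least_privilege_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/example-table"
)
print(json.dumps(policy, indent=2))
```

In practice this document would be attached to the function's execution role, so the function can reach exactly one table rather than inheriting broad account-wide permissions.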
We are a lean team consisting of developers, lead architects, business analysts, and a project manager. To scale our applications and optimize costs, we need to reduce the amount of undifferentiated heavy lifting (e.g., patching, server management) from our projects. We have identified AWS serverless services that we will use. However, we need approval from a security and cost perspective. We need to build a business case to justify this paradigm shift for our entire technology organization. In this session, we learn to migrate existing applications and build a strategy and financial model to lay the foundation to build everything in a truly serverless way on AWS.
An effective API strategy is critical to digital transformation and rapid innovation. In this session, we deep dive into advanced capabilities of Amazon API Gateway that can enable customers to build modern applications.
In this session, learn how AWS can help you innovate faster with DevOps, microservices, and serverless. Join us for a rare and intimate discussion with AWS senior leaders: David Richardson, VP of Serverless, Ken Exner, director of AWS Developer Tools, and Deepak Singh, director of Compute Services, Containers, and Linux. Hear them share development best practices and discuss key learnings from building modern applications at Amazon.com. Also, learn how developers can leverage containers, AWS Lambda, and developer tools to build and run production applications in the cloud.
Data and events are the lifeblood of any modern application. By using stateless, loosely coupled microservices communicating through events, developers can build massively scalable systems that can process trillions of requests in seconds. In this talk, we cover design patterns for using Amazon SQS, Amazon SNS, AWS Step Functions, AWS Lambda, and Amazon S3 to build data processing and real-time notification systems with unbounded scale and serverless cost characteristics. We also explore how these approaches apply to practical use cases, such as training machine learning models, media processing, and data cleansing.
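The fan-out pattern described above (one event delivered to many loosely coupled consumers) can be sketched with stdlib queues standing in for the real services; an actual deployment would use an SNS topic feeding multiple SQS queues, each drained by Lambda functions, but the control flow is the same. The queue and event names here are purely illustrative.

```python
from queue import Queue

def publish(subscriber_queues, message):
    """SNS-style fan-out: deliver one copy of the message to every
    subscribed queue."""
    for q in subscriber_queues:
        q.put(message)

def drain(q):
    """SQS-style consumer loop: pull messages until the queue is empty."""
    processed = []
    while not q.empty():
        processed.append(q.get())
    return processed

# Two independent consumers (hypothetical names) subscribe to one topic.
thumbnail_queue, audit_queue = Queue(), Queue()
event = {"bucket": "example-bucket", "key": "photo.jpg"}
publish([thumbnail_queue, audit_queue], event)
print(drain(thumbnail_queue), drain(audit_queue))
```

Because each consumer owns its queue, a slow or failing consumer never blocks the others, which is what makes this decoupling scale.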
Are you an experienced serverless developer? Do you want a handy guide for unleashing the full power of serverless architectures for your production workloads? Are you wondering whether to choose a stream or an API as your event source, or whether to have one function or many? In this session, we discuss architectural best practices, optimizations, and handy cheat codes that you can use to build secure, high-scale, high-performance serverless applications. We use real customer scenarios to illustrate the benefits.
Serverless computing allows you to build and run applications and services without thinking about servers. Serverless applications don't require you to provision, scale, and manage any servers. However, under the hood, there is a sophisticated architecture that takes care of all the undifferentiated heavy lifting for the developer. Join Holly Mesrobian, Director of Engineering, and Marc Brooker, Senior Principal of Engineering, to learn how AWS architected one of the fastest-growing AWS services. In this session, we show you how Lambda takes care of everything required to run and scale your code with high availability.
Tape backups. Yes, they're still a thing. If you want to stop using tapes but need to store immutable backups for compliance or operational reasons, attend this session to learn how to make an easy switch to a cloud-based virtual tape library (VTL). AWS Storage Gateway provides a seamless drop-in replacement for tape backups with its Tape Gateway. It works with the major backup software products, so you simply change the target for your backups, and they go to a VTL that stores virtual tapes on Amazon S3 and Amazon Glacier. Come see how it works.
In this session, we focus on best practices for AWS block and file storage when supporting enterprise workloads (like SAP, Oracle, Microsoft applications, and home directories). We discuss migrating mission-critical workload data, selecting volumes or file systems, optimizing performance, and designing for durability and availability. We also review optimizing for cost to ensure that your lift-and-shift project is a success.
Learn best practices for Amazon S3 performance optimization, security, data protection, storage management, and much more. In this session, we look at common Amazon S3 use cases and ways to manage large volumes of data within Amazon S3. We discuss the latest performance improvements and how they impact previous guidance. We also talk about the Amazon S3 data resilience model and how the architecture of AWS Regions and Availability Zones impacts designing for fault tolerance.
Flexibility is key when building and scaling a data lake. The analytics solutions you use in the future will almost certainly be different from the ones you use today, and choosing the right storage architecture gives you the agility to quickly experiment and migrate with the latest analytics solutions. In this session, we explore best practices for building a data lake in Amazon S3 and Amazon Glacier for leveraging an entire array of AWS, open source, and third-party analytics tools. We explore use cases for traditional analytics tools, including Amazon EMR and AWS Glue, as well as query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon Glacier Select. Complete Title: AWS re:Invent 2018: [REPEAT 1] Data Lake Implementation: Processing & Querying Data in Place (STG204-R1)
AWS offers a variety of data migration services and tools to help you easily and rapidly move everything from gigabytes to petabytes of data using your networks, our networks, the mail, or even a tractor trailer. Learn about the available data migration options, including the AWS Snowball family, AWS Storage Gateway, Amazon S3 Transfer Acceleration, and other approaches. We provide the guidance to help you find the right service or tool to fit your requirements, and we share examples of customers who have used these options in their cloud journey. Complete Title: AWS re:Invent 2018: [REPEAT 1] Migrating Data to the Cloud: Exploring Your Options from AWS (STG205-R1)
Veeam has made significant enhancements to its platform, focusing on the availability of AWS workloads over the past year. Join this technical deep dive where representatives from Veeam demonstrate how the company protects cloud-native workloads on AWS as well as how they back up to and from on-premises environments. They also discuss data protection for VMware Cloud on AWS. Finally, they review the enhancements to Veeam's Backup and Replication feature set, which now includes cloud mobility to AWS and a cloud archive that leverages Amazon S3 for long-term data retention of backed-up workloads.
Are you dealing with legacy system complexities when integrating your backup and recovery solution with the cloud? Rubrik can help you simplify data protection with its policy-based backup, recovery, and archival capabilities for hybrid applications. In this session, learn how University of California San Diego (UCSD) leverages Rubrik and AWS to help simplify data protection, achieve rapid data recovery, and scale for data growth. Join us to learn how UCSD replaced expensive and unreliable backup tapes with AWS storage, and how to move data to AWS and protect your cloud-native workloads running on AWS. This session is brought to you by AWS partner, Rubrik.
CIOs have realized that it's not a question of cloud or on premises; both are critical weapons for their success. Technology and architecture must facilitate using on premises and public cloud together. But today, that can be a challenge, and nowhere is it more evident than at the data tier. Data is the heart of the application, and if you can't easily bridge and move data between on premises and cloud, making applications portable between the two is nearly impossible. In this session, learn how you can unify on premises and cloud with Pure Storage, making it possible to truly run a hybrid cloud storage architecture. This session is brought to you by AWS partner, Pure Storage.
As your data stores grow, managing and operating on your stored objects becomes increasingly difficult to scale. In this session, AWS experts demonstrate Amazon S3 features you can use to perform and manage operations across any number of objects, from hundreds to billions, stored in Amazon S3. Learn how to monitor performance, ensure compliance, automate actions, and optimize storage across all your Amazon S3 objects. We also provide relevant use cases that demonstrate the full range of Amazon S3 capabilities and options, such as copying objects across buckets to create development environments, restricting access to sensitive data, or restoring many objects from Amazon Glacier.
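Managing operations across billions of objects means working page by page rather than loading every key at once. The sketch below mimics that shape with a stdlib paginator (real code would use a boto3 `list_objects_v2` paginator and issue restore or copy requests per key); the key names and page size are illustrative assumptions.

```python
def paginate(keys, page_size=1000):
    """Yield fixed-size pages of keys, the way a ListObjectsV2-style
    paginator returns results."""
    for i in range(0, len(keys), page_size):
        yield keys[i:i + page_size]

def bulk_apply(keys, action, page_size=1000):
    """Apply an action to every key, one page at a time, and return
    how many keys were touched."""
    count = 0
    for page in paginate(keys, page_size):
        for key in page:
            action(key)
            count += 1
    return count

restored = []
total = bulk_apply([f"archive/{i}.dat" for i in range(2500)], restored.append)
print(total)  # 2500 keys processed across 3 pages
```

The same loop shape works whether the action is a Glacier restore request, a cross-bucket copy, or an access-policy check, which is why paginated iteration is the backbone of bulk object management.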
Mai-Lan Tomsen Bukovec, VP of Amazon S3, introduces the latest innovations across all AWS storage services. In this keynote address, we announce new storage capabilities, and we talk about features and services that make AWS storage unique. We focus on new innovations in object storage, file storage, block storage, and data transfer services. You also hear from executives from companies that are major AWS storage customers, Sony and Expedia, about how they're using AWS storage to create a competitive advantage in their businesses.
In this session, we explore the world's first cloud-scale file system and its targeted use cases. Learn about Amazon Elastic File System (Amazon EFS), its features and benefits, how to identify applications that are appropriate to use with Amazon EFS, and details about its performance and security models. The target audience is security administrators, application developers, and application owners who operate or build file-based applications.
You've designed and built a well-architected data lake and ingested extreme amounts of structured and unstructured data. Now what? In this session, we explore real-world use cases where data scientists, developers, and researchers have discovered new and valuable ways to extract business insights using advanced analytics and machine learning. We review Amazon S3, Amazon Glacier, and Amazon EFS, the foundation for the analytics clusters and data engines. We also explore analytics tools and databases, including Amazon Redshift, Amazon Athena, Amazon EMR, Amazon QuickSight, Amazon Kinesis, Amazon RDS, and Amazon Aurora; and we review the AWS machine learning portfolio and AI services such as Amazon SageMaker, AWS Deep Learning AMIs, Amazon Rekognition, and Amazon Lex. We discuss how all of these pieces fit together to build intelligent applications.
In this session, learn best practices for data security in Amazon S3. We discuss the fundamentals of Amazon S3 security architecture and dive deep into the latest enhancements in usability and functionality. We investigate options for encryption, access control, security monitoring, auditing, and remediation.
IT infrastructure teams with on-premises applications have to manage storage arrays throughout their never-ending lifecycle, including capacity planning guesswork, hardware failures, system migrations, and more. There are cloud-enabled alternatives to buying more and more storage arrays. With AWS Storage Gateway, you can start using Amazon S3, Amazon Glacier, and Amazon EBS in hybrid architectures with on-premises applications for storage, backup, disaster recovery, tiered storage, hybrid data lakes, and ML. In this session, learn how to use AWS Storage Gateway to seamlessly connect your applications to AWS storage services with familiar block-and-file storage protocols and a local cache for fast access to hot data. We demonstrate our latest capabilities and share best practices from experienced customers.
Data lakes are transforming the way enterprises store, analyze, and learn insights from their data. While data lakes are a relatively new concept, many enterprises have already generated significant business value from the insights gleaned. In this session, AWS experts and technology leaders from Sysco, a Fortune 50 company and leader in food distribution and marketing, explain why Sysco decided to evolve its data management capabilities to include data lakes and how they customized them to support diverse querying capabilities and data science use cases. They also discuss how to architect different aspects of a data lake-ingestion from disparate sources, data consumption, and usability layers-and how to track data ingestion and consumption, monitor associated costs, enforce wanted levels of user access, manage data file formats, synchronize production and non-production environments, and maintain data integrity. Services to be discussed include Amazon S3 and S3 Select, Amazon Athena, Amazon EMR, Amazon EC2, and Amazon Redshift Spectrum.
In this session, we explore the persistent local disk storage service for Amazon EC2 and its targeted use cases. Learn about Amazon EBS features and benefits, how to identify applications that are appropriate to use with Amazon EBS, and details about its performance and security models. The target audience is security administrators, application developers, application owners, and infrastructure operations personnel who build or operate block-based applications or SANs.
Migrating enterprise applications to the cloud requires thorough planning and consideration for a number of variables. Should you move your application to a similar infrastructure in the cloud (in a lift-and-shift scenario)? Or should you refactor your application to take advantage of cloud-native services for object storage, serverless, auto-scaling, and so on? In this session, an AWS expert walks through the ten commandments that enterprises should follow when moving applications to the cloud and refactoring them for optimal performance. Then, a representative of Sysco Corporation, a Fortune 50 company, shares how the company migrated mission-critical legacy business systems and modernized them to take advantage of the AWS Cloud. Learn how the company moved its enterprise purchasing system, which processes millions of dollars in sales daily, to the AWS Cloud while achieving a 60% decrease in run costs. Also discover the lessons learned and highlights of the migration, which resulted in a 30% increase in performance, a 3x improvement in user accessibility, and a significant decrease in order backlogs and outages.
AWS offers fully managed file system services that enable you to quickly and simply lift and shift or build new applications that access file data in the AWS Cloud. In this session, join Wayne Duso, the leader of file storage, hybrid-edge storage, and data transport services, to learn about our full set of file services and latest launches. Learn all about file storage, and get firsthand input on how you can accelerate your journey to AWS as you move from on-premises or do-it-yourself implementations to fully managed file storage solutions. Hear how AWS file storage solutions enabled LoanLogics to migrate its applications to the cloud, enabling better scalability, performance, and availability. Also discover why customers choose AWS file systems for migrating their mission-critical enterprise applications and compute-intensive workloads to deliver the performance they need in a cost-effective way, saving time and money.
This session introduces Amazon FSx for Lustre, a new service that provides a fully managed parallel file system enabling compute-intensive applications to process large data sets at up to hundreds of gigabytes per second of throughput, millions of IOPS, and consistent low latencies. Learn how Amazon FSx is seamlessly integrated with Amazon S3, making your in-cloud data sets available to your compute-intensive applications. In this introductory session, we provide a deep dive on the new service, discuss its performance capabilities and AWS integrations, and highlight use cases for this high-performance file storage offering.
AWS DataSync is a new online data transfer service that automates movement of data between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). In this session, we will introduce the service, showing how you can use DataSync to move active on-premises data to the cloud for one-time migration, timely in-cloud analysis, and replication for data protection and recovery. We'll demonstrate how to get started with DataSync, and you'll hear how it is helping Cox Automotive to move their archive of millions of images to AWS.
SFTP is used for the exchange of data across many industries, including financial services, healthcare, and retail. In this session, we will introduce you to AWS Transfer for SFTP, a service that helps you easily migrate file transfer workflows to AWS, without needing to modify applications or manage SFTP servers. We will demonstrate the product and talk about how to migrate your users so they continue to use their existing SFTP clients and credentials, while the data they access is stored in S3. You will also learn how FINRA is using this new service in conjunction with their Data Lake on AWS.
Netflix is using AWS Snowball Edge to deliver post-production content to our asset management system, called Content Hub, in the AWS Cloud. Production companies have historically used LTO tapes to move data around, and that has well-known complications. In order to accelerate and secure our media workflows, Netflix has shifted to using Snowball Edge devices for data migration. Please join us to learn how Netflix is using the Snowball Edge service at scale.
Amazon S3 supports a range of storage classes that can help you cost-effectively store data without impacting performance or availability. Each of our storage classes offers different data-access levels, retrieval times, and costs to support various use cases. In this session, Amazon S3 experts dive deep into the different Amazon S3 storage classes, their respective attributes, and when you should use them. Complete Title: AWS re:Invent 2018: [REPEAT 1] Deep Dive on Amazon S3 Storage Classes: Creating Cost Efficiencies across Your S3 Resources (STG398-R1)
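The storage-class tradeoff the abstract describes is, at its simplest, a per-GB price versus access-speed tradeoff. The sketch below compares monthly storage cost across classes; the prices are hypothetical placeholders (real S3 prices vary by Region and change over time), and it deliberately ignores request, retrieval, and minimum-duration charges that also differ between classes.

```python
# Hypothetical per-GB monthly prices; treat these purely as placeholders,
# not real S3 pricing.
PRICE_PER_GB = {
    "STANDARD": 0.023,
    "STANDARD_IA": 0.0125,
    "GLACIER": 0.004,
}

def monthly_storage_cost(gb, storage_class):
    """Storage-only monthly cost; ignores the request, retrieval, and
    minimum-duration charges that also differ between classes."""
    return round(gb * PRICE_PER_GB[storage_class], 2)

for cls in PRICE_PER_GB:
    print(cls, monthly_storage_cost(10_000, cls))
```

A calculation like this is typically the first pass when deciding which objects belong in an infrequent-access or archive tier; retrieval patterns then refine the answer.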
Do we have a moral role as data scientists? How do you balance your responsibility to do what's best for the company with your moral responsibility as a member of society? As data becomes more accessible to everyone, and as AI and ML technologies become part of our everyday life, the data team takes on an important moral role as the conscience of the corporation. In this presentation, Periscope Data CEO and cofounder Harry Glaser explores examples in which using ML and AI for classification, prediction, ranking, and more, runs a strong risk of delivering immoral outcomes if unchallenged.
From PR to production, the Deliveroo Application Platform needs to get developers' code into running containers as quickly and as safely as possible. This talk introduces Hopper, our release manager, and our process for safely shipping software with support for automatic rollbacks, service scaling, and configuration management.
Too often, data scientists build potentially high-impact models that never see the light of day due to deployment obstacles. At Convoy, data science is at the core of its trucking network product, so building a robust, frictionless platform to deploy models is critical to the company's success. To minimize the code needed to transition from training a model locally to deploying the same model in production, the company adopted Amazon SageMaker to push various models to production-ready endpoints. In this talk, Convoy's data science director, David Tsai, reviews Convoy's Amazon SageMaker architecture and highlights how it has enabled them to drive meaningful impact.
In this talk, Junaid Kapadia, DevOps manager and staff software engineer for Aetion Systems, speaks about the company's journey from a Chef, Jenkins, and EC2-based architecture to a fault-tolerant, highly available, continuously provisioned and deployed architecture built on AWS CodePipeline, AWS CodeBuild, AWS CloudFormation, AWS Systems Manager Parameter Store, and Amazon Elastic Container Service (Amazon ECS). Kapadia also discusses the history of Aetion's original architecture, explains how the company transitioned to the new architecture, and shares the purpose of each service, why it was chosen, and its specific caveats. He also talks about future developments of the overall architecture.
Over the past few years, the number of companies exploring blockchain solutions has skyrocketed, but for most, it is unclear where to start and what the right use cases are for their business. At ConsenSys, we are building all layers of the Web 3.0 stack, from protocol to application, and we are working with companies across all industries to create new business models based on blockchain. In this session, we dive into blockchain, Ethereum, smart contracts, and more as we discuss real-world examples and where the space is heading.
Join representatives from Uptake Technologies to learn about data engineering the startup way. They cover Uptake's evolution as a startup organization and its journey in building data systems that scale for data science and industrial IoT. Additionally, they discuss the challenges of building systems for data science on industrial data and how Uptake has overcome difficult data engineering problems in this space. Attendees will come away with lessons learned that apply to almost any startup that is learning to build and grow in the cloud.
In 2016, small business financing company Kabbage began forging a path to migrating its aging data warehouse infrastructure to something new. In this talk, Clint Hill, head of architecture and platform solutions, and David McGowan, VP of engineering at Kabbage, share how the Atlanta-based company moved away from a static relational table and a bespoke data management system, muddled through a project with incomplete vision and little to no design, and ultimately found its way to a solution comprising several AWS managed services, creating a method that provides user flexibility and solves real business issues.
Data and data-based decisions are at the core of everything online brokerage Robinhood does. In this talk, we dive deep into how Robinhood moved from a world where multiple systems grew to create an unwieldy data architecture, and we discuss how the company used AWS tools, such as Amazon S3, Amazon Athena, Amazon EMR, AWS Glue, and Amazon Redshift, to build a robust data lake that can operate on petabyte scale. We also discuss design paradigms and tradeoffs made to achieve a cost-effective and performant cluster that unifies all data access, analytics, and machine learning use cases.
At iflix, we want to lead the internet entertainment revolution in emerging markets by redefining television for billions of people. While not everyone who wants great content may have a credit card or be able to afford pricey subscriptions, most do have smartphones, and that's where iflix comes in. In this talk, we address how we've had to be extremely scrappy to successfully serve streaming content to our millions of customers, and we discuss how we leverage various AWS services to accelerate our growth, the most recent being AWS Elemental MediaLive and AWS Elemental MediaPackage, which power our live TV option.
Blockchain technology has the potential to dramatically change the financial landscape. Kristen Stone, the product manager on the Coinbase cryptocurrency payments team, and Jake Craige, the team's technical lead, join us to discuss the evolution of the blockchain industry and its impact on our day-to-day lives. They dive deep into some of the inner workings of the technology, and they discuss the different types of blockchain models, smart contracts, consensus protocols, and what's on the horizon.
Coinbase has some fairly unusual security and infrastructure requirements. One of these requirements is that every server in our infrastructure is both ephemeral (30 days) and immutable. The deployment process for most applications is fairly straightforward: 12-factor apps are blue/green deployed behind a load balancer. Blockchain nodes, however, present a difficult problem: How do you blue/green deploy a server with 1 TB on disk? In this session, we discuss how we solved this problem in a blockchain-agnostic fashion using a new project, called Snapchain.
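Coinbase's Snapchain internals aren't spelled out here, so the following is only a hedged stdlib sketch of the core idea the abstract raises: instead of replaying 1 TB of chain state from scratch, a new (green) node restores from the most recent disk snapshot and then syncs only the delta. The function name and block heights are hypothetical.

```python
def blocks_to_sync(chain_height, snapshot_heights):
    """Return how many blocks a fresh node must replay after restoring
    from the newest snapshot at or below the current chain height.
    With no usable snapshot, the node faces a full sync from block 0."""
    usable = [h for h in snapshot_heights if h <= chain_height]
    return chain_height - (max(usable) if usable else 0)

# Restoring from a snapshot taken at block 990,000 leaves only the
# recent delta to replay before the green node can take traffic.
print(blocks_to_sync(1_000_000, [700_000, 990_000]))  # 10000
print(blocks_to_sync(1_000_000, []))                  # 1000000 (full sync)
```

This is what makes blue/green deployment of stateful blockchain nodes tractable within a 30-day ephemeral-server window: the restore-plus-delta path is hours instead of days.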
Nubank, a Brazilian financial technology company, is empowering customers with a 100% digital banking experience. Born in the cloud, Nubank does this by enabling its engineering teams to create scalable software that can quickly adapt to the changing needs of a complex market and by fostering a culture of microservices, container-based applications, observability, and immutability. In this talk, Nubank engineers Alexandre Cisneiros and Diogo Beato discuss why they treat infrastructure as a software engineering problem, and they address how AWS services, like AWS CloudFormation, Amazon EC2, and IAM, enable them to support millions of customers-transacting millions of daily purchases-without a dedicated infrastructure team.
As BuzzFeed transitioned to a microservice platform built on Amazon ECS, it needed to secure a growing number of independent internal apps. The first solution was an open source OAuth proxy service deployed in front of each app, but this approach had a number of issues. In this talk, software engineer Shraya Ramani discusses BuzzFeed's experience and introduces SSO, a centralized authentication solution, which elegantly solved the problems faced by the company.
At Affirm, we lend billions of dollars to millions of customers using data-driven machine learning models that are trained on years of lending history. Feature extraction and validation in a risk management context presents unique challenges, so we have built a pipeline to eliminate training/serving skew in both development and production systems, and to run extensive regression testing on proposed PRs. Running Spark on Amazon EMR clusters provides an economic means to speed up this process. In this talk, we review our signal extraction and decisioning systems architecture and discuss how we use AWS to scale our compute.
Reddit is one of the world's most trafficked websites, with over 330 million monthly active users. Reddit built the vast majority of its infrastructure on AWS to support its increasing growth. As the world shifts to video-first, Reddit has redesigned its website and launched a new video platform. In this talk, we discuss design paradigms, tradeoffs made, and lessons learned from rearchitecting one of the internet's top sites. See how this infrastructure operates at scale and how the team leverages ETS, serverless, and compute to serve over one billion videos a month. Complete Title: AWS re:Invent 2018: 1B Video Views a Month: Reddit's Serverless & Compute Infrastructure at Scale (STP18)
eSports streaming platform, Bebo, has thousands of servers in 15 AWS Regions and a fully dynamic automatically scaling stack with good uptime. Its servers are also managed by a teenager who can't yet get a driver's license. In this talk, listen as Bebo CTO Furqan Rydhan and DevOps engineer Johnny Dallas discuss how 16-year-old Dallas, who graduated from San Francisco's Drew School two years early, built Bebo's fully automated infrastructure in two months and finds the time to balance coding with studying for the SATs.
The Telecom industry is at the cusp of major transformation. On the technological side, the advent of 5G networks with the promise for massive mobile broadband, billions of connected devices, and ultra-low latency applications presents a unique opportunity, but also requires changes to core Telecom systems for operational and business support. On the business side, there are competitive market forces and ecosystem disruptors that challenge the industry to explore new avenues for cost optimization and revenue generation. In this keynote panel, join AWS customers from the telecom industry, along with Jean-Philippe Poirault, an industry veteran who leads an AWS team focused on the telecom industry, as they discuss their approach to this transformation and how the AWS Cloud enables and accelerates their strategic priorities.
In this session, we cover the strategic and technical innovations that have driven T-Mobile's digital transformation journey, with a focus on AWS serverless (AWS Lambda) and container technologies (Amazon ECS, Amazon EKS). Learn about T-Mobile's Shift-Right strategy that anchors most new development with serverless and containers, which led to the build-out of their open-sourced, Jazz Function-as-a-Service platform. We review workload characteristics that play to the strengths of containers versus Lambda and vice versa, and we hear how T-Mobile achieved CPNI compliance requirements while running AWS workloads. We also dive into the analytics data lake that combines Hadoop on premises and Intel Gluster technology to support ML, along with Amazon S3, Amazon Aurora PostgreSQL, Amazon Redshift, and Amazon EMR.
In this session, learn from market-leader Vonage how and why they re-architected their QoS-sensitive, highly available and highly performant legacy real-time communications systems to take advantage of Amazon EC2, Enhanced Networking, Amazon S3, Auto Scaling groups, Amazon RDS, Amazon ElastiCache, AWS Lambda, AWS Step Functions, Amazon SNS, Amazon SQS, Amazon Kinesis, Amazon EFS, and more. We also learn how Aspect, a multinational leader in call center solutions, used AWS Lambda, Amazon API Gateway, Amazon Kinesis, Amazon ElastiCache, Amazon Cognito, and Application Load Balancer with open-source API development tooling from Swagger to build a comprehensive, microservices-based solution. Vonage and Aspect share their journey to TCO optimization, global outreach, and agility with best practices and insights.
In this session, we cover practical steps for cost-optimizing your Microsoft workloads on AWS from both a licensing and infrastructure perspective. We discuss ways to diversify and optimize your current licensing investments, how to think strategically about licensing in the cloud, and how to bring your own licenses to AWS. We also cover a variety of additional cost optimization features and approaches, and we explain how these can be applied to Microsoft-specific workloads.
Join this Leadership Session to learn about AWS's strategy, the latest features, and best practices from customers on how to run Microsoft workloads like Windows, SQL Server, Active Directory, and .NET applications on AWS. Sandy Carter, Vice President for Enterprise Applications, discusses the evolution of AWS as the leading platform for hosting business-critical applications, and dispels the myths around running Microsoft workloads on AWS using real customer examples. In this session, we showcase how organizations are benefiting from running Windows on AWS, from migrating legacy applications to lower TCO, to building innovative new solutions that drive growth in areas such as machine learning, containers, and serverless. Sandy shares the company's vision for continuing to innovate in this space to make AWS the premier place for customers to run Microsoft workloads in the cloud.
Microsoft applications make up 60% of on-premises IT environments. Join us as we discuss the journey of migrating the Unicorn Shop's on-premises Microsoft infrastructure to AWS. We discuss how we migrated core business productivity systems, Microsoft SQL Server, and a number of .NET applications that required seamless migration with minimal downtime. We also walk through the process for building your landing zone on AWS, including fully automated compliance controls, before embarking on your migration.
Migrating SQL Server databases to the cloud is a critical part of a cloud journey and requires planning and architectural considerations. In this session, we cover best practices and guidelines for migrating SQL Server to AWS and for architecting hybrid SQL Server deployments. We compare and contrast various migration methods, including SQL export, backup and restore, and AWS Database Migration Service (AWS DMS). We also provide guidance on how to migrate products that have reached end of life, such as SQL Server 2008.
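For the AWS DMS option mentioned above, a migration task pairs source and target endpoints with a table-mapping document. The sketch below builds such a request in Python with boto3-style parameters; the endpoint and replication-instance ARNs are placeholders, and the actual API call is omitted so the sketch runs without AWS credentials.

```python
import json

# Minimal AWS DMS table-mapping document: include every table in the
# "dbo" schema of the source SQL Server database (names are illustrative).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo-tables",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# Request parameters for dms.create_replication_task (boto3). ARNs below
# are placeholders, not real resources.
task_request = {
    "ReplicationTaskIdentifier": "sqlserver-to-rds-migration",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    # Full load plus change data capture keeps the target in sync while
    # the source stays live, minimizing cutover downtime.
    "MigrationType": "full-load-and-cdc",
    "TableMappings": json.dumps(table_mappings),
}

print(task_request["MigrationType"])
```

In practice you would pass `task_request` to `boto3.client("dms").create_replication_task(**task_request)` after creating the endpoints and replication instance.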
Want to learn about your options for running Microsoft Active Directory on AWS? When you move Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory in support of group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory to AWS, including AWS Managed Microsoft AD and deploying Active Directory to Windows on Amazon EC2. We cover such topics as how to integrate your on-premises Microsoft Active Directory environment to the cloud and how to leverage SaaS applications, such as Office 365, with the AWS Single Sign-On service.
In this session, we cover how to leverage Docker for Windows and Amazon Elastic Container Service (Amazon ECS) as an effective solution for migrating legacy .NET applications to the cloud. We use Microsoft Visual Studio to demonstrate how to containerize a legacy .NET app, including the Docker build and deployment process. We also cover how to deploy the container to Amazon ECS, using Amazon Elastic Container Registry (Amazon ECR) to host the Docker image.
In this session, learn how to architect Microsoft solutions on AWS for both high availability and scalability. Discover how Microsoft solutions can leverage AWS services to achieve more resiliency, replace unnecessary complexity, and provide scalability. We explore hybrid architecture scenarios and common architecture patterns for Microsoft Active Directory and productivity solutions, such as Dynamics AX, CRM, and SharePoint. We also cover common design patterns for .NET applications, including approaches to CI/CD, DevOps, and containerizing .NET applications.
Deploying Microsoft products on AWS is fast, easy, and cost-effective. Before deploying these applications to production, it's helpful to have guidance on approaches for securing them. In this session, we outline the principles for protecting the environment of Microsoft applications hosted on AWS, with a focus on risk assessment, reducing attack surface, adhering to the principle of least privilege, and protecting data.
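As an illustration of the least-privilege principle discussed above, the following is a minimal sketch of an IAM policy document, built in Python, that grants an application read-only access to a single S3 bucket and nothing else. The bucket name is hypothetical.

```python
import json

# Least-privilege IAM policy sketch: the application can only read objects
# from one bucket ("example-app-config" is a placeholder name). No write,
# delete, or list permissions are granted.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadAppConfigOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-config/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Scoping `Action` and `Resource` this narrowly reduces the attack surface: if the application's credentials leak, they cannot be used to modify data or reach other buckets.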
In this session, we discuss best practices and approaches for managing your Microsoft Windows-based infrastructure on AWS. We describe the AWS services that can help you manage Windows servers at scale and realize the maximum benefit of the cloud. In addition, we show you how to build simple and effective solutions to manage logging, configuration drift, inventory, licensing, and more. Please join us for a speaker meet-and-greet following this session at the Speaker Lounge (ARIA East, Level 1, Willow Lounge). The meet-and-greet starts 15 minutes after the session and runs for half an hour.
In this session, we dive deep on best practices and design considerations for running Microsoft SQL Server on AWS. We cover how to choose between running SQL Server on Amazon EC2 and Amazon RDS, how to optimize the performance of SQL Server on AWS, how to leverage the new Optimize CPU feature, and how to deploy SQL Server on Linux. We also review best practices for storage, monitoring, availability, security, and backup and recovery for SQL Server.
In this session, learn how to architect, configure, and deploy an ASP.NET Core microservices application running in containerized AWS Fargate tasks. We cover how to use Amazon DynamoDB for session state and how to use Amazon Cognito for identity management. We also discuss using Amazon ECS for service discovery and AWS CodePipeline to create CI/CD pipelines for each microservice so that each one is individually deployed when an AWS CodeCommit repository is updated. Join us, and learn everything you need to know to start designing and deploying containerized ASP.NET Core applications on AWS.
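The DynamoDB session-state pattern described above typically stores one item per session, keyed by session ID, with a TTL attribute so DynamoDB expires stale sessions automatically. A minimal sketch, with illustrative table and attribute names and the `put_item` call omitted so it runs without AWS credentials:

```python
import time
import uuid

# Sliding session window (illustrative): refresh ExpiresAt on each write.
SESSION_TTL_SECONDS = 20 * 60

def build_session_item(session_id: str, payload: str, now: float) -> dict:
    """Build a DynamoDB item (low-level attribute-value format) for a session."""
    return {
        "SessionId": {"S": session_id},     # partition key (name is illustrative)
        "Payload": {"S": payload},          # serialized session state
        # DynamoDB TTL expects an epoch-seconds number attribute.
        "ExpiresAt": {"N": str(int(now) + SESSION_TTL_SECONDS)},
    }

item = build_session_item(str(uuid.uuid4()), '{"cart":["sku-123"]}', time.time())
# A real service would call
#   boto3.client("dynamodb").put_item(TableName="Sessions", Item=item)
# with TTL enabled on the ExpiresAt attribute of the "Sessions" table.
print(sorted(item.keys()))
```

Because every Fargate task reads and writes the same table, session state survives task restarts and scales with the fleet, which is why the session uses DynamoDB rather than in-memory state.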
The World Wide Public Sector Breakfast Keynote will feature Teresa Carlson, VP, World Wide Public Sector for Amazon Web Services. Customers will hear the latest global updates on the public sector business, including exciting announcements, program updates, and inspiring customer stories. Don't miss the opportunity to hear directly from AWS customers across the public sector as they share the impact cloud has made on their organizations, people, and missions. Breakfast will be provided.
As public sector organizations move to the cloud, they're finding it to be a springboard for innovation. Local and regional organizations around the globe are developing digital services to improve the lives of the citizens they serve, creating platforms where citizens can be heard and increasing engagement within their communities. Join us on this panel as we travel from the Americas to Europe and Singapore and dive into stories of cloud migration and emerging innovations. We hear from Athabasca University, Radio France, the Department for Work and Pensions, and Singapore's Land Transport Authority as they share their individual journeys to the cloud and the lessons they have learned along the way.
In this session, we look at cloud-ready contracts from around the world. We compare and contrast these contracts through scope identification, end-user eligibility, and primary service offerings (i.e., MSP vs. IaaS resale) to help you determine a contract pathway that meets your mission needs. We discuss considerations for buying cloud services so that end users and organizations can extract the full benefits and power of AWS. Join us and gain an understanding of how to approach shared security, utility pricing, data location, innovative services, governance, and terms and conditions for a successful procurement effort.
In this session, learn how Dr. Julia Lane, Director of the Administrative Data Research Facility (ADRF) at NYU, used AWS and AWS Public Sector Partner Earthling Security to build a software-as-a-service (SaaS) research and analysis environment that hosts sensitive U.S. Census Bureau data. The ADRF hosted almost 50 confidential government data sets from 12 different agencies at all levels of government. The ADRF chose AWS to meet strict security and governance requirements such as FedRAMP compliance, ease of implementation, and robust native security. AWS provided NYU a complete set of infrastructure, application, and security services perfectly suited for U.S. government requirements. In addition, AWS discusses how these principles and practices can be applied to an organization's governance, risk, and compliance needs.
On LinkedIn, the #1 skill for three years in a row is cloud and distributed computing. The pressure on our educational system to produce tech-ready employees, and on companies to source talent, is enormous, with ripe opportunities for disintermediation. In this session, hear from the disruptors who are collaborating with AWS Educate to rapidly change the trajectory of the global skills gap. Meet both the supply and demand side for cloud jobs.
AWS GovCloud (US) is isolated AWS infrastructure and services designed to allow US government agencies and enterprises in highly regulated industries to move sensitive data and regulated IT workloads to the cloud by addressing specific regulatory and compliance requirements. While enterprises and organizations are increasingly integrating software as a service (SaaS) technologies into their IT environments, they often require SaaS products to meet the same compliance requirements as the AWS GovCloud (US) Region. In this session, we discuss the opportunities for SaaS in AWS GovCloud (US), how SaaS vendors should approach building products in AWS GovCloud (US), key architecture and operational considerations, and best practices for bringing a SaaS product on AWS GovCloud (US) to market. Complete Title: AWS re:Invent 2018: Unlock Highly Regulated Enterprise Workloads with SaaS on AWS GovCloud (US) (WPS303)
In this session, Fannie Mae discusses how they completely re-architected a mission-critical application using AWS native services to process hundreds of thousands of mortgage loans every day in a highly scalable and reliable manner. The transaction-heavy workload drives over 20 million Amazon S3 transactions a day, each completing within a 150-millisecond response time, providing increased uptime and faster responses.
In this session, we feature the U.S. National Geospatial-Intelligence Agency (NGA), a key stakeholder and sponsor for the new AWS Secret Region, which supports workloads up to the Secret U.S. security classification level and is readily available to the U.S. Intelligence Community (IC). NGA uses AWS Snowball Edge to support the warfighter, utilizing imagery from NGA's Open Data Store and implementing geospatial applications on the edge. AWS Snowball Edge allows NGA to directly support its mission, providing products and services to decision makers, warfighters, and first responders when they need it most. Enabling the edge transforms NGA's ability to share critical resources and data, facilitating user access that meets NGA's mission needs and supports the IC and the Department of Defense as a whole. Complete Title: AWS re:Invent 2018: National Geospatial-Intelligence Agency: Changing the Way the Intelligence Community Moves Data (WPS315)
We Power Tech
Diversity in technology often starts with a focus on women. How do we prioritize the inclusion of women from all communities (race, gender identity, ability status, and other underrepresented and intersectional communities) on technical teams? What can leaders do, from cultivating the pipeline, to hiring, to developing internal strategies, to make the future of tech more diverse and inclusive? Hear from successful executive technical leaders as they explore their journeys through the industry. Leave with solutions on how to prioritize inclusion and drive results. This session is brought to you by AWS partner, Accenture.
Have you ever thought, "I care about a more diverse and inclusive workplace, but what can I do? I'm not the VP of HR or Head of Diversity & Inclusion at my company." If so, you're not alone. Knowing how to be a better ally for underrepresented people working in tech is unfamiliar territory for many of us. Yet, there are everyday actions we can take to create a more diverse and inclusive culture. Come to this session to learn about key challenges women and underrepresented minorities face in our tech workplaces and ways you can make a difference. Learn tips on how to sponsor, champion, amplify, and advocate for all. This session is brought to you by AWS partner, Accenture.
When diversity efforts focus only on gender, they further marginalize the marginalized. How do we make sure that we address the needs of everyone who is underrepresented in the industry? Are we reaching our marginalized customers? What can we do to provide a platform for those voices to be heard and to support their efforts? In this session, we hear from technical leaders on the best ways to work with and include marginalized communities. This session is brought to you by AWS partner, Accenture.
Everyone has a role to play in creating and sustaining an inclusive work environment, but white men often opt out of, or are excluded from, diversity and inclusion efforts because the focus is on underrepresented groups. When white men are fully engaged in diversity efforts, they can help effect change at every level, and they also benefit from a healthy, inclusive workspace. Participants explore assumptions regarding white men, diversity, and leadership that impact behavior and shape the work environment, and they identify the next steps they can take to become fully inclusive partners working across differences to foster an equitable workplace culture. This session is brought to you by AWS partner, Accenture. Complete Title: AWS re:Invent 2018: We Power Tech: The Important Role White Men Play in Creating a Culture of Full Inclusion (WPT204-S)