When it comes to cloud computing, Amazon Web Services (AWS) is the biggest game in town. With roughly 30 percent of the global cloud services market (that’s $5.16 billion in revenue for 2014), it should come as no surprise that AWS often leads the way in terms of new technology, partner ecosystems, and, well, cloud in general.
It should also come as no surprise that AWS’s flagship conference, the yearly AWS re:Invent gathering, is one of the biggest events in the cloud computing industry, where thousands of AWS employees, channel partners, ecosystem collaborators, users, and fans get together to share ideas and discuss cloud strategy and best practices.
This year’s re:Invent conference went down in early October in Las Vegas, and in the weeks since, Amazon’s media teams have been busy uploading videos of the hundreds of technical sessions and presentations that took place at the conference. We trawled through the large collection of videos, available here, and pulled out a few of the most relevant and educational for your consideration. So check ’em out; you might learn something.
ARC401 – Cloud First: New Architecture for New Infrastructure
“What do companies with internal platforms have to change to succeed in the cloud? The five pillars at the heart of IT solutions in the cloud are automation, fault tolerance, horizontal scalability, security, and cost-effectiveness. This talk discusses tools that facilitate the development and automate the deployment of secure, highly available microservices. The tools were developed using AWS CloudFormation, AWS SDKs, AWS CLI, Amazon RDS, and various open-source software such as Docker. The talk provides concrete examples of how these tools can help developers and architects move from beginning/intermediate AWS practitioners to cloud deployment experts.”
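The session itself doesn’t ship sample code, but the CloudFormation-driven approach it describes is easy to sketch. The template below is a minimal, hypothetical example (the resource name and property values are ours, not the speakers’): a single Amazon RDS instance defined as a Python dict and serialized to JSON for deployment.

```python
import json

# A minimal CloudFormation template, expressed as a Python dict, that
# declares a single Amazon RDS database instance. Names and values here
# are illustrative placeholders, not taken from the talk.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal RDS instance for a microservice backend",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t2.micro",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                # In practice, never hard-code credentials in a template.
                "MasterUserPassword": "change-me",
            },
        }
    },
}

# Serialize for use with the AWS CLI or an SDK.
template_json = json.dumps(template, indent=2)
```

You would then deploy with something like `aws cloudformation create-stack --stack-name app-db --template-body file://template.json`; the point of the pattern is that the same template is versioned, reviewed, and reused like any other code.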
ARC340 – Multi-tenant Application Deployment Models
“Shared pools of resources? Microservices in containers? Isolated application stacks? You have many architectural models and AWS services to consider when you deploy applications on AWS. This session focuses on several common models and helps you choose the right path or paths to fit your application needs. Architects and operations managers should consider this session to help them choose the optimal path for their application deployment needs for their current and future architectures. This session covers services such as Amazon Elastic Compute Cloud (Amazon EC2), EC2 Container Services, AWS Lambda, and AWS CodeDeploy.”
ARC313 – Future Banks Live in the Cloud: Building a Usable Cloud with Uncompromising Security
“Running today’s largest consumer bitcoin startup comes with a target on your back and requires an uncompromising approach to security. This talk explores how Coinbase is learning from the past and pulling out all the stops to build a secure infrastructure behind an irreversibly transferrable digital good for millions of users. This session will cover cloud architecture, account and network isolation in the AWS cloud, disaster recovery, self-service consensus-based deployment, real-time streaming insight, and how Coinbase is leveraging practical DevOps to build the bank of the future.”
ARC309 – From Monolithic to Microservices: Evolving Architecture Patterns in the Cloud
“Gilt, a billion dollar e-commerce company, implemented a sophisticated microservices architecture on AWS to handle millions of customers visiting their site at noon every day. The microservices architecture pattern enables independent service scaling, faster deployments, better fault isolation, and graceful degradation. In this session, Derek Chiles, AWS solutions architect, will review best practices and recommended architectures for deploying microservices on AWS. Adrian Trenaman, SVP of engineering at Gilt, will share Gilt’s experiences and lessons learned during their evolution from a single monolithic Rails application in a traditional data center to more than 300 Scala/Java microservices deployed in the cloud.”
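One of the benefits the abstract names, graceful degradation, is easy to illustrate in isolation. The sketch below is a generic illustration, not Gilt’s actual code: if a call to a downstream microservice fails, the caller returns a safe static fallback instead of propagating the failure to the page.

```python
def get_recommendations(fetch, default=("bestsellers",)):
    """Call a downstream recommendation service; degrade gracefully.

    `fetch` is any zero-argument callable that returns a list of
    recommendations (e.g. an HTTP call to a microservice). If it
    raises, return a static fallback list instead of failing the page.
    """
    try:
        return fetch()
    except Exception:
        return list(default)


# A healthy service returns its own results; a failing one degrades.
print(get_recommendations(lambda: ["silk scarf", "leather tote"]))
# → ['silk scarf', 'leather tote']


def _down():
    raise ConnectionError("service unavailable")


print(get_recommendations(_down))
# → ['bestsellers']
```

Isolating each dependency behind a fallback like this is what lets one failing service among hundreds degrade a feature rather than take down the site.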
BDT404 – Building and Managing Large-Scale ETL Data Flows with AWS Data Pipeline and Dataduct
“As data volumes grow, managing and scaling data pipelines for ETL and batch processing can be daunting. With more than 13.5 million learners worldwide, hundreds of courses, and thousands of instructors, Coursera manages over a hundred data pipelines for ETL, batch processing, and new product development. In this session, we dive deep into AWS Data Pipeline and Dataduct, an open source framework built at Coursera to manage pipelines and create reusable patterns to expedite developer productivity. We share the lessons learned during our journey: from basic ETL processes, such as loading data from Amazon RDS to Amazon Redshift, to more sophisticated pipelines to power recommendation engines and search services. Attendees learn:
- Do’s and don’ts of Data Pipeline
- Using Dataduct to streamline your data pipelines
- How to use Data Pipeline to power other data products, such as recommendation systems
- What’s next for Dataduct”
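The RDS-to-Redshift load the abstract mentions typically ends with a Redshift COPY from files staged in S3. The helper below is a hedged sketch of that last step, not part of Dataduct itself, and the table name, bucket, and role ARN are placeholders: it just builds the COPY statement you would execute against the cluster.

```python
def build_copy_statement(table, s3_path, iam_role, fmt="CSV"):
    """Build an Amazon Redshift COPY statement for data staged in S3.

    All argument values used below are illustrative placeholders.
    """
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS {fmt};"
    )


# Hypothetical table, bucket, and role, for illustration only.
stmt = build_copy_statement(
    "learner_events",
    "s3://example-etl-bucket/learner_events/2015-10-01/",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
print(stmt)
```

A pipeline framework’s job is largely to schedule, parameterize, and retry steps like this one so they don’t have to be hand-written for every table.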