The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts. It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.
The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):
- 10 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and an Internet exchange point).
- 12 ms to IAD (home of the US East (Northern Virginia) Region).
- 18 ms to JFK (home to another exchange point).
- 52 ms to SFO (home of the US West (Northern California) Region).
- 68 ms to PDX (home of the US West (Oregon) Region).
With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.
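To make the cross-region idea concrete, here is a minimal sketch of the replication configuration you would hand to S3's `put_bucket_replication` API to replicate objects from an Ohio bucket to a Northern Virginia bucket. The role ARN and bucket names are hypothetical placeholders, not real resources:

```python
# Sketch only: build the ReplicationConfiguration structure accepted by
# s3.put_bucket_replication(). Role and bucket ARNs below are illustrative.

def build_replication_config(role_arn, destination_bucket_arn, prefix=""):
    """Return a replication configuration that replicates all objects
    (or those under `prefix`) to the destination bucket."""
    return {
        "Role": role_arn,  # IAM role that S3 assumes to replicate on your behalf
        "Rules": [
            {
                "ID": "replicate-ohio-to-virginia",
                "Prefix": prefix,      # empty prefix = replicate everything
                "Status": "Enabled",
                "Destination": {"Bucket": destination_bucket_arn},
            }
        ],
    }

config = build_replication_config(
    "arn:aws:iam::123456789012:role/replication-role",
    "arn:aws:s3:::my-virginia-bucket",
)
```

With versioning enabled on both buckets, passing this structure to the API is all it takes to start asynchronous replication between the two Regions.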
Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.
14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.
Since there’s been some industry-wide confusion about the difference between Regions and Availability Zones of late, I think it is important to understand the difference between these two terms. Each Region is a physical location where we have one or more Availability Zones, or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.
Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!
In order to help these organizations take advantage of the benefits that AWS has to offer while building on their existing investment in virtualization, we are working with our friends at VMware to build and deliver VMware Cloud on AWS.
This new offering is a native, fully managed VMware environment on the AWS Cloud that can be accessed on an hourly, on-demand basis or in subscription form. It includes the same core VMware technologies that customers run in their data centers today, including vSphere Hypervisor (ESXi), Virtual SAN (vSAN), and the NSX network virtualization platform, and is designed to provide a clean, seamless experience.
VMware Cloud on AWS runs directly on the physical hardware, while still taking advantage of a host of network and hardware features designed to support our security-first design model. This allows VMware to run their virtualization stack on AWS infrastructure without having to use nested virtualization.
If you find yourself in the situation that I described above—running on-premises virtualization but looking forward to the cloud—I think you’ll find a lot to like here. Your investment in packaging, tooling, and training will continue to pay dividends, as will your existing VMware licenses, agreements, and discounts. Everything that you and your team know about ESXi, vSAN, and NSX remain relevant and valuable. You will be able to manage your entire VMware environment (on-premises and AWS) using your existing copy of vCenter, along with tools and scripts that make use of the vCenter APIs.
The entire roster of AWS compute, storage, database, analytics, mobile, and IoT services can be directly accessed from your applications. Because your VMware applications will be running in the same data centers as the AWS services, you’ll be able to benefit from fast, low-latency connectivity when you use these services to enhance or extend your applications. You’ll also be able to take advantage of AWS migration tools such as AWS Database Migration Service, AWS Import/Export Snowball, and AWS Storage Gateway.
Plenty of Options
VMware Cloud on AWS will give you a lot of different options when it comes to migration, data center consolidation, modernization, and globalization:
On the migration side, you can use vSphere vMotion to live-migrate individual VMs, workloads, or entire data centers to AWS with a couple of clicks. Along the way, as you migrate individual components, you can use AWS Direct Connect to set up a dedicated network connection from your premises to AWS.
When it comes to data center consolidation, you can migrate code and data to AWS without having to alter your existing operational practices, tools, or policies.
When you are ready to modernize, you can take advantage of unique and powerful features such as Amazon Aurora (a highly scalable relational database designed to be compatible with MySQL), Amazon Redshift (a fast, fully managed, petabyte-scale data warehouse), and many other services.
When you need to globalize your business, you can spin up your existing applications in multiple AWS regions with a couple of clicks.
I will share more information on this development as it becomes available. To learn more, visit the VMware Cloud on AWS page.
I am happy to announce that we will be opening an AWS Region in Paris, France in 2017. The new Region will give AWS partners and customers the ability to run their workloads and store their data in France. This will be the fourth AWS Region in Europe. We currently have two other Regions in Europe — EU (Ireland) and EU (Frankfurt) — and an additional Region in the UK is expected to launch in the coming months. Together, these Regions will provide our customers with a total of 10 Availability Zones (AZs) and allow them to architect highly fault tolerant applications while storing their data in the EU.
Today’s announcement means that our global infrastructure now comprises 35 Availability Zones across 13 geographic regions worldwide, with another five AWS Regions (and 12 Availability Zones) in France, Canada, China, Ohio, and the United Kingdom coming online throughout the next year (see the AWS Global Infrastructure page for more info).
As always, we are looking forward to serving new and existing French customers and working with partners across Europe. Of course, the new Region will also be open to existing AWS customers who would like to process and store data in France.
To learn more about the AWS France Region feel free to contact our team in Paris at [email protected].
The new Mumbai region has two Availability Zones, raising the global total to 35. It supports Amazon Elastic Compute Cloud (EC2) (C4, M4, T2, D2, I2, and R3 instances are available) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing. It also supports the following services:
- AWS Certificate Manager (ACM)
- AWS CloudFormation
- Amazon CloudFront
- AWS CloudTrail
- Amazon CloudWatch
- AWS CodeDeploy
- AWS Config
- AWS Direct Connect
- Amazon DynamoDB
- AWS Elastic Beanstalk
- Amazon ElastiCache
- Amazon Elasticsearch Service
- Amazon EMR
- Amazon Glacier
- AWS Identity and Access Management (IAM)
- AWS Import/Export Snowball
- AWS Key Management Service (KMS)
- Amazon Kinesis
- AWS Marketplace
- AWS OpsWorks
- Amazon Redshift
- Amazon Relational Database Service (RDS) – all database engines including Amazon Aurora
- Amazon Route 53
- Amazon Simple Notification Service (SNS)
- Amazon Simple Queue Service (SQS)
- Amazon Simple Storage Service (S3)
- Amazon Simple Workflow Service (SWF)
- AWS Support
- AWS Trusted Advisor
- VM Import/Export
There are now three edge locations (Mumbai, Chennai, and New Delhi) in India. The locations support Amazon Route 53, Amazon CloudFront, and S3 Transfer Acceleration. AWS Direct Connect support is available via our Direct Connect Partners (listed below).
This is our thirteenth region (see the AWS Global Infrastructure map for more information). As usual, you can see the list of regions in the region menu of the Console:
There are over 75,000 active AWS customers in India, representing a diverse base of industries. In the time leading up to today’s launch, we have provided some of these customers with access to the new region in preview form. Two of them (Ola Cabs and NDTV) were kind enough to share some of their experience and observations with us:
Ola Cabs’ mobile app leverages AWS to redefine point-to-point transportation in more than 100 cities across India. AWS allows Ola to innovate faster, delivering new features and services for their customers without compromising the availability or the customer experience of their service. Ankit Bhati (CTO and Co-Founder) told us:
We are using technology to create mobility for a billion Indians, by giving them convenience and access to transportation of their choice. Technology is a key enabler, where we use AWS to drive supreme customer experience, and innovate faster on new features & services for our customers. This has helped us reach 100+ cities & 550K driver partners across India. We do petabyte scale analytics using various AWS big data services and deep learning techniques, allowing us to bring our driver-partners close to our customers when they need them. AWS allows us to make 30+ changes a day to our highly scalable micro-services based platform consisting of 100s of low latency APIs, serving millions of requests a day. We have tried the AWS India region. It is great and should help us further enhance the experience for our customers.
NDTV, India’s leading media house, is watched by millions of people across the world. NDTV has been using AWS since 2009 to run their video platform and all their web properties. During the Indian general elections in May 2014, NDTV fielded an unprecedented amount of web traffic that scaled 26X from 500 million hits per day to 13 billion hits on Election Day (regularly peaking at 400K hits per second), all running on AWS. According to Kawaljit Singh Bedi (CTO of NDTV Convergence):
NDTV is pleased to report very promising results in terms of reliability and stability of AWS’ infrastructure in India in our preview tests. Based on tests that our technical teams have run in India, we have determined that the network latency from the AWS India infrastructure Region are far superior compared to other alternatives. Our web and mobile traffic has jumped by over 30% in the last year and as we expand to new territories like eCommerce and platform-integration we are very excited on the new AWS India region launch. With the portfolio of services AWS will offer at launch, low latency, great reliability, and the ability to meet regulatory requirements within India, NDTV has decided to move these critical applications and IT infrastructure all-in to the AWS India region from our current set-up.
Here are some of our other customers in the region:
Tata Motors Limited, a leading Indian multinational automotive manufacturing company, runs its telematics systems on AWS. Fleet owners use this solution to monitor all vehicles in their fleet in real time. AWS has helped Tata Motors become more agile and has increased their speed of experimentation and innovation.
redBus is India’s leading bus ticketing platform, selling tickets via web, mobile, and bus agents. They now cover over 67K routes in India with over 1,800 bus operators. redBus has scaled to sell more than 40 million bus tickets annually, up from just 2 million in 2010. At peak season, there are over 100 bus ticketing transactions every minute. The company also recently developed a new SaaS app on AWS that gives bus operators the option of handling their own ticketing and managing seat inventories. redBus has gone global, expanding to new geographic locations such as Singapore and Peru using AWS.
Hotstar is India’s largest premium streaming platform with more than 85K hours of drama and movies and coverage of every major global sporting event. Launched in February 2015, Hotstar quickly became one of the fastest adopted new apps anywhere in the world. It has now been downloaded by more than 68M users and has attracted followers on the back of a highly evolved video streaming technology and high attention to quality of experience across devices and platforms.
Macmillan India has provided publishing services to the education market in India for more than 120 years. Macmillan India moved its core enterprise applications — Business Intelligence (BI), Sales and Distribution, Materials Management, Financial Accounting and Controlling, Human Resources, and a customer relationship management (CRM) system — from its existing data center in Chennai to AWS. By moving to AWS, Macmillan India has boosted SAP system availability to almost 100 percent and reduced the time it takes them to provision infrastructure from 6 weeks to 30 minutes.
We are pleased to be working with a broad selection of partners in India. Here’s a sampling:
- AWS Premier Consulting Partners – Cognizant, BlazeClan Technologies Pvt. Limited, Minjar Cloud Solutions Pvt Ltd, and Wipro.
- AWS Consulting Partners – Accenture, BluePi, Cloudcover, Frontier, HCL, Powerupcloud, TCS, and Wipro.
- AWS Technology Partners – Freshdesk, Druva, Indusface, Leadsquared, Manthan, Mithi, Nucleus Software, Newgen, Ramco Systems, Sanovi, and Vinculum.
- AWS Managed Service Providers – Progressive Infotech and Spruha Technologies.
- AWS Direct Connect Partners – AirTel, Colt Technology Services, Global Cloud Xchange, GPX, Hutchison Global Communications, Sify, and Tata Communications.
Amazon Offices in India
We have opened six offices in India since 2011 – Delhi, Mumbai, Hyderabad, Bengaluru, Pune, and Chennai. These offices support our diverse customer base in India including enterprises, government agencies, academic institutions, small-to-mid-size companies, startups, and developers.
The full range of AWS Support options (Basic, Developer, Business, and Enterprise) is also available for the Mumbai Region. All AWS support plans include an unlimited number of account and billing support cases, with no long-term contracts.
Every AWS region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, and PCI DSS Level 1 (to name a few). AWS implements an Information Security Management System (ISMS) that is independently assessed by qualified third parties. These assessments address a wide variety of requirements, which are communicated to customers by making certifications and audit reports available, either on our public-facing website or upon request.
Use it Now
This new region is now open for business and you can start using it today! You can find additional information about the new region, documentation on how to migrate, customer use cases, information on training and other events, and a list of AWS Partners in India on the AWS site.
We have set up a seller of record in India (known as AISPL); please see the AISPL customer agreement for details.
Luca told me that the Arduino Code Editor was designed to simplify and streamline the setup and development process. The editor runs within your browser and is hosted on AWS (although we did not have time to get in to the details, I understand that they made good use of AWS Lambda and several other AWS services).
You can write and modify your code, save it to the cloud and optionally share it with your colleagues and/or friends. The editor can also detect your board (using a small native plugin) and configure itself accordingly; it even makes sure that you can only write code using libraries that are compatible with your board. All of your code is compiled in the cloud and then downloaded to your board for execution.
Here’s what the editor looks like (see Sneak Peek on the New, Web-Based Arduino Create for more):
Arduino Cloud Platform
Because Arduinos are small, easy to program, and consume very little power, they work well in IoT (Internet of Things) applications. Even better, it is easy to connect them to all sorts of sensors, displays, and actuators so that they can collect data and effect changes.
The new Arduino Cloud Platform is designed to simplify the task of building IoT applications that make use of Arduino technology. Connected devices will be able to connect to the Internet, upload information derived from sensors, and effect changes upon command from the cloud. Building upon the functionality provided by AWS IoT, this new platform will allow devices to communicate with the Internet and with each other. While the final details are still under wraps, I believe that this will pave the way for sensors to activate Lambda functions and for Lambda functions to take control of displays and actuators.
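As a rough illustration of the sensor-to-Lambda pattern described above, here is a sketch of the topic rule payload you could pass to AWS IoT's `create_topic_rule` API so that messages published on a sensor topic invoke a Lambda function. The topic name, function ARN, and account ID are hypothetical placeholders:

```python
# Sketch only: build the topicRulePayload structure accepted by
# iot.create_topic_rule(). Topic and function ARN below are illustrative.

def build_topic_rule(topic, function_arn):
    """Return a topic rule payload that forwards every message on
    `topic` to the given Lambda function."""
    return {
        # Rule SQL selects which MQTT messages the rule applies to
        "sql": f"SELECT * FROM '{topic}'",
        "ruleDisabled": False,
        "actions": [
            {"lambda": {"functionArn": function_arn}},
        ],
    }

rule = build_topic_rule(
    "sensors/temperature",
    "arn:aws:lambda:us-east-1:123456789012:function:ProcessReading",
)
```

A device (an Arduino, in this case) publishing readings to `sensors/temperature` would then trigger the function on every message, which is one plausible shape for the sensor-activates-Lambda flow.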
I look forward to learning more about this platform as the details become available!
When I was in high school, I read and reported on a relatively new (for 1977) book titled Future Shock. In the book, futurist Alvin Toffler argued that the rapid pace of change had the potential to overwhelm, stress, and disorient people. While the paper I wrote has long since turned to dust, I do remember arguing that change was good, and that people and organizations would be better served by preparing to accept and to deal with it.

Early in my career I saw that many supposed technologists were far better at clinging to the past than they were at moving into the future. By the time I was 21 I had decided that it would be better for me to live in the future than in the past, and to not just accept change and progress, but to actively seek it out. Now, 35 years after that decision, I can see that I chose the most interesting fork in the road. It has been a privilege to be able to bring you AWS news for well over a decade (I wrote my first post in 2004).
A Decade of IT Change
Looking back at the past decade, it is pretty impressive to see just how much the IT world has changed. Even more impressive, the change is not limited to technology. At the same time that changes on the business side have brought about new ways to acquire, consume, and pay for resources (empowering both enterprises and startups in the process), the words that we use to describe what we do have also changed! A decade ago we would not have spoken of the cloud, microservices, serverless applications, the Internet of Things, containers, or lean startups. We would not have practiced continuous integration, continuous delivery, DevOps, or ChatOps. While you are still trying to understand and implement ChatOps, don’t forget that something even newer called VoiceOps (powered by Alexa) is already on the horizon.
Today, keeping current means staying abreast of developments in programming languages, system architectures, and industry best practices. It means that you spend time every day improving your current skills and looking for new ones. It means becoming comfortable in a new world where multiple deployments per day are commonplace, powered by global teams, and managed by consensus, all while remaining focused on delivering value to the business!
A Decade of AWS
While I hate to play favorites, I would like to quickly review some of my favorite AWS launches and blog posts of the past decade.
First and Still Relevant (2006) – Amazon S3. Incredibly simple in concept yet surprisingly complex behind the scenes, S3 was, as TechCrunch said at the time, game changing!
Servers by the Hour (2006) – Amazon EC2. I wrote the blog post while sitting poolside in Cabo San Lucas. The launch had been imminent for several months, and then became a fact just as I was about to hop on the plane. From that simple start (one instance type, one region, and CLI-only access), EC2 has added feature after feature (most of them driven by customer requests) and is just as relevant today as it was in 2006.
Making Databases Easy (2009) – Amazon Relational Database Service – Having spent a lot of time installing, tuning, and managing MySQL as part of a long-term personal project, I was in a perfect position to appreciate how RDS simplified every aspect of my work.
Advanced Networking (2009) – Amazon Virtual Private Cloud – With the debut of VPC, even conservative enterprises began to take a closer look at AWS. They saw that we understood the networking and isolation challenges that they faced, and were pleased that we were able to address them.
Internet-Scale Data Storage (2012) – Amazon DynamoDB – The NoSQL market was in a state of flux when we launched DynamoDB. Now that the smoke has cleared, I routinely hear about customers that use DynamoDB to store huge amounts of data and to support some pretty incredible request rates.
Data Warehouses in Minutes not Quarters (2012) – Amazon Redshift – Many companies measure implementation time for a data warehouse in terms of quarters or even years. Amazon Redshift showed them that there was a better way to get started.
Desktop Computing in the Cloud (2013) – Amazon WorkSpaces – All too often dismissed as either pedestrian or “great for someone else,” virtual desktops have become an important productivity tool for me and for our customers.
Real Time? How Much Data? (2013) – Amazon Kinesis – Capturing, processing, and deriving value from voluminous streams of data became easier and simpler when we launched Kinesis.
A New Programming Model (2014) – AWS Lambda – This is one of those disruptive, game-changers that you need to be ready for! I have been impressed by the number of traditional organizations that have already built and deployed sophisticated Lambda-powered applications. My expectation that Lambda would be most at home in startups building applications from scratch turned out to be wrong.
Devices are the Future (2015) – AWS IoT – Mass-produced compute power and widespread IP connectivity combine to allow all sorts of interesting devices to be connected to the Internet.
A decade ago, discussion about the risks of cloud computing centered around adoption. It was new and unproven, and raised more questions than it answered. That era passed some time ago. These days, I hear more talk about the risk of not going to the cloud. Organizations of all shapes and sizes want to be nimble, to use modern infrastructure, and to be able to attract professionals with a strong desire to do the same. Today’s employees want to use the latest and most relevant technology in order to be as productive as possible.
I can promise you that the next decade of the cloud will be just as exciting as the one that just concluded. Keep on learning, keep on building, and share your successes with us!
— Jeff;

PS – As you can tell from this post, I strongly believe in the value of continuing education. I discussed this with my colleagues and they have agreed to make the entire set of qwikLABS online labs and learning quests available to all current and potential AWS customers at no charge through the end of March. To learn more, visit qwikLABS.com.
From their headquarters in Asti, Italy, NICE delivers products and solutions to customers all over the world. These products help customers to optimize and centralize their high performance computing (HPC) and visualization workloads while also providing tools that are a great fit for distributed workforces making use of mobile devices.

For Existing Customers
The NICE brand and team will remain intact and will continue to develop and support the EnginFrame and Desktop Cloud Visualization (DCV) products. Customers will continue to receive world-class support and services, enhanced with the backing of the AWS team. Going forward, NICE and AWS will work together to create even better tools and services.
Still Day 1
As Jeff Bezos often says, it is still day 1 and we don’t have all of the answers yet. However, I did want to share this news with you and let you know that we are looking forward to meeting and working with our new colleagues. We expect the deal to close in Q1 of 2016.
Customers are already running GxP workloads on AWS! In order to help speed the adoption for other pharma and medical device manufacturers, we are publishing our new GxP compliance resource today.
The GxP position paper (Considerations for Using AWS Products in GxP Systems) provides interested parties with a brief overview of AWS and of the principal services, and then focuses on a discussion of how they can be used in a GxP system. The recommendations within the paper fit into three categories:
Quality Systems – This section addresses management, personnel, audits, purchasing controls, product assessment, supplier evaluation, supplier agreement, and records & logs.
System Development Life Cycle – This section addresses system development, validation, and operation. As I read this section of the document, it was interesting to learn how the software-defined infrastructure-as-code AWS model allows for better version control and is a great fit for GxP. The ability to use a common set of templates for development, test, and production environments that are all configured in the same way simplifies and streamlines several aspects of GxP compliance.
Regulatory Affairs – This section addresses regulatory submissions, inspections by health authorities, and personal data privacy controls.
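The "common set of templates" point in the System Development Life Cycle section can be sketched concretely. Below, a single CloudFormation-style template is parameterized by environment, so the development, test, and production stacks are structurally identical; only the parameter value differs. The resource name and instance size are illustrative assumptions, not recommendations from the paper:

```python
# Sketch only: one infrastructure-as-code template shared by every
# environment, which keeps dev/test/prod provably identical in shape
# and simplifies GxP change control and validation.
import json

TEMPLATE = {
    "Parameters": {
        "Environment": {
            "Type": "String",
            "AllowedValues": ["dev", "test", "prod"],
        },
    },
    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "m4.large",
                # Tag each stack with the environment it was deployed to
                "Tags": [{"Key": "Environment",
                          "Value": {"Ref": "Environment"}}],
            },
        },
    },
}

# The identical serialized body is deployed everywhere; only the
# Environment parameter changes at stack-creation time.
template_body = json.dumps(TEMPLATE, indent=2)
```

Because the template itself is versioned text, every change to the infrastructure leaves an auditable diff, which is the version-control benefit the paper highlights.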
We hired Lachman Consultants (an internationally renowned compliance consulting firm), and had them contribute to and review an earlier draft of the position paper. The version that we are publishing today reflects their feedback.
Join our Webinar
If you are interested in building cloud-based systems that must adhere to GxP, please join our upcoming GxP Webinar. Scheduled for February 23, this webinar will give you an overview of the new GxP compliance resource and will show you how AWS can facilitate GxP compliance within your organization. You’ll learn about rules-based consistency, compliance-as-code, repeatable software-based testing, and much more.
Today’s news definitely helps us progress towards our goal of 40% renewable energy for our global infrastructure by the end of 2016 and marks more progress in AWS’s march towards our long-term 100% renewable goal, with much more soon to come. Since the Fowler Ridge project, we have announced three other agreements for new wind and solar projects that will be constructed over the coming months and start generating renewable power in late 2016 and early 2017. Stay tuned for more exciting announcements to come in the future as well.
I am happy to announce that we will be opening an AWS region in Montreal, Québec, Canada in the coming year. This region will be carbon-neutral and powered almost entirely by clean, renewable hydro power. The planned Canada-Montreal region will give AWS partners and customers the ability to run their workloads and store their data in Canada. As a reminder, we currently have 4 other regions in North America — US East (Northern Virginia), US West (Northern California), US West (Oregon), and AWS GovCloud (US) — with a total of 13 Availability Zones, plus the planned but not yet operational region coming to Ohio in 2016.
Today’s announcement means that our global infrastructure now comprises 32 Availability Zones across 12 geographic regions worldwide, with another 5 AWS regions (and 11 Availability Zones) in Canada, China, India, Ohio, and the United Kingdom coming online throughout the next year (see the AWS Global Infrastructure page for more info).
As always, we are looking forward to serving new and existing Canadian customers and to working with partners in the area. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Canada.