Now Open – AWS US East (Ohio) Region

by Jeff Barr | in Announcements
As part of our ongoing plan to expand the AWS footprint, I am happy to announce that our new US East (Ohio) Region is now available. In conjunction with the existing US East (Northern Virginia) Region, AWS customers in the Eastern part of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts. It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).

Well Connected
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):

  • 10 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and an Internet exchange point).
  • 12 ms to IAD (home of the US East (Northern Virginia) Region).
  • 18 ms to JFK (home to another exchange point).
  • 52 ms to SFO (home of the US West (Northern California) Region).
  • 68 ms to PDX (home of the US West (Oregon) Region).

With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.
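To make the cross-region pairing concrete, here is a minimal sketch (using boto3) of the configuration that S3 Cross-Region Replication expects. The bucket names and the IAM role ARN are placeholders, and both buckets must have versioning enabled before replication can be turned on:

```python
# Sketch: replicating a bucket in US East (N. Virginia) to a bucket in
# US East (Ohio). All names and ARNs below are placeholders.

def replication_config(dest_bucket_arn, role_arn):
    """Build the configuration dict expected by S3's
    PutBucketReplication API (via boto3)."""
    return {
        "Role": role_arn,
        "Rules": [{
            "ID": "replicate-to-ohio",
            "Prefix": "",          # empty prefix = replicate every object
            "Status": "Enabled",
            "Destination": {"Bucket": dest_bucket_arn},
        }],
    }

config = replication_config(
    "arn:aws:s3:::my-bucket-ohio",                    # placeholder
    "arn:aws:iam::123456789012:role/s3-replication",  # placeholder
)

# To apply it for real (requires credentials and existing buckets):
# import boto3
# boto3.client("s3", region_name="us-east-1").put_bucket_replication(
#     Bucket="my-bucket-virginia", ReplicationConfiguration=config)
```

Once the rule is in place, new objects written to the source bucket are copied to the Ohio bucket automatically, at the inter-region transfer rate noted above.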

Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.

14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.

Since there’s been some industry-wide confusion about the difference between Regions and Availability Zones of late, I think it is important to understand the distinction between these two terms. Each Region is a physical location where we have one or more Availability Zones, or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!


In the Works – VMware Cloud on AWS

by Jeff Barr | in Announcements
The long-standing trend toward on-premises virtualization has helped many enterprises to increase operational efficiency and to wring out as much value from their data centers as possible. Along the way, they have built up a substantial repertoire of architectural skills and operational experience, but now find that they are struggling to match public cloud economics and the AWS pace of innovation.

Because of this, many enterprises are now looking at the AWS Cloud and like what they see. They are enticed by the fact that AWS has data centers in 35 Availability Zones across 13 different locations around the world (with construction underway in five more), see considerable value in the rich set of AWS services and the flexible pay-as-you-go model, and are looking at ways to move into the future while building on an investment in virtualization that often dates back a decade or more.

VMware + AWS = Win
In order to help these organizations take advantage of the benefits that AWS has to offer while building on their existing investment in virtualization, we are working with our friends at VMware to build and deliver VMware Cloud on AWS.

This new offering is a native, fully managed VMware environment on the AWS Cloud that can be accessed on an hourly, on-demand basis or in subscription form. It includes the same core VMware technologies that customers run in their data centers today, including the vSphere Hypervisor (ESXi), Virtual SAN (vSAN), and the NSX network virtualization platform, and is designed to provide a clean, seamless experience.

VMware Cloud on AWS runs directly on the physical hardware, while still taking advantage of a host of network and hardware features designed to support our security-first design model. This allows VMware to run their virtualization stack on AWS infrastructure without having to use nested virtualization.

If you find yourself in the situation that I described above—running on-premises virtualization but looking forward to the cloud—I think you’ll find a lot to like here. Your investment in packaging, tooling, and training will continue to pay dividends, as will your existing VMware licenses, agreements, and discounts. Everything that you and your team know about ESXi, vSAN, and NSX remains relevant and valuable. You will be able to manage your entire VMware environment (on-premises and AWS) using your existing copy of vCenter, along with tools and scripts that make use of the vCenter APIs.

The entire roster of AWS compute, storage, database, analytics, mobile, and IoT services can be directly accessed from your applications. Because your VMware applications will be running in the same data centers as the AWS services, you’ll be able to benefit from fast, low-latency connectivity when you use these services to enhance or extend your applications. You’ll also be able to take advantage of AWS migration tools such as AWS Database Migration Service, AWS Import/Export Snowball, and AWS Storage Gateway.

Plenty of Options
VMware Cloud on AWS will give you a lot of different options when it comes to migration, data center consolidation, modernization, and globalization:

On the migration side, you can use vSphere vMotion to live-migrate individual VMs, workloads, or entire data centers to AWS with a couple of clicks. Along the way, as you migrate individual components, you can use AWS Direct Connect to set up a dedicated network connection from your premises to AWS.

When it comes to data center consolidation, you can migrate code and data to AWS without having to alter your existing operational practices, tools, or policies.

When you are ready to modernize, you can take advantage of unique and  powerful features such as Amazon Aurora (a highly scalable relational database designed to be compatible with MySQL), Amazon Redshift (a fast, fully managed, petabyte-scale data warehouse), and many other services.

When you need to globalize your business, you can spin up your existing applications in multiple AWS regions with a couple of clicks.

Stay Tuned
I will share more information on this development as it becomes available. To learn more, visit the VMware Cloud on AWS page.


Coming in 2017 – New AWS Region in France

by Jeff Barr | in Announcements
As cloud computing becomes the new normal for organizations all over the world and as our customer base becomes larger and more diverse, we will continue to build and launch additional AWS Regions.

Bonjour la France
I am happy to announce that we will be opening an AWS Region in Paris, France in 2017. The new Region will give AWS partners and customers the ability to run their workloads and store their data in France.

This will be the fourth AWS Region in Europe. We currently have two other Regions in Europe — EU (Ireland) and EU (Frankfurt) — with an additional Region in the UK expected to launch in the coming months. Together, these Regions will provide our customers with a total of 10 Availability Zones (AZs) and allow them to architect highly fault-tolerant applications while storing their data in the EU.

Today’s announcement means that our global infrastructure now comprises 35 Availability Zones across 13 geographic regions worldwide, with another five AWS Regions (and 12 Availability Zones) in France, Canada, China, Ohio, and the United Kingdom coming online throughout the next year (see the AWS Global Infrastructure page for more info).

As always, we are looking forward to serving new and existing French customers and working with partners across Europe. Of course, the new Region will also be open to existing AWS customers who would like to process and store data in France.

To learn more about the AWS France Region feel free to contact our team in Paris at [email protected].

Now Open – AWS Asia Pacific (Mumbai) Region

by Jeff Barr | in Announcements
We are expanding the AWS footprint again, this time with a new region in Mumbai, India. AWS customers in the area can use the new Asia Pacific (Mumbai) Region to better serve end users in India.

New Region
The new Mumbai region has two Availability Zones, raising the global total to 35. It supports Amazon Elastic Compute Cloud (EC2) (C4, M4, T2, D2, I2, and R3 instances are available) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing. It also supports the following services:

There are now three edge locations (Mumbai, Chennai, and New Delhi) in India. The locations support Amazon Route 53, Amazon CloudFront, and S3 Transfer Acceleration. AWS Direct Connect support is available via our Direct Connect Partners (listed below).

This is our thirteenth region (see the AWS Global Infrastructure map for more information). As usual, you can see the list of regions in the region menu of the Console:

There are over 75,000 active AWS customers in India, representing a diverse base of industries. In the time leading up to today’s launch, we have provided some of these customers with access to the new region in preview form. Two of them (Ola Cabs and NDTV) were kind enough to share some of their experience and observations with us:

Ola Cabs’ mobile app leverages AWS to redefine point-to-point transportation in more than 100 cities across India. AWS allows Ola to innovate faster, delivering new features and services to their customers without compromising on availability or the customer experience of their service. Ankit Bhati (CTO and Co-Founder) told us:

We are using technology to create mobility for a billion Indians, by giving them convenience and access to transportation of their choice. Technology is a key enabler, where we use AWS to drive supreme customer experience, and innovate faster on new features & services for our customers. This has helped us reach 100+ cities & 550K driver partners across India. We do petabyte scale analytics using various AWS big data services and deep learning techniques, allowing us to bring our driver-partners close to our customers when they need them. AWS allows us to make 30+ changes a day to our highly scalable micro-services based platform consisting of 100s of low latency APIs, serving millions of requests a day. We have tried the AWS India region. It is great and should help us further enhance the experience for our customers.

NDTV, India’s leading media house, is watched by millions of people across the world. NDTV has been using AWS since 2009 to run their video platform and all their web properties. During the Indian general elections in May 2014, NDTV fielded an unprecedented amount of web traffic that scaled 26X from 500 million hits per day to 13 billion hits on Election Day (regularly peaking at 400K hits per second), all running on AWS. According to Kawaljit Singh Bedi (CTO of NDTV Convergence):

NDTV is pleased to report very promising results in terms of reliability and stability of AWS’ infrastructure in India in our preview tests. Based on tests that our technical teams have run in India, we have determined that the network latency from the AWS India infrastructure Region are far superior compared to other alternatives. Our web and mobile traffic has jumped by over 30% in the last year and as we expand to new territories like eCommerce and platform-integration we are very excited on the new AWS India region launch. With the portfolio of services AWS will offer at launch, low latency, great reliability, and the ability to meet regulatory requirements within India, NDTV has decided to move these critical applications and IT infrastructure all-in to the AWS India region from our current set-up.


Here are some of our other customers in the region:

Tata Motors Limited, a leading Indian multinational automotive manufacturing company, runs its telematics systems on AWS. Fleet owners use this solution to monitor all vehicles in their fleet in real time. AWS has helped Tata Motors become more agile and has increased their speed of experimentation and innovation.

redBus is India’s leading bus ticketing platform that sells their tickets via web, mobile, and bus agents. They now cover over 67K routes in India with over 1,800 bus operators. redBus has scaled to sell more than 40 million bus tickets annually, up from just 2 million in 2010. At peak season, there are over 100 bus ticketing transactions every minute. The company also recently developed a new SaaS app on AWS that gives bus operators the option of handling their own ticketing and managing seat inventories. redBus has gone global expanding to new geographic locations such as Singapore and Peru using AWS.

Hotstar is India’s largest premium streaming platform with more than 85K hours of drama and movies and coverage of every major global sporting event. Launched in February 2015, Hotstar quickly became one of the fastest adopted new apps anywhere in the world. It has now been downloaded by more than 68M users and has attracted followers on the back of a highly evolved video streaming technology and high attention to quality of experience across devices and platforms.

Macmillan India has provided publishing services to the education market in India for more than 120 years. Macmillan India moved its core enterprise applications — Business Intelligence (BI), Sales and Distribution, Materials Management, Financial Accounting and Controlling, Human Resources, and a customer relationship management (CRM) system — from an existing data center in Chennai to AWS. By moving to AWS, Macmillan India has boosted SAP system availability to almost 100 percent and reduced the time it takes them to provision infrastructure from 6 weeks to 30 minutes.

We are pleased to be working with a broad selection of partners in India. Here’s a sampling:

  • AWS Premier Consulting Partners – Cognizant, BlazeClan Technologies Pvt. Limited, Minjar Cloud Solutions Pvt Ltd, and Wipro.
  • AWS Consulting Partners – Accenture, BluePi, Cloudcover, Frontier, HCL, Powerupcloud, TCS, and Wipro.
  • AWS Technology Partners – Freshdesk, Druva, Indusface, Leadsquared, Manthan, Mithi, Nucleus Software, Newgen, Ramco Systems, Sanovi, and Vinculum.
  • AWS Managed Service Providers – Progressive Infotech and Spruha Technologies.
  • AWS Direct Connect Partners – AirTel, Colt Technology Services,  Global Cloud Xchange, GPX, Hutchison Global Communications, Sify, and Tata Communications.

Amazon Offices in India
We have opened six offices in India since 2011 – Delhi, Mumbai, Hyderabad, Bengaluru, Pune, and Chennai. These offices support our diverse customer base in India including enterprises, government agencies, academic institutions, small-to-mid-size companies, startups, and developers.

The full range of AWS Support options (Basic, Developer, Business, and Enterprise) is also available for the Mumbai Region. All AWS support plans include an unlimited number of account and billing support cases, with no long-term contracts.

Every AWS region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, and PCI DSS Level 1 (to name a few). AWS implements an Information Security Management System (ISMS) that is independently assessed by qualified third parties. These assessments address a wide variety of requirements, which are communicated to customers by making certifications and audit reports available, either on our public-facing website or upon request.

To learn more, take a look at the AWS Cloud Compliance page and our Data Privacy FAQ.

Use it Now
This new region is now open for business and you can start using it today! You can find additional information about the new region, documentation on how to migrate, customer use cases, information on training and other events, and a list of AWS Partners in India on the AWS site.

We have set up a seller of record in India (known as AISPL); please see the AISPL customer agreement for details.


Arduino Web Editor and Cloud Platform – Powered by AWS

by Jeff Barr | in Amazon Internet Of Things, Announcements, AWS Lambda
Last night I spoke with Luca Cipriani from Arduino to learn more about the new AWS-powered Arduino Web Editor and Arduino Cloud Platform offerings. Luca was en route to the Bay Area Maker Faire and we had just a few minutes to speak, but that was enough time for me to learn a bit about what they have built.

If you have ever used an Arduino, you know that there are several steps involved. First you need to connect the board to your PC’s serial port using a special cable (you can also use Wi-Fi if you have the appropriate add-on “shield”), ensure that the port is properly configured, and establish basic communication. Then you need to install, configure, and launch your development environment, make sure that it can talk to your Arduino, tell it which make and model of Arduino you are using, and select the libraries that you want to call from your code. With all of that taken care of, you are ready to write code, compile it, and then download it to the board for debugging and testing.

Arduino Code Editor
Luca told me that the Arduino Code Editor was designed to simplify and streamline the setup and development process. The editor runs within your browser and is hosted on AWS (although we did not have time to get in to the details, I understand that they made good use of AWS Lambda and several other AWS services).

You can write and modify your code, save it to the cloud and optionally share it with your colleagues and/or friends. The editor can also detect your board (using a small native plugin) and configure itself accordingly; it even makes sure that you can only write code using libraries that are compatible with your board. All of your code is compiled in the cloud and then downloaded to your board for execution.

Here’s what the editor looks like (see Sneak Peek on the New, Web-Based Arduino Create for more):

Arduino Cloud Platform
Because Arduinos are small, easy to program, and consume very little power, they work well in IoT (Internet of Things) applications. Even better, it is easy to connect them to all sorts of sensors, displays, and actuators so that they can collect data and effect changes.

The new Arduino Cloud Platform is designed to simplify the task of building IoT applications that make use of Arduino technology. Connected devices will be able to connect to the Internet, upload information derived from sensors, and effect changes upon command from the cloud. Building upon the functionality provided by AWS IoT, this new platform will allow devices to communicate with the Internet and with each other. While the final details are still under wraps, I believe that this will pave the way for sensors to activate Lambda functions and for Lambda functions to take control of displays and actuators.
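As a thought experiment (the platform details are still under wraps, so the event shape and threshold here are purely my own illustrative assumptions), a sensor-to-actuator round trip through Lambda could look something like this:

```python
# Hypothetical sketch of a Lambda handler that an AWS IoT rule invokes
# with a temperature reading from an Arduino, returning an actuator
# command. The payload fields and threshold are illustrative only.

FAN_ON_THRESHOLD_C = 30.0  # assumed threshold, not from the announcement

def lambda_handler(event, context):
    """Receive one sensor reading and decide on an actuator command."""
    temperature = float(event["temperature_c"])
    command = "fan_on" if temperature > FAN_ON_THRESHOLD_C else "fan_off"
    return {"device_id": event["device_id"], "command": command}

# Local smoke test with a sample payload:
result = lambda_handler({"device_id": "arduino-42", "temperature_c": 31.5}, None)
```

In a real deployment, the returned command would be published back to the device over AWS IoT rather than simply returned to the caller.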

I look forward to learning more about this platform as the details become available!


Ten Years in the AWS Cloud – How Time Flies!

by Jeff Barr | in Announcements
Ten years ago today I announced the launch of Amazon S3 with a simple blog post! It is hard to believe that a decade has passed since then, or that I have written well over 2,000 posts during that time.

Future Shock
When I was in high school, I read and reported on a relatively new (for 1977) book titled Future Shock. In the book, futurist Alvin Toffler argued that the rapid pace of change had the potential to overwhelm, stress, and disorient people. While the paper I wrote has long since turned to dust, I do remember arguing that change was good, and that people and organizations would be better served by preparing to accept and to deal with it.

Early in my career I saw that many supposed technologists were far better at clinging to the past than they were at moving into the future. By the time I was 21 I had decided that it would be better for me to live in the future than in the past, and to not just accept change and progress, but to actively seek it out. Now, 35 years after that decision, I can see that I chose the most interesting fork in the road. It has been a privilege to be able to bring you AWS news for well over a decade (I wrote my first post in 2004).

A Decade of IT Change
Looking back at the past decade, it is pretty impressive to see just how much the IT world has changed. Even more impressive, the change is not limited to technology. Business models have changed, as has the language around them. At the same time that changes on the business side have brought about new ways to acquire, consume, and pay for resources (empowering both enterprises and startups in the process), the words that we use to describe what we do have also changed! A decade ago we would not have spoken of the cloud, microservices, serverless applications, the Internet of Things, containers, or lean startups. We would not have practiced continuous integration, continuous delivery, DevOps, or ChatOps. And while you are still trying to understand and implement ChatOps, don’t forget that something even newer called VoiceOps (powered by Alexa) is already on the horizon.

Of course, dealing with change is not easy. When looking into the future, you need to be able to distinguish between flashy distractions and genuine trends, while remaining flexible enough to pivot if yesterday’s niche becomes today’s mainstream technology. I often use JavaScript to illustrate this phenomenon. If you (like me) are a server-side developer who initially brushed off JavaScript as a simple, browser-only language and chose to ignore it, you were undoubtedly taken by surprise when it was first used to build rich, dynamic Ajax applications and then to run on the server in the form of Node.js.

Today, keeping current means staying abreast of developments in programming languages, system architectures, and industry best practices. It means that you spend time every day improving your current skills and looking for new ones. It means becoming comfortable in a new world where multiple deployments per day are commonplace, powered by global teams, and managed by consensus, all while remaining focused on delivering value to the business!

A Decade of AWS
While I hate to play favorites, I would like to quickly review some of my favorite AWS launches and blog posts of the past decade.

First and Still Relevant (2006) – Amazon S3. Incredibly simple in concept yet surprisingly complex behind the scenes, S3 was, as TechCrunch said at the time, game changing!

Servers by the Hour (2006) – Amazon EC2. I wrote the blog post while sitting poolside in Cabo San Lucas. The launch had been imminent for several months, and then became a fact just as I was about to hop on the plane.  From that simple start (one instance type, one region, and CLI-only access), EC2 has added feature after feature (most of them driven by customer requests) and is just as relevant today as it was in 2006.

Making Databases Easy (2009) – Amazon Relational Database Service – Having spent a lot of time installing, tuning, and managing MySQL as part of a long-term personal project, I was in a perfect position to appreciate how RDS simplified every aspect of my work.

Advanced Networking (2009) – Amazon Virtual Private Cloud – With the debut of VPC, even conservative enterprises began to take a closer look at AWS. They saw that we understood the networking and isolation challenges that they faced, and were pleased that we were able to address them.

Internet-Scale Data Storage (2012) – Amazon DynamoDB – The NoSQL market was in a state of flux when we launched DynamoDB. Now that the smoke has cleared, I routinely hear about customers that use DynamoDB to store huge amounts of data and to support some pretty incredible request rates.

Data Warehouses in Minutes not Quarters (2012) – Amazon Redshift  – Many companies measure implementation time for a data warehouse in terms of quarters or even years. Amazon Redshift showed them that there was a better way to get started.

Desktop Computing in the Cloud (2013) – Amazon WorkSpaces – All too often dismissed as either pedestrian or “great for someone else,” virtual desktops have become an important productivity tool for me and for our customers.

Real Time? How Much Data? (2013) – Amazon Kinesis – Capturing, processing, and deriving value from voluminous streams of data became easier and simpler when we launched Kinesis.

A New Programming Model (2014) – AWS Lambda – This is one of those disruptive, game-changers that you need to be ready for! I have been impressed by the number of traditional organizations that have already built and deployed sophisticated Lambda-powered applications. My expectation that Lambda would be most at home in startups building applications from scratch turned out to be wrong.

Devices are the Future (2015) – AWS IoT – Mass-produced compute power and widespread IP connectivity combine to allow all sorts of interesting devices to be connected to the Internet.

Moving Forward
A decade ago, discussion about the risks of cloud computing centered around adoption. It was new and unproven, and raised more questions than it answered. That era passed some time ago. These days, I hear more talk about the risk of not going to the cloud. Organizations of all shapes and sizes want to be nimble, to use modern infrastructure, and to be able to attract professionals with a strong desire to do the same. Today’s employees want to use the latest and most relevant technology in order to be as productive as possible.

I can promise you that the next decade of the cloud will be just as exciting as the one that just concluded. Keep on learning, keep on building, and share your successes with us!

Jeff;

PS – As you can tell from this post, I strongly believe in the value of continuing education. I discussed this with my colleagues and they have agreed to make the entire set of qwikLABS online labs and learning quests available to all current and potential AWS customers at no charge through the end of March. To learn more, visit

Amazon Web Services to Acquire NICE

by Jeff Barr | in Announcements
I would like to extend a warm welcome to our new colleagues at NICE. We have signed an agreement to acquire this leading provider of software and services for high performance and technical computing.

Products for HPC
From their headquarters in Asti, Italy, NICE delivers products and solutions to customers all over the world. These products help customers to optimize and centralize their high performance computing (HPC) and visualization workloads while also providing tools that are a great fit for distributed workforces making use of mobile devices.

For Existing Customers
The NICE brand and team will remain intact and will continue to develop and support the EnginFrame and Desktop Cloud Visualization (DCV) products. Customers will continue to receive world-class support and services, enhanced with the backing of the AWS team. Going forward, NICE and AWS will work together to create even better tools and services.

Still Day 1
As Jeff Bezos often says, it is still day 1 and we don’t have all of the answers yet. However, I did want to share this news with you and let you know that we are looking forward to meeting and working with our new colleagues. We expect the deal to close in Q1 of 2016.


New – GxP Compliance Resource for AWS

by Jeff Barr | in Announcements, Security
Ever since we launched AWS, customers have been curious about how they can use it to build and run applications that must meet many different types of regulatory requirements. For example, potential AWS users in the pharmaceutical, biotech, and medical device industries are subject to a set of guidelines and practices that are commonly known as GxP. In those industries, the x can represent Laboratory (GLP), Clinical (GCP), or Manufacturing (GMP).

These practices are intended to ensure that a product is safe and that it works as intended. Many of the practices are focused on traceability (the ability to reconstruct the development history of a drug or medical device) and accountability (the ability to learn who has contributed what to the development, and when they did it). For IT pros in regulated industries, GxP is important because it has requirements on how electronic records are stored, as well as how the systems that store these records are tested and maintained.

Because the practices became prominent at a time when static, on-premises infrastructure was the norm, companies have developed practices that made sense in that environment but not in the cloud. For example, many organizations perform point-in-time testing of their on-premises infrastructure and are not taking advantage of all that the cloud has to offer. With the cloud, practices such as dynamic verification of configuration changes, compliance-as-code, and the use of template-driven infrastructure are easy to implement and can have important compliance benefits.

New Resource
Customers are already running GxP workloads on AWS! In order to help speed adoption for other pharma and medical device manufacturers, we are publishing our new GxP compliance resource today.

The GxP position paper (Considerations for Using AWS Products in GxP Systems) provides interested parties with a brief overview of AWS and of the principal services, and then focuses on a discussion of how they can be used in a GxP system. The recommendations within the paper fit into three categories:

Quality Systems – This section addresses management, personnel, audits, purchasing controls, product assessment, supplier evaluation, supplier agreement, and records & logs.

System Development Life Cycle – This section addresses system development, validation, and operation. As I read this section of the document, it was interesting to learn how AWS’s software-defined, infrastructure-as-code model allows for better version control and is a great fit for GxP. The ability to use a common set of templates for development, test, and production environments that are all configured in the same way simplifies and streamlines several aspects of GxP compliance.

Regulatory Affairs – This section addresses regulatory submissions, inspections by health authorities, and personal data privacy controls.
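The System Development Life Cycle point about identical, template-driven environments can be sketched as follows. This is a minimal illustration only: a real deployment would use CloudFormation templates rather than Python dicts, and the configuration keys and environment names here are invented for the example.

```python
# Sketch of template-driven environments (illustrative; not CloudFormation).
# Every environment inherits the same base configuration; only the
# explicitly listed overrides may differ between environments.

BASE_TEMPLATE = {
    "instance_type": "m4.large",
    "encrypted_storage": True,
    "logging_enabled": True,   # supports GxP traceability requirements
}

ENV_OVERRIDES = {
    "development": {"instance_count": 1},
    "test":        {"instance_count": 2},
    "production":  {"instance_count": 4},
}

def render_environment(env):
    """Combine the shared base template with per-environment sizing."""
    if env not in ENV_OVERRIDES:
        raise ValueError(f"unknown environment: {env}")
    return {**BASE_TEMPLATE, **ENV_OVERRIDES[env]}

for env in ENV_OVERRIDES:
    print(env, render_environment(env))
```

Because dev, test, and production are all rendered from one source, a validated configuration property (encryption, logging) cannot silently drift between environments, which is the version-control benefit the paper highlights.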

We hired Lachman Consultants (an internationally renowned compliance consulting firm), and had them contribute to and review an earlier draft of the position paper. The version that we are publishing today reflects their feedback.

Join our Webinar
If you are interested in building cloud-based systems that must adhere to GxP, please join our upcoming GxP Webinar. Scheduled for February 23, this webinar will give you an overview of the new GxP compliance resource and will show you how AWS can facilitate GxP compliance within your organization. You’ll learn about rules-based consistency, compliance-as-code, repeatable software-based testing, and much more.

Jeff;

PS – The AWS Life Sciences page is another great resource!

Amazon Wind Farm Fowler Ridge is Live!

by Jeff Barr | in Announcements
Back in November 2014 AWS made a long-term commitment to achieve 100% renewable energy usage for our global infrastructure footprint, and we continue to make progress towards this goal. Today’s news on this topic is particularly exciting for us: our Amazon Wind Farm Fowler Ridge, located in Benton County, Indiana, is now live and producing electricity! This marks the first of our four announced renewable energy projects to move into full operation.

We first announced that we teamed with Pattern Energy Group LP (Pattern Development) on the construction and operation of the 150 megawatt (MW) Amazon Wind Farm Fowler Ridge in January of last year. Over the course of the summer and fall of 2015, Pattern erected the wind farm’s 65 utility-scale turbines. Each of those turbines is tall, so tall in fact that the rotor axis, called a nacelle, sits at about the height of a 26-story building, with blades long enough to sweep an area about the size of a football field.

On January 1, 2016, its first day of full operation, Amazon Wind Farm Fowler Ridge generated more than 1.1 million kilowatt-hours of renewable electricity, enough to power over 100 US homes for an entire year! Each year the project is expected to generate enough renewable electricity to power the equivalent of approximately 46,000 homes. With this wind farm, we’re able to increase the amount of renewable energy produced in the grid that powers AWS’s US East Region in Northern Virginia and the upcoming US Region in Ohio. Over time we’ll continue to add more wind and solar power delivered into the grids, reducing the amount of coal and other fossil fuels needed to power those grids.

Today’s news definitely helps us progress towards our goal of 40% renewable energy for our global infrastructure by the end of 2016 and marks more progress in AWS’s march towards our long-term 100% renewable goal, with much more soon to come. Since the Fowler Ridge project, we have announced three other agreements for new wind and solar projects that will be constructed over the coming months and start generating renewable power in late 2016 and early 2017. Stay tuned for more exciting announcements to come in the future as well.


In the Works – AWS Region in Canada

by Jeff Barr | in Announcements
We continue to announce, build, and launch additional AWS regions as our customer base becomes larger, more diverse, and accustomed to running many different types of workloads in the cloud.

Hello, Canada
I am happy to announce that we will be opening an AWS region in Montreal, Québec, Canada in the coming year. This region will be carbon-neutral and powered almost entirely by clean, renewable hydro power.

The planned Canada-Montreal region will give AWS partners and customers the ability to run their workloads and store their data in Canada. As a reminder, we currently have 4 other regions in North America—US East (Northern Virginia), US West (Northern California), US West (Oregon), and AWS GovCloud (US)—with a total of 13 Availability Zones, plus the planned but not yet operational region coming to Ohio in 2016.

Today’s announcement means that our global infrastructure now comprises 32 Availability Zones across 12 geographic regions worldwide, with another 5 AWS regions (and 11 Availability Zones) in Canada, China, India, Ohio, and the United Kingdom coming online throughout the next year (see the AWS Global Infrastructure page for more info).

As always, we are looking forward to serving new and existing Canadian customers and to working with partners in the area. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Canada.
