AWS reveals more bits of its cloud broke as it recovered from DynamoDB debacle
Amazon Web Services has revealed that its efforts to recover from the massive mess at its US-EAST-1 region caused other services to fail.
The most recent update to the cloud giant’s service health page opens by recounting how a DNS mess meant services could not reach a DynamoDB API, which led to widespread outages.
AWS got that sorted at 2:24 AM PDT on October 20th.
But then things went pear-shaped in other ways.
“After resolving the DynamoDB DNS issue, services began recovering but we had a subsequent impairment in the internal subsystem of EC2 that is responsible for launching EC2 instances due to its dependency on DynamoDB,” the status page explains. Not being able to launch EC2 instances meant Amazon’s foundational rent-a-server offering was degraded, a significant issue because many users rely on the ability to automatically create servers as and when needed.
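AWS's write-up doesn't describe how affected customers coped while the EC2 launch subsystem was impaired, but a common defensive pattern in this situation is to retry instance launches with exponential backoff rather than hammering a degraded control plane. The sketch below, using boto3, illustrates that idea; the AMI ID, instance type, retry limits, and error codes chosen are illustrative assumptions, not anything taken from AWS's incident report.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

# Illustrative placeholders -- not values from AWS's incident write-up.
AMI_ID = "ami-0123456789abcdef0"
INSTANCE_TYPE = "t3.micro"
MAX_ATTEMPTS = 5


def launch_with_backoff(ec2_client):
    """Try to launch one instance, retrying with exponential backoff.

    When the EC2 launch path is impaired, RunInstances calls may fail or
    throttle; backing off gives the control plane room to recover.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            resp = ec2_client.run_instances(
                ImageId=AMI_ID,
                InstanceType=INSTANCE_TYPE,
                MinCount=1,
                MaxCount=1,
            )
            return resp["Instances"][0]["InstanceId"]
        except ClientError as err:
            code = err.response["Error"]["Code"]
            # Retry only transient, capacity- or throttling-style failures;
            # bad parameters or auth errors should surface immediately.
            if code not in ("InsufficientInstanceCapacity",
                            "RequestLimitExceeded",
                            "InternalError"):
                raise
            time.sleep(min(60, 2 ** attempt) + random.random())
    raise RuntimeError(f"instance launch failed after {MAX_ATTEMPTS} attempts")


if __name__ == "__main__":
    ec2 = boto3.client("ec2", region_name="us-east-1")
    print("launched", launch_with_backoff(ec2))
```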
While Amazonian engineers tried ...