AWS Certified Solutions Architect – Associate Guide
The ultimate exam guide to AWS Solutions Architect certification
By Gabriel Ramirez, Isaias Ramirez Melgarejo, Stuart Scott

Introducing Amazon Web Services

Welcome to the journey of becoming an Amazon Web Services (AWS) solutions architect. A path full of challenges, but also full of knowledge, awaits you. To begin, I'd like to define the role of a solutions architect in the software-engineering context. Architecture has a lot to do with technology, but it also has a lot to do with everything else; it is the discipline responsible for the non-functional requirements and provides a model for designing the Quality of Service (QoS) of information systems.

Architecture is about finding the right balance and the midpoint of every circumstance. It is about understanding the environment in which problems are created, involving the people, the processes, the organizational culture, the business capabilities, and any external drivers that can influence the success of a project.

We will learn that part of our role as solutions architects is to evaluate several trade-offs, manage the essential complexity of things, their technical evolution, and the inherent entropy of complex systems.

The following topics will be covered in this chapter:

  • Understanding cloud computing
  • Cloud design patterns and principles
  • Shared security model
  • Identity and access management

Technical requirements

Minimizing complexity

A widely used strategy for solving difficult problems is functional decomposition, that is, breaking a complex system or process into manageable parts. A common pattern for this is the multilayered architecture, also known as the n-tier architecture, by which we decompose big systems into logical layers, each focused on a single responsibility, gaining characteristics such as scalability, flexibility, reusability, and many other benefits. The three-tier architecture is a popular pattern used to decompose monolithic applications and design distributed systems by isolating their functions into three different services:

  • Presentation Tier: This represents the component responsible for the user interface, in which user actions and events are generated via a web page, a mobile application, and so on.
  • Logic Tier: This is the middleware, the middle tier where the business logic is found. This tier can be implemented via web servers or application servers; here, every presentation tier event gets translated into service methods and business functions.
  • Data Tier: The persistence layer interacts with the logic tier to maintain user state and behavior; it is the central repository of data for the application. Examples are database management systems (DBMS) or distributed memory-caching systems.

Conway's law

"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."

This sentence shows the relevance of the way people organize to develop systems, and how this impacts every design decision we make. We will go into depth in later chapters on microservices architectures, discussing how we can decouple systems and remove the barriers that prevent them from evolving. Bear in mind that this book will show you a new way of systems thinking, and with AWS you have the tools to solve many kinds of problems and create very sophisticated solutions.

Cloud computing

Cloud computing is a service model based on large pools of resources exposed through web interfaces, with the objective of providing shareable, elastic, and secure services on demand with low cost and high flexibility.

Architecting for AWS

Designing cloud-based architectures requires a different approach than designing traditional solutions, because physical hardware and infrastructure are now treated as software. This brings many benefits, such as reusability, high cohesion, a uniform service interface, and flexible operations.

It's easy to make use of on-demand resources when they are needed and to modify their attributes in a matter of minutes. We can also provision complex structures declaratively and adapt services to the demand patterns of our users. In this chapter, we will be discussing the design principles that make the best use of AWS.

Cloud design principles

These principles form the fundamental pillars on which well-architected and well-designed systems must be built:

  • Enable scalability:
    • Antipattern: Manually adding capacity reactively rather than proactively. Passive detection of failures and service limits can result in downtime for applications and is prone to human error due to limited reaction timespans:

From the diagram, we can see that instances take time to be fully usable, and the process is human-dependent.

    • Best practice: The elastic nature of AWS services makes it possible to manage changes in demand and adapt to the consumer patterns with the possibility to reach global audiences. When a resource is not elastic, it is possible to use Auto Scaling or serverless approaches:

Auto Scaling automatically spins up instances to compensate for demand.
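
As an illustration only, a minimal AWS CLI sketch of such an Auto Scaling setup (the launch template, subnet IDs, and group name are hypothetical placeholders):

# Create an Auto Scaling group from an existing launch template (names and IDs are placeholders)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template,Version='$Latest' \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

# Track average CPU so capacity follows demand without manual intervention
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'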

  • Automate your environment:
    • Antipattern: Ignoring configuration changes and lacking a configuration management database (CMDB) can result in erratic behavior, loss of visibility, and a high impact on critical production systems. The absence of robust monitoring solutions results in fragile systems and slow responses to change requests, compromising security and the system's stability:

AWS Config records every change in the resources and provides a unique source of truth for configuration changes.

    • Best practice: Relying on automation, from scripts to specialized monitoring services, will help us gain reliability and make every cloud operation secure and consistent. The early detection of failures and deviations from normal parameters will support the process of fixing bugs and prevent issues from becoming risks. It is possible to establish a monitoring strategy that accounts for every layer of our systems. Artificial intelligence can be used to analyze the stream of changes in real time and react to these events as they occur, facilitating agile operations in the cloud:

Config can be used to durably store configuration changes for later inspection, react in real time with push notifications, and visualize real-time operations dashboards.
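
For example, assuming a configuration recorder has already been set up in the account, the change history of a single resource can be pulled from the CLI (the security group ID below is a placeholder):

# Check the recorders defined in the account and make sure recording is on
aws configservice describe-configuration-recorders
aws configservice start-configuration-recorder --configuration-recorder-name default

# Retrieve the recorded configuration changes for one resource
aws configservice get-resource-config-history \
  --resource-type AWS::EC2::SecurityGroup \
  --resource-id sg-0123456789abcdef0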

  • Use disposable resources:
    • Antipattern: Running instances with low utilization or over capacity can result in higher costs. A poor understanding of each service's features and capabilities can result in higher expenses, lower performance, and administration overhead. Maybe you are not using the right tool for the job. Immutable infrastructure is a key aspect of using disposable resources, so you can create and replace components in a declarative way:

Tagging will help you gain control and provides a means to orchestrate change management. The previous diagram shows how tagging can help discover compute resources that should be stopped and started to run only during office hours across different regions.

    • Best practice: Using reserved instances promotes a fast Return on Investment (ROI). Running instances only when they are needed, or using services such as Auto Scaling and AWS Lambda, ensures resources are consumed only when required, thus optimizing costs and operations. Practices such as Infrastructure as Code (IaC) give us the ability to create full-scale environments and perform production-level testing. When tests are over, you can tear down the testing environment:

Auto Scaling lets the customer specify the scaling needs without incurring additional costs.
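
A small sketch of the tag-driven schedule mentioned above, assuming instances carry a hypothetical Schedule=office-hours tag; a scheduler (cron, Lambda, or Systems Manager) would run it outside office hours:

#!/bin/bash
# Find running instances tagged to run only during office hours and stop them
ids=$(aws ec2 describe-instances \
  --filters "Name=tag:Schedule,Values=office-hours" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text)

[ -n "$ids" ] && aws ec2 stop-instances --instance-ids $ids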

  • Loosely couple your components:
    • Antipattern: Tightly coupled systems hinder scalability and create a high degree of dependency at the component and service levels. Systems become more rigid and less portable, and it is very complicated to introduce changes and improvements. Coupling not only happens at the software or infrastructure levels, but also with service providers, by using proprietary solutions or services that create dependencies on a brand, forcing us to consume their products with the inherent restrictions of the maker:

Software changes constantly, and we need to keep ourselves updated to avoid the erosion of the operating systems and technology stacks.

    • Best practice: Using levels of indirection avoids direct communication and reduces state sharing between components, thus keeping coupling low and cohesion high. Extracting configuration data and exposing it as services will permit evolution, flexibility, and adaptability to changes in the future.

It is possible to replace a component with another if the interface has not changed significantly. It is fundamental to use standards-based technologies and protocols that bring interoperability such as HTTP and RESTful web services:

A good strategy to decouple is to use managed services.
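
As a hedged example of decoupling through a managed service, two components can exchange work through an Amazon SQS queue instead of calling each other directly (the queue name and message body are illustrative):

# Producer side: create the queue once, then publish work items to it
aws sqs create-queue --queue-name orders
QUEUE_URL=$(aws sqs get-queue-url --queue-name orders --query QueueUrl --output text)
aws sqs send-message --queue-url "$QUEUE_URL" --message-body '{"orderId": 42}'

# Consumer side: poll for messages whenever capacity is available
aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 10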

  • Design services, not servers:
    • Antipattern: Investing time and effort in the management of storage, caching, balancing, analysis, streaming, and so on, is time-consuming and distracts us from the main purpose of the business: the creation of valuable solutions for our customers. We need to focus on product development and not on infrastructure and operations. Large-scale in-house solutions are complicated, and they require a high level of expertise and budget for research and fine-tuning:
This image depicts the architecture needed for a scalable NoSQL database
    • Best practice: Cloud-based services abstract away the problem of operating these specialized services at large scale and offload operations management to a third party. Many AWS services are backed by Service Level Agreements (SLAs), and these SLAs are passed down to our customers. In the end, this improves the brand's reputation and our security posture; it will also enable organizations to reach higher levels of compliance:
Communication in-transit is secure between AWS services
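
For instance, instead of operating the NoSQL cluster depicted above ourselves, a managed table can be provisioned with a single call; a sketch with an illustrative table name and on-demand capacity:

# Create a fully managed NoSQL table; AWS operates the servers, storage, and replication behind it
aws dynamodb create-table \
  --table-name Sessions \
  --attribute-definitions AttributeName=SessionId,AttributeType=S \
  --key-schema AttributeName=SessionId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST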
  • Choose the right database solutions:
    • Antipattern: Taking a relational database management system (RDBMS) as a silver bullet and using the same approach to solve any kind of problem, from temporary storage to Big Data. Not taking into account the usage patterns of our data, not having a data governance model, and incorrect classification can leave data exposed to third parties. Working in silos will result in increased costs associated with the extract, transform, and load (ETL) pipelines built over dispersed data, obscuring valuable knowledge:

Big Data and analytics workloads will require a lot of effort and capacity from an RDBMS datastore.

    • Best practice: Classify information according to its level of sensitivity and risk, to establish controls that meet clear security objectives, such as confidentiality, integrity, and availability (CIA). Design solutions with purpose-specific services for each use case, such as caching, distributed search engines, and non-relational databases. All of these will contribute to the flexibility and adaptability of the business. Understanding data temperature will improve the efficiency of storage and recovery solutions while optimizing costs. Working with managed databases will make it possible to analyze data at petabyte scale, process huge quantities of information in parallel, and create data-processing pipelines for batch and real-time patterns:
Some specific-purpose storage services in AWS
  • Avoid single points of failure:
    • Antipattern: A chain is as strong as its weakest link. Monolithic applications, a low-throughput network card, or web servers without enough RAM can bring a whole system down. Non-scalable resources can represent single points of failure, as can a database license that prevents the use of more CPU cores. Points of failure can also be related to people performing unsupervised activities and processes without proper documentation; a lack of agility can also become a constraint in critical production operations:
    • Best practice: Active-passive architectures avoid complete service outages, and redundant components enable business continuity by performing switchover and failover when necessary. The data and control planes must follow this N+1 design paradigm, avoiding bottlenecks and single component failures that compromise the whole operation. Managed services offer up to 99.95% availability regionally, offloading responsibilities from the customer. Experienced AWS solutions architects can design sophisticated solutions with SLAs of up to five nines:
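
One common way to remove a database as a single point of failure is a Multi-AZ deployment, where a standby replica takes over automatically on failure; a minimal sketch (the identifier, credentials, and sizes are placeholders):

# Provision a MySQL instance with a synchronous standby in another Availability Zone
aws rds create-db-instance \
  --db-instance-identifier app-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'ChangeMe-123' \
  --multi-az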
  • Optimize for cost:
    • Antipattern: Using big servers for simple compute functions, such as authentication or email relay, or keeping instances running 24/7 when traffic is intermittent, can lead to elevated costs in the long term. Adding bigger instances to improve performance without a performance objective and proper tuning won't solve the problem. Even poorly designed storage solutions will cost more than expected. Billing can go out of control if expenses are not monitored in detail.

It is common to provision resources for testing purposes or to leave instances idle, forgetting to remove them. Sometimes, instances need to communicate with resources or services in another geographic region, increasing complexity and costs due to inter-region transfer rates:

    • Best practice: Replace traditional servers with containers or serverless solutions. Consider using Docker to maximize instance resources, and AWS Lambda to run recurring compute tasks only when needed. Reserving compute capacity can decrease your costs significantly, by up to 95%. Leverage managed services features that can store transient data such as sessions, messages, streams, system metrics, and logs:
  • Use caching:
    • Antipattern: Repeatedly accessing the same group of data, storing that data in a medium not optimized for read workloads, and applications suffering from the physical distance between the client and the service endpoint. The user gets a poor experience when the network is not available.

Costs increase due to redundant read requests and cross-region transfer rates; there is also no life cycle for storage and no metrics that can warn about the usage patterns of data:

    • Best practice: Identify the most-used queries and objects and optimize access to this information by transferring a copy of the data to the location closest to your end users. By using in-memory storage technologies, it is possible to achieve microsecond latencies and retrieve huge amounts of data. It is necessary to use caching strategies at multiple levels, even storing commonly accessed data directly on the client, for example in mobile applications using search catalogs.

Caching services can lower your costs and offload backend stress by moving this data closer to the consumer. It is even possible to offer degraded experiences without the total service disruption in the case of a backend failure:
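
A sketch of standing up a managed cache to absorb those repeated reads (the cluster ID and node type are illustrative):

# Launch a single-node Redis cluster; the application checks it for hot data before hitting the database
aws elasticache create-cache-cluster \
  --cache-cluster-id app-cache \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1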

  • Secure your infrastructure everywhere:
    • Antipattern: Trusting the operating system's firewalls and naively believing that every workload in the cloud is 100% secure out of the box. Waiting until a security breach occurs to take corrective measures, perhaps thinking that only Fortune 500 companies are victims of Distributed Denial of Service (DDoS) attacks. Implementing HTTPS but no security at rest greatly compromises the organization's assets and makes it easy prey for ransomware attacks.

Using the root account and lacking a log management structure take visibility and auditability away. Not having an InfoSec department or ignoring security best practices can lead to a complete loss of credibility, and of our clients:

    • Best practice: Security must be a holistic effort. It must be implemented at every layer using a systemic approach. Security needs to be automated to react immediately when a breach or unusual activity is found; this way, it is possible to remediate issues as they are detected. In AWS, it is possible to segregate network traffic and use managed services that help us protect valuable assets with complete visibility of operations and management.

At-rest data can be safeguarded by using encryption with cryptographic keys generated on demand, delegating the management and usage of these keys to users designated for these jobs. Data in transit must be protected by standard transmission protocols such as IPsec and TLS:

The CIA triad is a commonly used model to achieve information security
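
A brief sketch of at-rest protection with keys managed by KMS (the key description and file names are illustrative):

# Create a customer master key and use it to encrypt a local file
KEY_ID=$(aws kms create-key --description "App data key" --query KeyMetadata.KeyId --output text)
aws kms encrypt \
  --key-id "$KEY_ID" \
  --plaintext fileb://secret.txt \
  --query CiphertextBlob --output text > secret.enc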

Cloud design patterns – CDP

Cloud design patterns are a collection of solutions for solving common problems using AWS technologies. It is a library of recurrent problems that solutions architects deal with frequently when implementing and designing for the cloud. This is curated by the Ninja of Three (NoT) team.

These patterns will be used throughout this book as a reference, so you can get acquainted with them and understand their rationale and implementation in detail.

You can find detailed information at this URL: http://en.clouddesignpattern.org/index.php/Main_Page.

AWS Cloud Adoption Framework – AWS CAF

The Cloud Adoption Framework offers six perspectives to help businesses and organizations create an actionable plan for the change management associated with their cloud strategies. It is a way to align business and technology to produce successful results:

The CAF Action Plan template

The six perspectives are grouped as follows:

  • CAF business perspectives:
    • Business perspective: Aligns business and IT into a single model
    • People perspective: Personnel management to improve their cloud competencies
    • Governance perspective: Follows best practices to improve enterprise management and performance
  • CAF technical perspectives:
    • Platform perspective: Includes the strategic design, the principles, patterns, and tools to help you with the architecture transition process
    • Security perspective: Focuses on compliance and risk management to define security controls
    • Operations perspective: This perspective helps to identify the processes and stakeholders relevant to execute change management and business continuity

AWS Well-Architected Framework – AWS WAF

The AWS Well-Architected Framework takes a structured approach to designing, implementing, and adopting cloud technologies, and it is organized around five pillars so you can look at a problem from different angles. These areas of interest are sometimes neglected because of time constraints and misalignment with compliance frameworks. This book relies strongly on the pillars listed here:

  • Operational excellence
  • Security
  • Reliability
  • Performance efficiency
  • Cost optimization
More information and WAF whitepapers can be found here: https://aws.amazon.com/architecture/well-architected/.

Shared security model

This model is the way AWS frees the customer from the responsibility of establishing controls at the infrastructure, platform, and service levels by implementing them through its services. In this sense, the customer must take full control of the implementation in some cases, or work in a hybrid model where the customer provides their own solutions, complementing the existing ones in the cloud:

The previous diagram shows that AWS is responsible for the security of the cloud; this involves the software and hardware infrastructure and core services. The customer is responsible for security in the cloud: everything they run in the cloud and the data they own.

To clarify this model, we will use a simple web server example and explain for every step which controls are in place for the customer and for AWS:

To create our web server, we will create an instance.

In the EC2 console choose Launch Instance:

Following are the details of the instance:

AWS/customer

  • In this example, let's create an instance (1); this image (Amazon Linux AMI) is managed by AWS, it is security hardened, and it comes preconfigured with software packages from trusted sources only
  • Instances run isolated from other clients by virtual interfaces that run on a custom version of the Xen hypervisor
  • Every disk block is zeroized and RAM is randomized

The previous example is an example of an inherited control (virtualization type) and a shared control (virtual image).

The highlighted components represent the ones relevant for this example.
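
Outside the console, the same launch can be approximated with the CLI; a sketch, where the AMI ID is a placeholder for the current Amazon Linux image in your region:

# Launch one instance from an AWS-managed Amazon Linux AMI (the image ID below is a placeholder)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --count 1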

The next screen is for the configuration of the network attributes and the tenancy mode:

The following are the details of instance configuration:

AWS

Every instance runs in a virtual private cloud (Network) (1); the network is an infrastructure-protected service, and the customer inherits this protection, which enables workload isolation to the account level.

Customer

It is possible to segregate the network by means of public and private subnets; route tables function as a traffic control mechanism between networks, service endpoints, and on-premises networks.

Customer

Identity and Access Management is the service dedicated to user management and account access.

IAM Roles are meant to improve security from the customer perspective by establishing trust relationships between services and other parties. EC2AccessToS3Role (2) will allow an instance to invoke service actions on S3 securely to store and retrieve data.

AWS/customer

The Tenancy property (3) is a shared control by which AWS implements security at some layers and the customer will implement security in other layers. It is common to run your instance in shared hosts (multi-tenant), but it can be done on a dedicated host (single tenant); this will make your workloads compliant with FIPS-140 and PCI-DSS standards.

The virtual private cloud (VPC) is an example of an inherited control, since AWS runs the network infrastructure; nevertheless, segmentation and subnet configuration is an example of a hybrid control, because the client is responsible for the full implementation by performing a correct configuration and resource distribution.

IAM operations are customer-related, and this represents a specific customer control. IAM roles and all the account access must be managed properly by the client.

Making use of dedicated resources is an example of shared controls. AWS will provide the dedicated infrastructure and the client provides all the management from the hypervisor upwards (operating system, applications).

The highlighted components represent the ones relevant for this example. Add a persistent EBS volume to our EC2 instance:

Security at rest for EBS with KMS cryptographic keys

AWS/customer

EBS volumes can be encrypted on demand by using cryptographic keys provided by the Key Management Service (KMS); this way, all data at rest will be kept confidential.

The EBS encryption attribute is an example of a shared control, because AWS provides these facilities as part of the EBS and KMS services, but the client must enable these configuration properties, because disks are not encrypted by default. The customer also has the ability to use specific controls such as Linux Unified Key Setup (LUKS) to encrypt EBS volumes with third-party tools:

The highlighted components represent the ones relevant for this example.
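
A CLI sketch of the same encrypted-volume setup, assuming the default aws/ebs key and placeholder Availability Zone and instance ID:

# Create an encrypted volume using the AWS-managed EBS key, wait for it, and attach it to the instance
VOL_ID=$(aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 8 \
  --encrypted \
  --kms-key-id alias/aws/ebs \
  --query VolumeId --output text)

aws ec2 wait volume-available --volume-ids "$VOL_ID"
aws ec2 attach-volume --volume-id "$VOL_ID" --instance-id i-0123456789abcdef0 --device /dev/sdf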

Create a security group to filter the network traffic:

Detail:

AWS/customer

Security groups act as firewalls at the instance level, denying all inbound traffic and opening access only to customer-specified IPs, networks, ports, and protocols. It is a best practice to compartmentalize access by chaining multiple security groups, restricting access at every layer. In this example, we create only one security group for the web server, which allows HTTP traffic from any IP address (0.0.0.0/0) and restricts SSH access to a single management machine (in this case, my IP).

This is a hybrid control because the network traffic filtering capability is provided by AWS, but the full implementation is done by the customer through the service API:

The highlighted components represent the ones relevant for this example.
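
The equivalent security group can also be created from the CLI; a sketch, with a placeholder VPC ID and 203.0.113.10/32 standing in for the management machine's address:

# Create the web server security group, open HTTP to the world and SSH to a single address
SG_ID=$(aws ec2 create-security-group \
  --group-name web-sg \
  --description "Web server access" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr 203.0.113.10/32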

Create a key pair to access the EC2 instance:

Detail:

AWS/Customer

Every compute instance in EC2, whether Linux or Windows, is associated with a key pair: one public key and one private key. The public key is used to encrypt the login information of a specific instance. The private key is guarded by the customer so they can prove their identity through SSH for Linux instances. Windows instances use the private key to decrypt the administrator's password.

This is a shared control because the customer and AWS share responsibility for guarding these keys, preventing access by third parties that do not have the private key in their possession:

The last step has a dual responsibility:

  • The customer must protect the platform on which the application will be running, their applications, and everything related to identity and access management from the application or middleware perspective.
  • AWS is responsible for the storage and protection of the public key and the instance configuration.
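
A small sketch of the key pair workflow from the CLI (the key name is illustrative and the instance address is a placeholder):

# Generate the key pair; AWS keeps the public key, the private key is saved locally
aws ec2 create-key-pair --key-name web-server-key \
  --query KeyMaterial --output text > web-server-key.pem
chmod 400 web-server-key.pem

# Later, prove your identity to a Linux instance launched with this key pair
ssh -i web-server-key.pem ec2-user@<instance-public-dns>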

Identity and Access Management

Let's discuss the core services for managing security at the AWS account scope: Identity and Access Management (IAM) and CloudTrail. IAM is the service responsible for all user administration, and for users' credentials, access, and permissions with respect to the AWS service APIs. CloudTrail will give us visibility into how this access is used, since CloudTrail records all the account activity at the API level.

  1. To enable CloudTrail, you must access the AWS console, find CloudTrail in the services pane and then click on Create trail:
  2. The configuration is flexible enough to record events in one region only, or cross-region in the same account, and you can even record CloudTrail events with multiple accounts; it is recommended to choose All for Read/Write events:

It is important to enable this service initially in newly created accounts and always keep it active because it also helps in troubleshooting when configuration problems occur, when there are production service outages, or to attribute actions to IAM users.

CloudTrail is considered a best practice associated with the operation of any AWS account.
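
The same trail can be created from the CLI; a sketch that assumes an S3 bucket already carrying the bucket policy CloudTrail requires (the names are placeholders):

# Create a multi-region trail, start logging, and list the latest recorded events
aws cloudtrail create-trail \
  --name account-trail \
  --s3-bucket-name my-cloudtrail-logs-bucket \
  --is-multi-region-trail

aws cloudtrail start-logging --name account-trail
aws cloudtrail lookup-events --max-results 5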

User creation

We will create an IAM user; this user can be used by a person for everyday operations, but it can also be used by an application invoking service APIs.

  1. Let's navigate to the IAM service in the console, choose the left menu Users, and then Add user:
  2. The user we will create can only access the web console, and for this, we will create credentials that consist of a username and a password:
  3. In this case, check AWS Management Console access (1) as shown in the next screenshot. You have the possibility to assign a custom password for the user or to generate one randomly. It is a good practice to enable password resetting on the next sign-in. This will automatically assign the policy IAMUserChangePassword (2), which will allow the user to change their own password as shown here:
  4. Choose Next: Permissions; on this screen, we will leave everything as is. Select Create User to demonstrate the default access level that every new user has. The last screen shows us the login data, such as the URL. This URL is different from the root access URL:

  5. Copy the URL, username, and password shown in the previous screenshot. The access URL can be customized; by default, it contains the account number, but it is up to the administrator to generate an alias by clicking Customize:
  6. Close your current session and use the new access URL. You will see a screen like the following:
  7. Use your new IAM username and password; once logged in, search for the EC2 services. You will get the following behavior:
  8. Let's validate the current access scope by trying to list the account buckets in S3:

This simple test shows something fundamental about AWS security. Every IAM user, once created, has no permissions; only when permissions are assigned explicitly will the IAM user be able to use APIs, including the AWS console. Let's close this session and log in again; this time, with the root account, to create a secure access structure.
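
For reference, the user creation above can also be done from the CLI; a rough sketch with an illustrative username and temporary password:

# Create the IAM user and give it console access with a temporary password
aws iam create-user --user-name jane
aws iam create-login-profile --user-name jane \
  --password 'Temp-Passw0rd!' --password-reset-required

# Until a policy is attached, this user has no permissions at all
aws iam list-attached-user-policies --user-name jane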

Designing an access structure

We will work on the access structure using the following model:

This diagram shows different use cases for IAM Users, IAM Groups, IAM Roles, and the IAM Policies for everyone.

Create an administration group

We will create IAM groups solely for administration purposes, and a single user that has wide access to this account.

  1. Navigate to IAM | Groups and then choose the action Create New Group:
  2. Use the name Administration, and select Next Step. On this screen, we will add an IAM policy. IAM policies are JSON-formatted documents that specify granularly the permissions that an entity (user, group, or role) has.
  3. In the search bar of this step, write the word administrator to filter some of the options available for administration policies; the policy we need is called AdministratorAccess. This policy is an AWS-managed policy and is available to every AWS customer for use when a full administration policy is required for a principal.
  4. Select Next Step. This will lead us to the review screen, where we can visually confirm that the administrator policy has been added. This policy covers all AWS resources and has a unique Amazon Resource Name, as follows:
arn:aws:iam::aws:policy/AdministratorAccess
  5. Add the group; in the group detail, we can observe that it doesn't have any users in it. Let's add our administrator user. This user must be used instead of the root account.
  6. Choose the left menu IAM users and create a new user with the name administrator; in this case, we will only choose AWS Management Console access because, at this time, this user won't be using access keys, only the username and the password, to log into the console.
  7. In step 2 of the Add User screen, choose the Administration group; this way, the user will inherit the AdministratorAccess IAM policy from the group Administration:
  8. Select Next: Review and Create User. It is now possible to download the comma-separated values (CSV) file with the access data for this user. Again, copy the access URL for this user and close the root session; from now on, we will be using only the administrator user we created. The login screen must be similar to this:
  9. Once the login data is entered, we will be required to change the temporary password, guaranteeing confidentiality. Now that we are logged in as an IAM user, we can confirm this because the top bar has changed to administrator@account as shown:
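
If you prefer the command line, a roughly equivalent sketch of this administration setup, using the same group and user names as above:

# Create the administration group and attach the AWS-managed AdministratorAccess policy
aws iam create-group --group-name Administration
aws iam attach-group-policy --group-name Administration \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Create the administrator user, give it console access, and add it to the group
aws iam create-user --user-name administrator
aws iam create-login-profile --user-name administrator \
  --password 'Temp-Passw0rd!' --password-reset-required
aws iam add-user-to-group --group-name Administration --user-name administrator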

Business case

Ada is a network administrator; part of her activities consist of provisioning virtual networks (VPCs), subnets, configuring route tables, managing hosted zones for domains in Route 53, and managing logs of the network traffic.

Ada supports the database and development areas in the provisioning of infrastructure and network resources, but she should not be able to create compute resources such as EC2 instances and EBS volumes.

Alan is a DBA, and most of his tasks are performance monitoring, checking database logs, provisioning databases, and performing full database backups on S3. This user must be able to read network parameters for each database instance and troubleshoot issues in production databases.

Alan will work with the AWS console because most of the monitoring operations are performed from a web environment, while other DBAs connect using just a vendor-specific driver. We will create a database administrator group because, this way, the administration is centralized. By doing this, we can control and maintain all the security aspects in one spot; this is a better approach than having to manage tens or hundreds of users independently.

Let's create four groups and three users with the following details:

Function           | IAM group name        | IAM policy            | IAM user
IAM User           | DatabaseAdministrator | DatabaseAdministrator | Alan
IAM User           | NetworkAdministrator  | NetworkAdministrator  | Ada
IAM Service Role   | Development           | -                     | Dennis
IAM Cross Account  | Auditors              | SecurityAudit         | -

Alan must support Ada's activities temporarily while she is out of the office on vacation, so we need to add Alan to the network administrator group. The set of permissions Alan will have is the union of the permissions of both groups.

IAM allows a user to be a member of one or more groups; groups all exist at the same level, and it's impossible to nest a group inside another group.

Dennis is a member of the development team, and he must be able to create EC2 instances. He must also be able to create EC2 service roles that permit cross-service access. These IAM roles make it possible to delegate access to services and avoid the use of explicit access keys when applications interact via the CLI or programmatically authenticate API calls. The application Dennis will deploy is a .NET application that uses the AWS SDK and accesses an S3 bucket to store and retrieve data. To achieve this, the IAM role S3AccessToEC2Role must be created via IAM. This instance role and profile will internally require temporary credentials via the Security Token Service (STS):

The practice of not using ACCESS_KEY and SECRET_ACCESS_KEY credentials will prevent other parties from using them if they are found in the filesystem or in the application configuration files.

Our user Dennis must be able to create instances and IAM roles for applications. Dennis will make use of an IAM policy inherited via the development group and will be given the permissions needed to create instance roles and pass IAM roles to services. Let's create this policy, so it can be attached to the development group.

  1. Navigate to IAM | Policies | Create policy, choose the JSON tab, and paste the document available in this file: https://github.com/gabanox/Certified-Solution-Architect-Associate-Guide/blob/master/chapter00/RoleCreatorPolicy.json:
  2. We are required to specify a Name and Description for the new IAM policy:
  3. Once we have created our policy, let's associate it with the development group. On the IAM Policies listing, we have the option to filter by name (1); in this case, this policy was created globally in this account and can be reused. Choose Customer managed or use the search box to find it:
  4. Once we choose the policy, select Policy Actions and use the function attach (2), as shown in the preceding screenshot; this will redirect us to the detail, where we can use attached entities and associate them with the development group as shown:
  5. Again, let's use the policy filter, choosing AWS managed until we find AmazonEC2FullAccess. Repeat the process to attach this policy to the development group. This way, Dennis will be able to create EC2 instances and associate the appropriate instance role:
  6. Close the current session and log in with Dennis' credentials. Let's navigate to the IAM section and proceed to create a new role. The service that will use this role is EC2:
  7. Under permissions, search for s3 in the search box and add the policy AmazonS3FullAccess:
  8. Name the role EC2ToS3InstanceRole and click Create:
  9. Spin up a new EC2 instance and, in step 3 of the EC2 wizard, under Configure Instance, make sure to select EC2ToS3InstanceRole:

At this point, if you have any problems creating this user structure, you can run a script like the following from the command line.
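
The book's repository contains the original script; the sketch below is only an approximation built from the names used in this chapter, and it assumes the AWS-managed job-function policies for the two administrator groups:

#!/bin/bash
# Database administrators
aws iam create-group --group-name DatabaseAdministrator
aws iam attach-group-policy --group-name DatabaseAdministrator \
  --policy-arn arn:aws:iam::aws:policy/job-function/DatabaseAdministrator
aws iam create-user --user-name Alan
aws iam add-user-to-group --group-name DatabaseAdministrator --user-name Alan

# Network administrators (Alan is added temporarily to cover for Ada)
aws iam create-group --group-name NetworkAdministrator
aws iam attach-group-policy --group-name NetworkAdministrator \
  --policy-arn arn:aws:iam::aws:policy/job-function/NetworkAdministrator
aws iam create-user --user-name Ada
aws iam add-user-to-group --group-name NetworkAdministrator --user-name Ada
aws iam add-user-to-group --group-name NetworkAdministrator --user-name Alan

# Development group: EC2 access plus the custom RoleCreatorPolicy from the repository
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
aws iam create-policy --policy-name RoleCreatorPolicy \
  --policy-document file://RoleCreatorPolicy.json
aws iam create-group --group-name Development
aws iam attach-group-policy --group-name Development \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-group-policy --group-name Development \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/RoleCreatorPolicy"
aws iam create-user --user-name Dennis
aws iam add-user-to-group --group-name Development --user-name Dennis

# Auditors group for the cross-account scenario
aws iam create-group --group-name Auditors
aws iam attach-group-policy --group-name Auditors \
  --policy-arn arn:aws:iam::aws:policy/SecurityAudit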

Inline policies

As the administrator, navigate to the S3 service in the console, and create a bucket with a unique name. After that, upload a text file with any kind of content using the upload button.

Write down the bucket name, as you will use it later.

  1. We will create an IAM user with CLI-only access and a policy that only allows reading objects. Use the name s3-user as shown in the next screenshot, and in the fourth step of the Add user wizard, download the access credentials CSV file:
  2. Make sure Programmatic access is enabled; we will create this user without any kind of permissions at first. Once created, the user will be assigned an inline policy; these policies cannot be reused with other users because they are defined only once and attached directly to a user or group.
  3. Under the Users section in IAM, select the s3-user and Add inline policy as shown here:
  4. Use the visual editor, in this case by choosing S3 as the service (1). Under Actions, select only GetObject (2), and for the resources section, select Specific and Add ARN (3), writing down the bucket name and the option Any:
  5. To configure the access credentials via the CLI, open a terminal and paste the following commands, replacing the values for aws_access_key_id and aws_secret_access_key with your own (the s3-user CSV file values):
aws configure set profile.s3-user.aws_access_key_id "AKIAXXXXXXXXXXXXXXXXX"
aws configure set profile.s3-user.aws_secret_access_key "ZuEVD4DDyK1TsmNp/Pa6toR/Qf3FfUN0t/XXXXXX"
  6. The following command will allow the download operation (a read) to the local filesystem:
aws s3 cp s3://example-bucket-20180602/test-object.txt . --profile s3-user

The structure of this command is as follows:

aws s3 cp <LocalPath> <S3Uri> or <S3Uri> <LocalPath> or <S3Uri> <S3Uri> ...

This way, we are using the copy subcommand and validating that the S3 user has read-only access to objects.
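
The same inline policy can be attached from the CLI instead of the visual editor; a sketch that assumes the example bucket name used above:

# Attach an inline policy to s3-user allowing only s3:GetObject on objects in the bucket
aws iam put-user-policy --user-name s3-user --policy-name S3GetObjectOnly \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket-20180602/*"
    }]
  }'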

IAM cross-account roles

An external auditors group requires read-only access to the account so they can inspect API activity. These auditors are a company that has its own AWS account, but to follow the least-privilege principle, we will use a role with an AWS-managed policy called SecurityAudit, plus AWSCloudTrailReadOnlyAccess. This way, there is no need to create an additional user in the audited account, and the account can define the level of permissions necessary for external auditors to perform their task.

This task requires an additional AWS account, so this is only demonstrative. Also, take into account that only IAM users are allowed to perform the AssumeRole action.

  1. As an administrator, navigate to IAM and create a role. For the type of trusted entity, choose Another AWS account. For this demo, I will use a hypothetical account number. The IAM role used in this scenario is called delegation:

We have two accounts: the Production Account (111111111111), which will have the cross-account role and the access policy for this external audit user, and the Sandbox Account (222222222222), the one that will assume the external role to access AWS APIs; in this case, CloudTrail.

  2. The next step is to create the cross-account role in the production account. The type of trusted entity is Another AWS account, and we will specify the trusted account number (sandbox):

External auditors will also have read-only access to CloudTrail:

  3. Select the role created to review the details; by default, it is configured to allow broad access using the root principal as the trusted entity.
  4. Edit this trust relationship as shown next, and specify our IAM user, in this case administrator (it could be a member of the audit group):

Now, in the sandbox account, we need to configure the audit role this account will assume. This can be done from the current user's upper menu as follows:

In the next screen, you will be prompted to specify the requested account (production), the name of the external role to assume, and a friendly name:

  5. Now, we can see our role's friendly name listed under Role History; once we assume this identity, our web console will display the production account data and the top menu will change to display the currently assumed role:
  6. To end this lesson, under CloudTrail, it is possible to see all the API events since logging was activated for the account:
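
The same delegation can be exercised from the CLI; a hedged sketch that reuses the hypothetical account number and role name above, where sandbox-admin is an assumed CLI profile for an IAM user in the sandbox account:

# Request temporary credentials for the cross-account role in the production account
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/delegation \
  --role-session-name external-audit \
  --profile sandbox-admin

# The returned AccessKeyId, SecretAccessKey, and SessionToken can then be exported and used
# to call, for example, aws cloudtrail lookup-events against the production account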

As a complementary activity, it is recommended to enable Multi-Factor Authentication (MFA); you can provision this 2FA by downloading Google Authenticator from your mobile app store.

For detailed information check the documents here: https://aws.amazon.com/iam/details/mfa/.

Summary

This chapter was the formal introduction to Amazon Web Services; you have learned about cloud design principles, patterns, frameworks, and best practices for AWS architecture. You have also put security into practice by designing an access structure for a business case with several kinds of users, managing them via groups, and enabling cross-account roles.

Further reading


Key benefits

  • Build highly reliable and scalable workloads on the AWS platform
  • Pass the exam in less time and with confidence
  • Get up and running with building and managing applications on the AWS platform

Description

Amazon Web Services (AWS) is currently the leader in the public cloud market. With an increasing global interest in leveraging cloud infrastructure, the AWS Cloud from Amazon offers a cutting-edge platform for architecting, building, and deploying web-scale cloud applications. As the rate of cloud platform adoption increases, so does the need for cloud certification. The AWS Certified Solutions Architect – Associate Guide is your one-stop solution to gaining certification. Once you have grasped what AWS is and what its prerequisites are, you will get insights into different types of AWS services such as Amazon S3, EC2, VPC, SNS, and more to get you prepared with core Amazon services. You will then move on to understanding how to design and deploy highly scalable applications. Finally, you will study security concepts along with the AWS best practices and mock papers to test your knowledge. By the end of this book, you will not only be fully prepared to pass the AWS Certified Solutions Architect – Associate exam but also be capable of building secure and reliable applications.

Who is this book for?

The AWS Certified Solutions Architect – Associate Guide is for you if you are an IT professional or Solutions Architect wanting to pass the AWS Certified Solutions Architect – Associate 2018 exam. This book is also for developers looking to start building scalable applications on AWS.

What you will learn

  • Explore AWS terminology and identity and access management
  • Acquaint yourself with important cloud services and features in categories such as compute, network, storage, and databases
  • Define access control to secure AWS resources and set up efficient monitoring
  • Back up your database and ensure high availability by understanding all of the database-related services in the AWS Cloud
  • Integrate AWS with your applications to meet and exceed non-functional requirements
  • Build and deploy cost-effective and highly available applications

Product Details

Publication date: Oct 31, 2018
Length: 626 pages
Edition: 1st
Language: English
ISBN-13: 9781789135800


Table of Contents

25 Chapters

  1. Introducing Amazon Web Services
  2. AWS Global Infrastructure Overview
  3. Elasticity and Scalability Concepts
  4. Hybrid Cloud Architectures
  5. Resilient Patterns
  6. Event Driven and Stateless Architectures
  7. Integrating Application Services
  8. Disaster Recovery Strategies
  9. Storage Options
  10. Matching Supply and Demand
  11. Introducing Amazon Elastic MapReduce
  12. Web Scale Applications
  13. Understanding Access Control
  14. Encryption and Key Management
  15. An Overview of Security and Compliance Services
  16. AWS Security Best Practices
  17. Web Application Security
  18. Cost Effective Resources
  19. Working with Infrastructure as Code
  20. Automation with AWS
  21. Introduction to the DevOps practice in AWS
  22. Mock Test 1
  23. Mock Test 2
  24. Assessment
  25. Another Book You May Enjoy

