
Tech Guides - Cloud Computing

27 Articles

Is serverless architecture a good choice for app development?

Mehul Rajput
11 Oct 2019
6 min read
App development has evolved rapidly in recent years. With new demands and expectations from businesses and users, trends like cloud have helped developers to be more productive and to build faster, more reliable and secure applications. But there's no end to evolution, and serverless is arguably the next step for application development. But is a serverless architecture the right choice?

What is a Serverless Architecture?

When you hear the word serverless, you might assume that it means no servers. In fact, it refers to eliminating the need to manage servers yourself; that responsibility shifts to your cloud provider. Put simply, the constituent parts of an application are divided between multiple servers, with no need for the application owner or manager to create or manage the infrastructure that supports them.

Instead of running off a server, a serverless application runs off functions. These are essentially actions that are fired off to ensure things happen within the application. This is where the phrase 'function-as-a-service', or FaaS (another way of describing serverless), comes from. A recent report claims that the FaaS market is projected to grow at a compound annual growth rate of 32.7%, reaching 7.72 billion US dollars by 2021.

Is Serverless Architecture a Good Choice for App Development?

Now that we've established what serverless actually means, we must get down to business: is serverless architecture the right choice for app development? It can work either way; there are positives as well as negatives. Here are some reasons.

Using serverless for app development: the positives

There are many reasons why serverless architecture can be a good fit for app development. The main ones, discussed below, are decreasing costs, easier servicing, scalability, and third-party services.

Decreasing costs

The most immediate benefit of a serverless architecture in an app development process is that it reduces the cost of the work. It's typically less expensive than a 'traditional' server architecture, because with hardware servers you have to pay for many different things that might not be required. With serverless, you won't have to pay for regular maintenance, the premises, the electricity, and maintenance staff. You can save a considerable amount of money and put it towards app quality instead.

Easier for service

When the owner or app manager does not have to manage the server themselves, and a machine can do the job, keeping the service available becomes far less challenging. First, the job is more comfortable because it does not require supervision. Second, you will not have to spend time on it; you can use that time for productive work such as product development. Third, the service provided by this technology is reliable, so you can use it without much fear.

Scalability

Another interestingly useful advantage of serverless architecture in app development is scalability. So, what is scalability? It is the capability of a system to handle an extra amount of work by adding resources, so that an app or product continues to work appropriately, without disturbance, as it grows in size or volume to meet user needs. Serverless architecture acts as the resource that is added to the system to handle any work that has piled up.

Third-party services

Another useful feature of serverless architecture is that it makes it easy to consume third-party services. Your app can use any third-party service it requires beyond what you already have, which reduces the effort needed to build the app's backend architecture. Additionally, a third party might provide better services than you could build yourself. In this way, serverless proves its worth by opening up the reach of third parties.
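Before turning to the drawbacks, here is a minimal sketch of what a single serverless function can look like, written against the AWS Lambda Python handler convention. The event fields and the greeting logic are hypothetical illustrations, not something prescribed by this article.

```
import json


def handler(event, context):
    """A single, stateless function: the platform provisions, runs and scales it.

    'event' carries the request payload (here, an assumed API-Gateway-style body);
    'context' is supplied by the platform and includes metadata such as the
    remaining execution time.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # No servers to manage: we only return a response and let the platform
    # handle routing, scaling and teardown.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because FaaS billing is per invocation and per unit of execution time, an idle function costs nothing, which is where much of the cost saving described above comes from.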
Serverless for app development: the negatives

Now that we know the advantages of a serverless architecture, it's important to note that it can also bring some limitations and disadvantages. These are time restrictions, vendor lock-in, multi-tenancy, and difficult debugging.

Time restrictions

As mentioned before, serverless architecture works on FaaS rules and imposes a time limit on running a function, commonly 300 seconds. When this limit is reached, the function is stopped. Therefore, for more complex functions that require more time to execute, the FaaS approach may not be a good choice. The problem can often be worked around by splitting a task into several simpler functions, if the task allows it; otherwise, time restrictions like these can cause great difficulty.

Vendor lock-in

We have discussed that by using serverless architecture, we can utilize third-party services. This can also go the wrong way and cause vendor lock-in. If, for any reason, you decide to shift to a new service provider, in most cases services will be fulfilled in a different way. That means the productivity gains you expected from serverless will be lost, as you will have to adjust and reconfigure the infrastructure to accept the new service.

Multi-tenancy

Multi-tenancy is an increasing problem in serverless architecture. The data of many tenants is kept quite close together, which can create confusion: some data might be exchanged, distributed, or even lost. In turn, this can cause security and reliability issues. A customer could, for example, suddenly produce an extraordinarily high load that affects other customers' applications.

Difficult debugging

Traditional debugging isn't possible with serverless. Because the code runs on infrastructure you do not control, there is no debugging facility where the uploaded code can be stepped through. If you want to know what a function does, you have to run it and wait for the result; if it crashes, there is little you can do interactively. There is a way to mitigate this problem, however: extensive logging. With every step being logged, it becomes much easier to trace the errors that would otherwise cause debugging headaches.
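As a rough illustration of the two mitigations just mentioned, splitting a long task into smaller functions and logging every step, the hedged sketch below processes work in fixed-size chunks and asynchronously re-invokes itself for the remainder. The chunk size, payload shape and the self-invocation pattern are assumptions made for the example, not part of the article.

```
import json
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

lambda_client = boto3.client("lambda")

CHUNK_SIZE = 100  # keep each invocation well inside the platform's time limit


def handler(event, context):
    items = event.get("items", [])
    batch, remainder = items[:CHUNK_SIZE], items[CHUNK_SIZE:]

    # Log every step so failures can be traced without an interactive debugger.
    logger.info("Processing %d items, %d left for later", len(batch), len(remainder))
    for item in batch:
        process(item)

    if remainder:
        # Hand the rest to a fresh invocation instead of risking a timeout here.
        lambda_client.invoke(
            FunctionName=context.function_name,
            InvocationType="Event",  # asynchronous, fire-and-forget
            Payload=json.dumps({"items": remainder}),
        )
    return {"processed": len(batch), "remaining": len(remainder)}


def process(item):
    logger.info("Handled item %s", item)
```

Each invocation stays well inside the time limit, and the log lines leave a trail to follow when something fails.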
Conclusion

Serverless architecture certainly seems impressive in spite of having some limitations. There is no doubt that the viability and success of an architecture depends on the business requirements and, of course, on the technology used. In the same way, serverless can shine if used in the appropriate cases. I hope this article has helped you understand serverless architecture for mobile apps, and that you can now see both its bright and dark sides.

Author Bio

Mehul Rajput is the CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions to businesses from startup to enterprise level. He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups and business.

What is serverless architecture and why should I be interested?
Introducing numpywren, a system for linear algebra built on a serverless architecture
Serverless Computing 101
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 2


Hot Chips 31: IBM Power10, AMD’s AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more

Fatema Patrawala
23 Aug 2019
7 min read
Hot Chips 31, the premier event at which the biggest semiconductor vendors highlight their latest architectural developments, is held in August every year. This year the event took place at the Memorial Auditorium on the Stanford University campus in California, from August 18-20, 2019. Since its inception it has been co-sponsored by IEEE and ACM SIGARCH.

Hot Chips is prized for the level of depth it provides on the latest technology and the upcoming releases in the IoT, firmware and hardware space. This year the list of presentations was almost overwhelming, with a wide range of technical disclosures on the latest chip logic innovations. Almost all the major chip vendors and IP licensees involved in semiconductor logic design took part: Intel, AMD, NVIDIA, Arm, Xilinx and IBM were on the list, and companies like Google, Microsoft, Facebook and Amazon also participated. There were notable absences, too, from the likes of Apple, which despite being on the committee last presented at the conference in 1994.

Day 1 kicked off with tutorials and sponsor demos. On the cloud side, Amazon AWS covered the evolution of hypervisors and the AWS infrastructure. Microsoft described its acceleration strategy with FPGAs and ASICs, with details on Project Brainwave and Project Zipline. Google covered the architecture of Google Cloud with the TPU v3 chip. A three-part RISC-V tutorial rounded off the afternoon, so the day was well spent, with insights into the latest cloud infrastructure and processor architectures.

The detailed talks were presented on Day 2 and Day 3; below are some of the important highlights of the event.

IBM's POWER10 Processor expected by 2021

IBM creates families of processors to address different segments, with different models for tasks like scale-up, scale-out, and now NVLink deployments. The company is adding new custom models that use new acceleration and memory devices, and that was the focus of this year's talk at Hot Chips. IBM announced that POWER10 is expected to arrive with these new enhancements in 2021, and also shared details of POWER10's core counts and process technology. IBM spoke about focusing on diverse memory and accelerator solutions to differentiate its product stack with heterogeneous systems. The company aims to reduce the number of PHYs on its chips, so it now has PCIe Gen 4 PHYs while the rest of the SERDES run the company's own interfaces. This creates a flexible interface that can support many types of accelerators and protocols, such as GPUs, ASICs, CAPI, NVLink and OpenCAPI.

AMD wants to become a significant player in Artificial Intelligence

AMD does not have an artificial intelligence-focused chip. However, AMD CEO Lisa Su stated in a keynote address at Hot Chips 31 that the company is working toward becoming a more significant player in artificial intelligence. She said the company has adopted a CPU/GPU/interconnect strategy to tap the artificial intelligence and HPC opportunity, and that AMD would use all of this technology in the Frontier supercomputer. The company plans to fully optimize its EPYC CPU and Radeon Instinct GPU for supercomputing. It would further enhance the system's performance with its Infinity Fabric and unlock performance with its ROCm (Radeon Open Compute) software tools. Unlike Intel and NVIDIA, AMD does not have a dedicated artificial intelligence chip or application-specific accelerators.
Despite this, Su noted, "We'll absolutely see AMD be a large player in AI." AMD is still considering whether to build a dedicated AI chip, a decision that will depend on how artificial intelligence evolves. Su explained that companies have been improving CPU performance by leveraging various elements: process technology, die size, TDP (thermal design power), power management, microarchitecture, and compilers. Process technology is the biggest contributor, boosting performance by around 40%. Increasing die size also boosts performance in the double digits, but it is not cost-effective. AMD used microarchitecture to boost EPYC Rome server CPU IPC (instructions per cycle) by 15% in single-threaded and 23% in multi-threaded workloads, well above the industry-average IPC improvement of around 5%-8%.

Intel's Nervana NNP-T and Lakefield 3D Foveros hybrid processors

Intel revealed fine-grained details about its much-anticipated Spring Crest deep learning accelerators at Hot Chips 31. The Nervana Neural Network Processor for Training (NNP-T) comes with 24 processing cores and a new take on data movement powered by 32GB of HBM2 memory. Its 27 billion transistors are spread across a spacious 688 mm2 die. The NNP-T also incorporates leading-edge technology from Intel rival TSMC.

In another presentation, Intel talked about its Lakefield 3D Foveros hybrid processors, the first to come to market with Intel's new 3D chip-stacking technology. The current design consists of two dies. The lower die houses all of the typical southbridge features, like I/O connections, and is fabbed on the 22FFL process. The upper die is a 10nm CPU that features one large compute core and four smaller Atom-based 'efficiency' cores, similar to an Arm big.LITTLE processor. Intel calls this a "hybrid x86 architecture," and it could denote a fundamental shift in the company's strategy. Finally, the company stacks DRAM atop the 3D processor in a PoP (package-on-package) implementation.

Cerebras' largest chip ever with 1.2 trillion transistors

California artificial intelligence startup Cerebras Systems introduced its Cerebras Wafer Scale Engine (WSE), the largest chip ever built for neural network processing. Sean Lie, co-founder and Chief Hardware Architect at Cerebras, presented the gigantic chip at Hot Chips 31. The 16nm WSE is a 46,225 mm2 silicon chip, slightly larger than a 9.7-inch iPad. It features 1.2 trillion transistors, 400,000 AI-optimized cores, 18 gigabytes of on-chip memory, 9 petabytes/s of memory bandwidth, and 100 petabytes/s of fabric bandwidth. It is 56.7 times larger than the largest NVIDIA graphics processing unit, which accommodates 21.1 billion transistors on an 815 mm2 silicon base.

NVIDIA's multi-chip solution for a deep neural network accelerator

NVIDIA, which announced that it was designing a test multi-chip solution for DNN computations at a VLSI conference last year, explained the chip technology at Hot Chips 31 this year. It is currently a test chip for multi-chip DL inference. It is designed for CNNs and has a RISC-V chip controller. Each package holds a 6x6 grid of 36 small chips; each chip has 12 PEs, with 8 vector MACs per PE.

A few other notable talks at Hot Chips 31

Microsoft unveiled the custom silicon behind HoloLens 2.0, which includes a holographic processing unit.
The application processor runs the app, and the HPU modifies the rendered image and sends it to the display.

Facebook presented details on Zion, its next-generation in-memory unified training platform. Zion, which is designed for Facebook's sparse workloads, has a unified BFLOAT16 format across CPUs and accelerators.

Huawei spoke about its Da Vinci architecture: a single Ascend 310 can deliver 16 TeraOPS of 8-bit integer performance, support real-time analytics across 16 channels of HD video, and consume less than 8W of power.

Xilinx Versal AI engine

Xilinx, the manufacturer of FPGAs, announced its new Versal AI engine last year as a way of moving FPGAs into the AI domain. This year at Hot Chips the company expanded on the technology and more. Ayar Labs, an optical chip-making startup, showcased results of its work with DARPA (the U.S. Department of Defense's Defense Advanced Research Projects Agency) and Intel on an FPGA chiplet integration platform. The final talk on Day 3 was a presentation by Habana, which discussed an innovative approach to scaling AI training systems with its GAUDI AI processor.

AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google
Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications


How do AWS developers manage Web apps?

Guest Contributor
04 Jul 2019
6 min read
When it comes to hosting and building a website on the cloud, Amazon Web Services (AWS) is one of the most preferred choices for developers. According to Canalys, AWS dominates the global public cloud market, holding around one-third of the total market share.

AWS offers numerous services that can be used for compute power, content delivery, database storage, and more. Developers can use it to build a high-availability production website, whether it is a WordPress site, Node.js web app, LAMP stack web app, Drupal website, or a Python web app. AWS developers need to set up, maintain and evolve the cloud infrastructure of web apps. Aside from that, they are also responsible for applying best practices related to security and scalability. Having said that, let's take a deep dive into how AWS developers manage a web application.

Deploying a website or web app with Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) offers developers secure and scalable computing capacity in the cloud. To host a website or web app, developers use virtual app servers called instances. With Amazon EC2 instances, developers gain complete control over computing resources. They can scale the capacity on the basis of requirements and pay only for the resources they actually use. There are also tools like AWS Lambda, Elastic Beanstalk and Lightsail that allow the isolation of web apps from common failure cases. Amazon EC2 supports a number of mainstream operating systems, including Amazon Linux, Windows Server 2012, CentOS 6.5, and Debian 7.4.

Here is how developers get started with Amazon EC2 for deploying a website or web app:

1. Set up an AWS account and log into it.
2. Select "Launch Instance" from the Amazon EC2 Dashboard; this starts the creation of a VM.
3. Configure the instance by choosing an Amazon Machine Image (AMI), instance type and security group, then click Launch.
4. Choose 'Create a new key pair' and name it. A key pair file is downloaded automatically and needs to be saved, as it will be used to log in to the instance.
5. Click 'Launch Instances' to finish the set-up process.

Once the instance is ready, it can be used to build high-availability websites or web apps.

Using Amazon S3 for cloud storage

Amazon Simple Storage Service, or Amazon S3, is a secure and highly scalable cloud storage solution that makes web-scale computing seamless for developers. It is used for the objects that are required to build a website, such as HTML pages, images, CSS files, videos and JavaScript. S3 comes with a simple interface so that developers can fetch and store large amounts of data from anywhere on the internet, at any time. The storage infrastructure behind Amazon S3 is known for its scalability, reliability, and speed; Amazon itself uses it to host its own websites.

Within S3, developers create buckets for data storage. Each bucket can store a large amount of data, allowing developers to upload a high number of objects into it; a single object can contain up to 5 TB of data. Objects are stored and fetched from a bucket using a unique key. A bucket serves several purposes: it organizes the S3 namespace, identifies the account responsible for storage and data transfer, and acts as the unit of aggregation for usage reporting.
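For developers who prefer scripts to the console, the hedged sketch below mirrors the two steps just described using boto3, the AWS SDK for Python: launching a small EC2 instance and creating an S3 bucket for static assets. The AMI ID, key pair name, security group ID and bucket name are placeholders, not values from this article.

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Launch a single t2.micro instance, the same step as "Launch Instance" in the console.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",                       # placeholder key pair created beforehand
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Create an S3 bucket and upload a static asset for the site.
bucket = "my-example-site-assets"  # placeholder; bucket names are globally unique
s3.create_bucket(Bucket=bucket)    # outside us-east-1, a CreateBucketConfiguration is also required
s3.upload_file("index.html", bucket, "index.html")
print("Uploaded index.html to", bucket)
```

In a real deployment the instance would typically sit behind a load balancer, which the next section discusses.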
Elastic load balancing

Load balancing is a critical part of a website or web app; it distributes the traffic load across multiple targets. AWS provides Elastic Load Balancing, which allows developers to distribute traffic across a number of targets, such as Amazon EC2 instances, IP addresses, Lambda functions and containers. With Elastic Load Balancing, developers can ensure that their projects run efficiently even when there is heavy traffic.

There are three kinds of load balancers available with AWS Elastic Load Balancing: the Application Load Balancer, the Network Load Balancer and the Classic Load Balancer. The Application Load Balancer is an ideal option for HTTP and HTTPS traffic; it provides advanced request routing for delivering microservices and containers. For balancing Transmission Control Protocol (TCP), Transport Layer Security (TLS) and User Datagram Protocol (UDP) load, developers opt for the Network Load Balancer. The Classic Load Balancer is best suited for typical load distribution across EC2 instances, and works at both the request and connection level.

Debugging and troubleshooting

A web app or website can include numerous features and components, and a few of them might face issues or not work as expected because of coding errors or other bugs. In such cases, AWS developers follow a number of processes and techniques, and consult the resources that help them debug a recipe or troubleshoot issues:

- See the service issue at Common Debugging and Troubleshooting Issues.
- Check the Debugging Recipes for issues related to recipes.
- Check the AWS OpsWorks Stack Forum, where other developers discuss their issues. The AWS team also monitors the forum and helps in finding solutions.
- Get in touch with the AWS OpsWorks Stacks support team to solve the issue.

Traffic monitoring and analysis

Analysing and monitoring traffic and network logs helps in understanding the way websites and web apps perform on the internet. AWS provides several tools for traffic monitoring, including Real-Time Web Analytics with Kinesis Data Analytics, Amazon Kinesis, Amazon Pinpoint, Amazon Athena, and more. For tracking website metrics, developers use Real-Time Web Analytics with Kinesis Data Analytics. This tool provides insights into visitor counts, page views, time spent by visitors, actions taken by visitors, channels driving the traffic and more. Additionally, the tool comes with an optional dashboard that can be used to monitor web servers, showing custom server metrics such as performance, average network packet processing, and errors.

Wrapping up

Managing a web application is a demanding task and requires quality tools and technologies. Amazon Web Services makes things easier for web developers, providing them with all the tools required to handle the app.

Author Bio

Vaibhav Shah is the CEO of Techuz, a mobile app and web development company in India and the USA. He is a technology maven, a visionary who likes to explore innovative technologies, and has empowered 100+ businesses with sophisticated web solutions.


Cloud pricing comparison: AWS vs Azure

Guest Contributor
02 Feb 2019
11 min read
On average, businesses waste about 35% of their cloud spend through inefficient use of cloud resources; this amounts to more than $10 billion in wasted cloud spend across just the top three public cloud providers. The unmatched compute power, data storage options and efficient content delivery systems of the leading public cloud providers can support incredible business growth, but this can also breed some hubris: it's easy to lose control of costs when your cloud provider appears to be keeping things running smoothly. To stop this from happening, it's essential to adopt a new approach to how we manage - and optimize - cloud spend. It's not an easy thing to do, as pricing structures can be complicated. In this post, we'll look at how both AWS and Azure structure their pricing, and how you can best determine what's right for you.

Different types of cloud pricing schemes

Broadly, the pricing model for cloud services ranges from a pure subscription-based model, where services are charged based on a cloud catalog and users are billed per month, per mailbox, or per app license ordered. In this model, subscribers are billed for all the resources to which they are subscribed, irrespective of whether they use them or not.

The other option is pay-as-you-go. This is where subscribers begin with a billing amount set at zero, which then grows with the services and resources they use. Amazon uses the pay-as-you-go model, charging a predetermined price for every hour of virtual machine resources used. Such a model is also used by other leading cloud service providers, including Microsoft Azure and Google Cloud Platform.

Another variant of cloud pricing is an enterprise billing service, based on the number of active users assigned to a particular cloud subscription. Microsoft Azure is a leading cloud provider that offers such cloud subscriptions for its customers.

Most cloud providers offer varying combinations of the above models with attractive discount options built in. These include the following.

What free tier services do AWS and Azure offer?

Both AWS and Azure offer a 'free tier' service for new and initial subscribers, so that potential long-term subscribers can test out the service before committing for the long run. Amazon allows subscribers to try out most of AWS' services free for a year, including RDS, S3, EC2, Elastic Block Store (EBS), Elastic Load Balancing and other AWS services. For example, you can utilize EC2 and EBS on the free tier to host a website for a whole year; EBS pricing will be zero unless your usage exceeds the limit of 30GB of storage. The free tier for EC2 includes 730 hours of a t2.micro instance.

Azure offers similar deals for new users. Azure services like App Service, Virtual Machines, Azure SQL Database, Blob Storage and Azure Kubernetes Service (AKS) are free for an initial period of 12 months. Additionally, Azure provides the 'Functions' compute service (for serverless) with 1 million free requests every month throughout the subscription, which is useful if you want to give serverless a try.

AWS and Azure's pay-as-you-go, on-demand pricing models

Under the pay-as-you-go model, AWS and Azure offer subscribers the option to simply settle their bills at the end of every month without any upfront investment. This is a good option if you want to avoid a long-term, binding contract. Most resources are available on demand and charged on a per-hour basis, with costs calculated from the number of hours the resource was used.
For data storage and data transfer, rates are generally calculated per gigabyte. Subscribers are notified 30 days in advance of any changes in the pay-as-you-go rates, as well as when new services are periodically added to the platform.

Reserve-and-pay-less pricing model

In addition to the on-demand pricing model, Amazon AWS has an alternative scheme called Reserved Instances (RI) that allows the subscriber to reserve capacity for specific products. RI offers discounted hourly rates and capacity reservation for its EC2 and RDS services. A subscriber can reserve a resource and save up to 75% of total billing costs in the long run; these discounted rates are automatically applied to the subscriber's AWS bills. Subscribers can reserve instances for either a 1-year or a 3-year term.

Microsoft Azure offers to help subscribers save up to 72% of their billing costs compared to its pay-as-you-go model when they sign up for one- to three-year terms for Windows and Linux virtual machines (VMs). Microsoft also allows for added flexibility: if your business needs change, you can cancel your Azure RI subscription at any time and return the remaining unused RI to Microsoft for an early-termination fee.

Use-more-and-pay-less pricing model

In addition to the above payment options, AWS offers subscribers one further option. When it comes to data transfer and data storage services, AWS gives discounts based on the subscriber's usage. These volume-based discounts help subscribers realize critical savings as their usage increases, benefiting from economies of scale so that their businesses can grow while costs are kept relatively under control. AWS also gives subscribers the option to sign up for services that suit their growing business. As an example, AWS' storage services offer subscribers opportunities to lower pricing based on how frequently data is accessed and the performance needed in the retrieval process. For EC2, you can get a discount of up to 10% if you reserve more. The image below demonstrates the pricing of an AWS S3 bucket based on usage.

Comparing Cloud Pricing on Azure and AWS

The major cloud service providers, Amazon Web Services, Azure, Google Cloud Platform and IBM, continually decrease the prices of cloud instances, provide new and innovative discount options, add instance types, and drop billing increments. In some cases, especially Microsoft Azure, per-second billing has also been introduced. However, as costs decrease, the complexity increases, and it is paramount for subscribers to understand and navigate this complexity efficiently. We take a crack at it here.

Reserved Instance Pricing

Alongside the Reserved Instance schemes described above, AWS, Azure and GCP all offer publicly available discounts, some reaching up to 75%, in exchange for signing up to use the services of the particular cloud provider for a one- to three-year period. We've briefly covered this in the section above. Before signing up, however, subscribers need to understand how much usage they are committing to and how much usage to leave as an 'on-demand' option. To do this, subscribers need to consider many different factors:

- Historical usage, by region, instance type, and so on
- Steady-state vs. part-time usage
- An estimate of usage growth or decline
- The probability of switching cloud service providers
- The possibility of choosing alternative computing models like serverless, containers, etc.
On-Demand Instance Pricing

On-demand instances work best for applications that have short-term, irregular workloads that are nevertheless too critical to be interrupted. For instance, if you're running periodic cron jobs that last for a few hours, you can move them to on-demand instances. They are most useful during the testing or development phase of applications. On-demand instances are available in many varying levels of computing power, designed for different tasks executed within the cloud environment, and they come with no binding contractual commitments, so they can be used as and when required. Generally, on-demand instances are among the most expensive purchasing options. Each on-demand instance is billed per instance-hour from the time it is launched until it is stopped or terminated; partial instance-hours are rounded up to the full hour during billing.

The chart below shows the on-demand price per hour for AWS and Azure cloud services and the hourly price for each GB of RAM.

| VM Type | AWS OD Hourly | Azure OD Hourly | AWS OD / GB RAM | Azure OD / GB RAM |
| Standard 2 vCPU w Local SSD | $0.133 | $0.100 | $0.018 | $0.013 |
| Standard 2 vCPU no local disk | $0.100 | $0.100 | $0.013 | $0.013 |
| Highmem 2 vCPU w Local SSD | $0.166 | $0.133 | $0.011 | $0.008 |
| Highmem 2 vCPU no local disk | $0.133 | $0.133 | $0.009 | $0.008 |
| Highcpu 2 vCPU w Local SSD | $0.105 | $0.085 | $0.028 | $0.021 |
| Highcpu 2 vCPU no local disk | $0.085 | $0.085 | $0.021 | $0.021 |

The on-demand price of Azure instances is cheaper than AWS for certain VM types; the difference is most evident for instances with local SSDs.

Discounted Cloud Instance Pricing

When it comes to discounted cloud pricing, it is important to remember that it comes with a lock-in period of one to three years. It therefore works best for organizations that are stable, have a good idea of their historical cloud usage, and can fairly accurately predict what cloud services they will require over the next 12-month period. In the table below, we compare the annual costs of both AWS and Azure.

| VM Type | AWS 1 Y RI Annual | Azure 1 Y RI Annual | AWS 1 Y RI Annual / GB RAM | Azure 1 Y RI Annual / GB RAM |
| Standard 2 vCPU w Local SSD | $867 | $508 | $116 | $64 |
| Standard 2 vCPU no local disk | $622 | $508 | $78 | $64 |
| Highmem 2 vCPU w Local SSD | $946 | $683 | $63 | $43 |
| Highmem 2 vCPU no local disk | $850 | $683 | $56 | $43 |
| Highcpu 2 vCPU w Local SSD | $666 | $543 | $178 | $136 |
| Highcpu 2 vCPU no local disk | $543 | $543 | $136 | $136 |

Azure's rates are clearly better than Amazon's, and by a good margin: Azure offers better discounted rates for Standard, Highmem and Highcpu compute instances.
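Using the figures from the tables above, a quick back-of-the-envelope calculation shows how many hours per year an instance must actually run before a one-year reserved instance beats on-demand pricing. This is a rough sketch only; it ignores partial-hour rounding, interim price changes and any upfront-payment options.

```
# Break-even point between on-demand and 1-year reserved pricing, using the AWS
# figures for a Standard 2 vCPU instance with no local disk from the tables above.
on_demand_hourly = 0.100   # USD per hour
reserved_annual = 622.0    # USD per year
hours_per_year = 24 * 365  # 8760

break_even_hours = reserved_annual / on_demand_hourly
utilisation = break_even_hours / hours_per_year

print(f"Break-even at {break_even_hours:.0f} hours (~{utilisation:.0%} of the year)")
# Roughly 6220 hours, i.e. about 71% utilisation. Below that, on-demand is
# cheaper; above it, the reserved instance saves money.
```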
Optimizing Cloud Pricing

Subscribers need to move beyond short-term, one-time fixes and make use of automation to continuously monitor their spend, raise alerts for over- or under-use of a service, and take automated action based on predetermined conditions. Here are some of the ways you can optimize your cloud spending.

Cloud Pricing Calculators

Cloud pricing tools enable you to list the different parameters of your AWS or Azure subscriptions and use them to calculate the approximate monthly cost that would likely be incurred. You can try the official cloud pricing calculators from AWS and Azure (such as the AWS Simple Monthly Calculator) or a third-party pricing calculator. Calculators help you to optimize your pricing based on your requirements. For example, if you have a long-term requirement for running instances and you're currently running them under on-demand pricing schemes, cloud calculators can offer better insight into reserved-instance schemes and other ways to improve your cloud expenditure. For instance, the Azure calculator by NetApp offers additional price-optimization options, including options to tier less frequently used data to storage objects like Azure Blob and to customize snapshot creation and storage efficiency. Zerto is another popular calculator for Azure and AWS with a simpler interface. Note, however, that the estimated cost is based on current pricing and is subject to change.

Price List API

Historically, narrowing down the final usage cost involved a considerable amount of manual rate checking: collecting price points, then checking and cross-referencing them by hand. In the case of AWS, the Price List API offers programmatic access, which is especially beneficial to designers who can now query the AWS price list instead of searching manually through the web. The queries can be constructed in simple code in any language. Azure offers a similar billing API for gaining insights into your Azure usage programmatically.

Summary

Understanding and optimizing cloud pricing is challenging with AWS and Azure, partly because they offer hundreds of features with different pricing options, and new features are added every week. To tackle some of these complexities, we've covered some of the popular ways to approach pricing in AWS and Azure:

- How cloud pricing works and the different pricing schemes in AWS and Azure
- A comparison of the different instance pricing options in AWS and Azure, including reserved, on-demand and discounted instances
- Third-party tools, like calculators, for optimizing price
- The Price List API for AWS and Azure

If you have any thoughts to share, feel free to post them in the comments.

About the author

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint and Ixia. Gilad is a 3-time winner of international technical communication awards, including the STC Trans-European Merit Award and the STC Silicon Valley Award of Excellence. Over the past 7 years, Gilad has headed Agile SEO, which performs strategic search marketing for leading technology brands. Together with his team, Gilad has done market research, developer relations, and content strategy in 39 technology markets, lending him a broad perspective on trends, approaches, and ecosystems across the tech industry.

Cloud computing trends in 2019
The 10 best cloud and infrastructure conferences happening in 2019
Bo Weaver on Cloud security, skills gap, and software development in 2019


Key trends in software development in 2019: cloud native and the shrinking stack

Richard Gall
18 Dec 2018
8 min read
Bill Gates is quoted as saying that we tend to overestimate the pace of change over a period of two years, but underestimate change over a decade. It's an astute observation: much of what will matter in 2019 actually looks a lot like what we said would be important in development this year. But if you look back 10 years, the change in the types of applications and websites we build, as well as how we build them, is astonishing. The web as we understood it in 2008 is almost unrecognisable. Today, we are in the midst of the app and API economy. Notions of surfing the web sound almost as archaic as a dial-up tone. Similarly, the JavaScript framework boom now feels old hat; building for browsers just sounds weird...

So, as we move into 2019, progressive web apps, artificial intelligence, and native app development remain at the top of the development agenda. But this doesn't mean these changes can be dismissed as empty hype. If anything, as adoption increases and new tools emerge, we will begin to see more radical shifts in ways of working. The cutting edge will need to sharpen itself elsewhere.

What will it mean to be a web developer in 2019?

These changes are driving wider changes in the industry; arguably, they're transforming what it means to be a web developer. As applications become increasingly lightweight (thanks to libraries and frameworks like React and Vue), and data becomes more intensive thanks to the range of services on which applications and websites depend, developers need to expand across the stack. You can see this in some of the latest Packt titles: in Modern JavaScript Web Development Cookbook, for example, you'll learn microservices and native app development, topics that have typically fallen outside the strict remit of web development.

The simplification of many aspects of development has, ironically, forced developers to look more closely at how those aspects fit together. As you move further into layers of abstraction, the way things interact and work alongside each other becomes vital. For the most part, it's no longer a case of writing the requisite code to make something run on the specific part of the application you're working on; it's about understanding how the various pieces, from the backend to the frontend, fit together. This means that in 2019 you need to dive deeper and get to know your software systems inside out. Get comfortable with the backend. Dive into cloud. Start playing with microservices. Rethink and revisit languages you thought you knew.

Get to know your infrastructure: tackling the challenges of API development

It might sound strange, but as the stack shrinks and the responsibilities of developers - web and otherwise - shift, understanding the architectural components within the software they're building is essential. You could blame some of this on DevOps; essentially, it has made developers responsible for how their code runs once it hits production. Because of this important change, the requisite skills and toolchain for the modern developer are also expanding. There are a range of routes into software architecture, but exploring API design is a good place to begin. Hands-on RESTful API Design offers a practical way into the topic. While REST is the standard for API design, the diverse range of tools and approaches makes managing the client a potentially complex but interesting area.
GraphQL, a query language developed by Facebook, is said to have killed off REST (although we wouldn't be so hasty), while Redux and Relay, two libraries for managing data in React applications, have seen a lot of interest over the last 12 months as two key tools for working with APIs. Want to get started with GraphQL? Try Beginning GraphQL. Learn Redux with Learning Redux.

Microservices: take responsibility for your infrastructure

The reason we're seeing so many tools offering ways of managing APIs is that microservices are becoming the dominant architectural mode, and this requires developer attention too. That's not to say that you need to implement microservices now (in fact, there are probably many reasons not to), but if you want to be building software in five years' time, getting to grips with the principles behind microservices and the tools that can help you use them is a wise investment.

Perhaps one of the central technologies driving microservices is containers. You could run microservices in a virtual machine, but because VMs are harder to scale than containers, you probably wouldn't see the benefits you'd expect from a microservices architecture. This means getting to grips with core container technologies is vital. Docker is the obvious place to start. There are varying degrees to which developers need to understand it, but even if you don't think you'll be using it immediately, it gives you a nice real-world foundation in containers if you don't already have one. Watch and learn how to put Docker to work with the Hands on Docker for Microservices video.

Beyond Docker, Kubernetes is the go-to tool for scaling and orchestrating containers. This gives you control over how you scale application services in a way that you probably couldn't have imagined a decade ago. Get a grounding in Kubernetes with Getting Started with Kubernetes - Third Edition, or follow a 7-day learning plan with Kubernetes in 7 Days. If you want to learn how Docker and Kubernetes come together as part of a fully integrated approach to development, check out Hands on Microservices with Node.js.

It's time for developers to embrace cloud

It should come as no surprise that, if the general trend is towards full stack, where everything is everyone's problem, developers simply can't afford to ignore cloud. And why would you want to? The levels of abstraction it offers, and the various services and integrations that come with the leading cloud platforms, can make many elements of the development process much easier. Issues surrounding scale, hardware, setup and maintenance almost disappear when you use cloud. That's not to say that cloud platforms don't bring their own set of challenges, but they do allow you to focus on more interesting problems.

More importantly, they open up new opportunities. Serverless becomes a possibility, allowing you to scale incredibly quickly by running everything on your cloud provider, but there are other advantages too. Want to get started with serverless? Check out some of these titles:

- JavaScript Cloud Native Development Cookbook
- Hands-on Serverless Architecture with AWS Lambda [Video]
- Serverless Computing with Azure [Video]

For example, when you use cloud you can bring advanced features like artificial intelligence into your applications. AWS has a whole suite of machine learning tools: AWS Lex can help you build conversational interfaces, while AWS Polly turns text into speech.
Similarly, Azure Cognitive Services has a diverse range of features for vision, speech, language, and search. What cloud brings you, as a developer, is a way of increasing the complexity of applications and processes while maintaining agility. Adding such features and optimizations might previously have felt sluggish, maybe even impossible. But by leveraging AWS and Azure (among others), you can do much more than you previously realised.

Back to basics: new languages and fresh approaches

With all of this ostensible complexity in contemporary software development, you'd be forgiven for thinking that languages simply don't matter. That's obviously nonsense. There's an argument that gaining a deeper understanding of how languages work, what they offer, and where they may be weak can make you a much more accomplished developer. "Be prepared" is sage advice for a world where everything is unpredictable, both in the real world and inside our software systems. So, you have two options, and both are smart: either go back to a language you know and explore a new paradigm, or learn a new language from scratch.

Learn a new language:

- Kotlin Quick Start Guide
- Hands-On Go Programming
- Mastering Go
- Learning TypeScript 2.x - Second Edition

Explore a new programming paradigm:

- Functional Programming in Go [Video]
- Mastering Functional Programming
- Hands-On Functional Programming in Rust
- Hands-On Object-Oriented Programming with Kotlin

2019: the same, but different, basically...

It's not what you should be saying if you work for a tech publisher, but I'll be honest: software development in 2019 will look a lot like it did in 2018. But that doesn't mean you have time to be complacent. In just a matter of years, much of what feels new or 'emerging' today will be the norm. You don't have to look hard to see the set of skills many full-stack developer job postings are asking for; the demands are so diverse that adaptability is clearly immensely valuable, both for your immediate projects and your future career prospects. So, as 2019 begins, commit to developing yourself and sharpening your skill set.


What is distributed computing and what's driving its adoption?

Melisha Dsouza
07 Nov 2018
8 min read
Distributed computing is having a real impact on the way companies look at the cloud. The "Most Promising Jobs 2018" report published by LinkedIn pointed out that distributed and cloud computing rank amongst the top 10 most in-demand skills.

What are the problems with centralized computing systems?

Distributed computing solves many of the challenges that centralized computing systems pose today. These centralized systems, like IBM mainframes, have been around for decades, but they're beginning to lose favor. This is because centralized computing is ineffective and expensive in the context of increasing data and workloads. When a single central computer controls a massive amount of computation at the same time, it puts a massive strain on the system, even one that's particularly powerful. Centralized systems simply aren't capable of processing huge volumes of transactional data and supporting tons of concurrent online users. There's also a big issue with reliability: if your centralized server fails, all data could be permanently lost if you have no disaster recovery strategy. Fortunately, distributed computing offers solutions to many of these issues.

How does distributed computing work?

Distributed computing comprises a group of systems located at different places, all connected over a network and working on a single problem or a common goal. Each of these systems is autonomous, programmable, asynchronous and failure-prone. Together they provide a better price/performance ratio than a centralized system, because it's more economical to add microprocessors to a network than mainframes, and they offer more computational power than centralized (mainframe) computing systems.

Distributed computing and agility

Another major plus of distributed computing systems is that they provide much greater agility than centralized computing systems. Without centralization, organizations can add and change software and computational power according to the demands and needs of the business. With the reduction in the price of computing power and storage, thanks to the rise of public cloud services like AWS, organizations all over the world have begun using distributed systems and service-oriented architectures, like microservices.

Distributed computing in action: Google search

A perfect example of distributed computing in action is Google search. When a user submits a query, Google uses data from a number of different servers to deliver results, based on things like location, past searches, semantic keywords and much, much more. These servers are located all around the world and are able to provide search results in seconds, or at times milliseconds.

How cloud is driving the adoption of distributed computing

Central to this adoption is the cloud. Today, cloud is mainstream and opens up the possibility of distributed systems to organizations in a number of different ways; arguably, you're not really seeing the full potential of cloud until you've moved to a distributed system. Let's take a look at the different ways cloud services are helping companies feel confident enough to successfully leverage distributed computing.

Infrastructure as a Service (IaaS)

IaaS makes distributed systems accessible for many organizations by allowing them to host their infrastructure on either a private or public cloud.
Essentially, IaaS gives an organization control over the operating system and platform that form the foundation of its software infrastructure, while giving an external cloud provider control over the servers and virtualization technologies that make it possible to deploy that infrastructure. In the context of a distributed system, this means organizations have less to worry about. As you can imagine, without IaaS, the process of developing and deploying a distributed system becomes much more complex and costly.

Platform as a Service: custom software on another platform

If IaaS effectively splits responsibilities between the organization and the cloud provider (the 'service'), Platform as a Service (PaaS) 'outsources' even more to the cloud provider. Essentially, an organization simply has to handle its applications and data, leaving every other aspect of its infrastructure to the platform. This brings many benefits and, in theory, should allow even relatively small engineering teams to take advantage of a distributed system. The underlying complexity and heavy lifting that a distributed system brings rests with the cloud provider, allowing an organization's engineers to focus on what matters most: shipping code. If you're thinking about speed and innovation, then a PaaS opens that right up, provided you're happy to allow your cloud provider to manage the bulk of your infrastructure.

Software as a Service

SaaS solutions are perhaps the clearest example of a distributed system. Arguably, given the way we use SaaS today, it's easy to forget that it can be part of a distributed system. The concept is simple: it's a complete software solution delivered to the end user. If you're trying to accomplish something particularly complex, something you simply do not have the resources to do yourself, a SaaS solution could be effective. Users don't need to worry about installing and maintaining software; they can simply access it via the internet.

The biggest advantages of adopting a distributed computing system

#1 Complete control over the system architecture

Distributed computing opens up your options when it comes to system architecture. Although you might rely on an external cloud service for some resources (like compute or storage), the architectural decisions are ultimately yours. This means that you can make decisions based on exactly what your organization needs and how it works. In a sense, this is why distributed computing can bring you agility, but it's not just about being agile in the strict sense of the word: it allows you to prioritize according to your own needs and demands.

#2 Improved "absolute performance" of the computing system

Tasks can be partitioned into sub-computations that run concurrently, which in turn speeds up overall task completion. What's more, if a particular site is currently overloaded with jobs, some of them can be moved to lightly loaded sites. This technique of 'load sharing' can boost the performance of your system. Essentially, distributed systems minimize latency and response time while increasing throughput.

#3 A better price-to-performance ratio for the system

Distributed networks offer a better price/performance ratio compared to centralized mainframe computers.
This is because decentralized and modular applications can share expensive peripherals, such as high-capacity file servers and high-resolution printers. Similarly, multiple components can be run on nodes with specialized processing, which further reduces the cost of maintaining multiple specialized processing systems.

#4 Disaster recovery

Distributed systems involve services communicating across different machines, which is where message integrity, confidentiality and authentication come into play. Distributed computing gives organizations the flexibility to deploy a four-way mechanism to keep operations secure: encryption, authentication, authorization, and auditing. Another aspect of disaster recovery is reliability. If computation and the associated data are effectively tied to a single machine, and that machine goes down, the entire service goes with it. With a distributed system, specific services might go down, but the whole thing should, in theory at least, stay standing.

#5 Resilience through replication

So, if specific services can go down within a distributed system, you still need to do something to increase resilience. You do this by replicating services across multiple nodes, minimizing potential points of failure. This is what's known as fault tolerance, and it improves system reliability without affecting the system as a whole. It's also worth pointing out that the hardware on which a distributed system is built is replaceable; this is better than depending on centralized hardware which, if it fails, takes everything with it.

Another distributed computing example: SETI

A good example of a distributed system is SETI. SETI collects massive amounts of data from observatories around the world on activity in the sky, in a bid to identify possible signs of extraterrestrial life. This information is then sliced into smaller pieces of data for easy analysis by distributed computing applications running as a screensaver on individual user PCs around the world. A PC running the SETI screensaver downloads a small data slice from SETI, runs the analytics application while the PC is idle, and uploads the analyzed data slice back to SETI when the analysis is complete. This massive data analysis is possible only because of distributed computing.

So, although distributed computing has become a bit of a buzzword, the technology is gaining traction in the minds of customers and service providers. Beyond the hype and debate, these services will ultimately help companies to be more responsive to market conditions while restraining IT costs.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Oath's distributed network telemetry collector 'Panoptes' is now open source!
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
article-image-edge-computing-trick-or-treat
Melisha Dsouza
31 Oct 2018
4 min read
Save for later

Edge computing - Trick or Treat?

Melisha Dsouza
31 Oct 2018
4 min read
According to IDC's Digital Universe update, the number of connected devices is projected to grow from 30 billion in 2020 to 80 billion by 2025. IDC also estimates that the amount of data created and copied annually will reach 180 Zettabytes (180 trillion gigabytes) in 2025, up from less than 10 Zettabytes in 2015. Thomas Bittman, vice president and distinguished analyst at Gartner Research, in a session on edge computing at the recent Gartner IT Infrastructure, Operations Management and Data Center Conference, predicted: "In the next few years, you will have edge strategies - you'll have to." This prediction was consistent with a real-time poll conducted at the conference, which found that 25% of the audience already uses edge computing technology and more than 50% plan to implement it within two years.

How does edge computing work?

2018 marked the era of edge computing, with the increase in the number of smart devices and the massive amounts of data generated by them. Edge computing allows data produced by internet of things (IoT) devices to be processed near the edge of a user's network. Instead of relying on the shared resources of large data centers in a cloud-based environment, edge computing places more demands on endpoint devices and intermediary devices like gateways, edge servers and other new computing elements to build a complete edge computing environment.

Some use cases of edge computing

The complex architecture of devices today demands a more comprehensive computing model to support its infrastructure. Edge computing caters to this need and reduces the latency, overhead and cost issues associated with centralized computing options like the cloud. A good example of this is the launch of the world's first digital drilling vessel, the Noble Globetrotter I, by London-based offshore drilling company Noble Drilling. The vessel uses data to create virtual versions of some of the key equipment on board. If the drawworks on this digitized rig begins to fail prematurely, information based on a 'digital twin' of that asset will notify a team of experts onshore. The digital twin is a virtual model of the device that lives inside the edge processor and can point out tiny performance discrepancies that human operators may easily miss. Keeping a watch on all pertinent data on a dashboard, the onshore team can collaborate with the rig's crew to plan repairs before a failure. Noble believes that this move towards edge computing will lead to more efficient, cost-effective offshore drilling. By predicting potential failures in advance, Noble can avert breakdowns and also spare the expense of replacing or repairing equipment.

Another development that caught our attention was Microsoft's $5 billion investment in IoT to empower the intelligent cloud and the intelligent edge. Azure Sphere is one of Microsoft's intelligent edge solutions to power and protect connected microcontroller unit (MCU)-powered devices. MCU-powered devices power everything from household stoves and refrigerators to industrial equipment, and considering that 9 billion MCU-powered devices ship every year, we need all the help we can get on the security front! That's the intelligent edge for you on the consumer end of the application spectrum.

2018 also saw progress in the development of edge computing tools and solutions across the spectrum, from hardware to software.
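The digital-twin example above boils down to running the first pass of analysis on the gateway itself and only escalating what matters. The sketch below illustrates that pattern in Python; the metric names, expected ranges, and the send_to_cloud stub are assumptions made purely for illustration, not part of Noble's or any vendor's actual system.

```python
# Expected operating ranges held on the edge gateway (illustrative values).
EXPECTED_RANGE = {"drawworks_rpm": (90, 110), "motor_temp_c": (40, 75)}

def send_to_cloud(alert):
    # Stand-in for an MQTT/HTTPS uplink to the onshore dashboard.
    print("ALERT ->", alert)

def process_on_edge(reading):
    """Runs on the gateway itself; returns True if the reading was escalated."""
    low, high = EXPECTED_RANGE[reading["metric"]]
    if low <= reading["value"] <= high:
        return False  # normal reading: handled locally, no bandwidth spent
    send_to_cloud({"metric": reading["metric"],
                   "value": reading["value"],
                   "expected_range": (low, high)})
    return True

process_on_edge({"metric": "motor_temp_c", "value": 82})    # escalated to the cloud
process_on_edge({"metric": "drawworks_rpm", "value": 100})  # stays on the edge
```

Only the anomalous reading crosses the network, which is precisely the latency and bandwidth saving that makes edge processing attractive.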
Take, for instance, OpenStack Rocky, one of the most widely deployed open source cloud infrastructure platforms. It is designed to accommodate edge computing requirements by deploying containers directly on bare metal. OpenStack Ironic brings improved management and automation capabilities to bare metal infrastructure: users can manage physical infrastructure just like they manage VMs, especially with the new Ironic features introduced in Rocky. Intel's OpenVINO computer vision toolkit is yet another example of edge computing helping developers streamline their deep learning inference and deploy high-performance computer vision solutions across a wide range of use cases. Baidu, Inc. released the Kunlun AI chip, built to handle AI models both for edge computing on devices and in the cloud via data centers.

Edge computing - Trick or Treat?

However, edge computing does come with disadvantages, like the steep cost of deploying and managing an edge network, security concerns, and the extra operational burden it creates. The final verdict: edge computing is definitely a treat when complemented by embedded AI, enhancing networks to promote efficiency in analysis and improve security for business systems.

Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
Ubuntu 18.10 'Cosmic Cuttlefish' releases with a focus on AI development, multi-cloud and edge deployments, and much more!

Types of Cloud Computing Services: IaaS, PaaS, and SaaS

Amey Varangaonkar
07 Aug 2018
4 min read
Cloud computing has risen massively in popularity in recent times. This is due to the way it reduces on-premises infrastructure costs and improves efficiency. Primarily, the cloud model has been divided into three major service categories:
Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)
We will discuss each of these in the following sections. The article is an excerpt taken from the book 'Cloud Analytics with Google Cloud Platform', written by Sanket Thodge.

Infrastructure as a Service (IaaS)

Infrastructure as a Service provides infrastructure such as servers, virtual machines, networks, operating systems, storage, and much more on a pay-as-you-use basis. IaaS providers offer VMs ranging from small to extra-large machines. IaaS gives you complete freedom in choosing the instance type as per your requirements. Common cloud vendors providing IaaS services are:
Google Cloud Platform
Amazon Web Services
IBM
HP Public Cloud

Platform as a Service (PaaS)

The PaaS model is similar to IaaS, but it also provides additional tools such as database management systems, business intelligence services, and so on. The following figure illustrates the architecture of the PaaS model. Cloud platforms providing PaaS services are as follows:
Windows Azure
Google App Engine
Cloud Foundry
Amazon Web Services

Software as a Service (SaaS)

Software as a Service (SaaS) lets users connect to products over the internet (or sometimes build them in-house as a private cloud solution) on a subscription model. The image below shows the basic architecture of the SaaS model. Some cloud vendors providing SaaS are:
Google Applications
Salesforce
Zoho
Microsoft Office 365

Differences between SaaS, PaaS, and IaaS

The major differences between these models can be summarized as follows:

Software as a Service (SaaS): A model in which a third-party provider hosts multiple applications and lets customers use them over the internet. SaaS is a very useful pay-as-you-use model. Examples: Salesforce, NetSuite.
Platform as a Service (PaaS): A model in which a third-party provider offers an application development platform and services built on its own infrastructure. Again, these tools are made available to customers over the internet. Examples: Google App Engine, AWS Lambda.
Infrastructure as a Service (IaaS): A model in which a third-party provider supplies servers, storage, compute resources, and so on, and makes them available for customers to use. Customers can use IaaS to build their own PaaS and SaaS services for their own customers. Examples: Google Cloud Compute, Amazon S3.

How PaaS, IaaS, and SaaS are separated at a service level

In this section, we are going to learn how we can separate IaaS, PaaS, and SaaS at the service level. As the previous diagram suggests, the first column is OPS, which stands for operations - the bare minimum requirement for any typical server. When we buy a server, we should consider these features before buying: Application, Data, Runtime, Framework, Operating System, Server, Disk, and Network Stack. When we move to the cloud and decide to go with IaaS, we are no longer bothered about the server, disk, and network stack. Thus, the headache of handling the hardware is no longer ours. That's why it is called Infrastructure as a Service.
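To make the IaaS idea tangible, here is a minimal, hedged sketch of requesting a virtual machine programmatically with the AWS SDK for Python (boto3): you choose only the machine image and instance size, and the provider takes care of the physical server, disk, and network stack. The AMI ID and region below are placeholders rather than real values, and credentials are assumed to come from your local AWS configuration.

```python
import boto3

# Ask the provider for one small virtual machine; everything beneath the OS
# (physical server, disk, network stack) is the provider's responsibility.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder machine image ID
    InstanceType="t2.micro",          # you only choose the size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

With PaaS and SaaS, discussed next, even this small amount of provisioning code disappears from your side of the shared responsibility line.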
Now, if we think of PaaS, we no longer need to worry about the runtime, framework, and operating system, on top of the components already handled in IaaS. The only things we need to focus on are the application and the data. And the last model is SaaS - Software as a Service. In this model, we are not concerned with managing any part of the stack: the only things we need to work on are the code and a glance at the bill. It's that simple! If you found the above excerpt useful, make sure to check out the book 'Cloud Analytics with Google Cloud Platform' for more such interesting insights into Google Cloud Platform.

Read more
Top 5 cloud security threats to look out for in 2018
Is cloud mining profitable?
Why AWS is the preferred cloud platform for developers working with big data?

Demystifying Clouds: Private, Public, and Hybrid clouds

Amey Varangaonkar
03 Aug 2018
5 min read
Cloud computing is as much about learning the architecture as it is about the different deployment options available to us. We need to know the different ways our cloud infrastructure can be kept open to the world, and whether we want to restrict it. In this article, we look at the three ways of deploying cloud computing. There are three major cloud deployment models available to us today:
Private cloud
Public cloud
Hybrid cloud
In this excerpt, we will look at each of these separately. The following excerpt has been taken from the book 'Cloud Analytics with Google Cloud Platform', written by Sanket Thodge.

Private cloud

Private cloud services are built specifically for companies that want to keep everything in-house. They give users the freedom to customize hardware, software options, and storage options. This typically works as a central data center for the internal end users. This model reduces dependencies on external vendors. Enterprise users accessing this cloud may or may not be billed for utilizing the services. A private cloud changes how an enterprise decides on the architecture of the cloud and how it is going to be applied in its infrastructure. Administration of a private cloud environment can be carried out by internal or outsourced staff.

Common private cloud technologies and vendors include the following:
VMware: https://cloud.vmware.com
OpenStack: https://www.openstack.org
Citrix: https://www.citrix.co.in/products/citrix-cloud
CloudStack: https://cloudstack.apache.org
Go Grid: https://www.datapipe.com/gogrid

With a private cloud, the same organization acts as both the cloud consumer and the cloud provider, since the infrastructure is built by the enterprise and the consumers also come from the same enterprise. But in order to differentiate these roles, a separate organizational department typically assumes the responsibility for provisioning the cloud and therefore takes on the cloud provider role, whereas the departments requiring access to this established private cloud take on the role of the cloud consumer.

Public cloud

In a public cloud deployment model, a third-party cloud service provider delivers the cloud service over the internet. Public cloud services are sold on demand, typically billed by the minute or hour. If you want, you can also go for a long-term commitment of up to five years in some cases, such as when renting a virtual machine. In the case of renting a virtual machine, customers pay for the duration, storage, or bandwidth that they consume (this might vary from vendor to vendor).

Major public cloud service providers include:
Google Cloud Platform: https://cloud.google.com
Amazon Web Services: https://aws.amazon.com
IBM: https://www.ibm.com/cloud
Microsoft Azure: https://azure.microsoft.com
Rackspace: https://www.rackspace.com/cloud

The architecture of a public cloud will typically look as follows:

Hybrid cloud

The next and last cloud deployment type is the hybrid cloud. A hybrid cloud is an amalgamation of public cloud services (the likes of GCP, AWS, and Azure) and an on-premises private cloud (built by the respective enterprise). Both the on-premises and public parts have their roles here: on-premises is more for mission-critical applications, whereas the public cloud manages spikes in demand. Automation is enabled between both environments.
The following figure shows the architecture of a hybrid cloud. The major benefit of a hybrid cloud is that it creates a unified, highly automated, and extremely scalable environment that takes advantage of everything a public cloud infrastructure has to offer, while still maintaining control over mission-critical data.

Some common hybrid cloud examples include:
Hitachi hybrid cloud: https://www.hitachivantara.com/en-us/solutions/hybrid-cloud.html
Rackspace: https://www.rackspace.com/en-in/cloud/hybrid
IBM: https://www.ibm.com/it-infrastructure/z/capabilities/hybrid-cloud
AWS: https://aws.amazon.com/enterprise/hybrid

Differences between the private cloud, hybrid cloud, and public cloud models

The following summarizes the differences between the three cloud deployment models:

Private cloud
Definition: A cloud computing model in which an enterprise uses its own proprietary software and hardware, limited to its own data centre. The servers, cooling systems, and storage all belong to the company.
Characteristics: Single-tenant architecture, on-premises hardware, direct control of the hardware.
Vendors: HPE, VMware, Microsoft, OpenStack.

Hybrid cloud
Definition: A mixture of private and public cloud. A few components run on-premises in a private cloud, and they are connected to other services on a public cloud with careful orchestration.
Characteristics: Cloud-bursting capacity, the advantages of both public and private cloud, freedom to choose services from multiple vendors.
Vendors: A combination of public and private cloud providers.

Public cloud
Definition: A third-party company lets us use its infrastructure for a given period of time on a pay-as-you-use model. The general public can access this infrastructure, and no in-house servers need to be maintained.
Characteristics: Pay-per-use model, multi-tenant model.
Vendors: Google Cloud Platform, Amazon Web Services, Microsoft Azure.

We saw that the three models are quite distinct from each other, each bringing specialized functionality to a business depending on its needs. If you found the above excerpt useful, make sure to check out the book 'Cloud Analytics with Google Cloud Platform' for more information on GCP and how you can perform effective analytics on your data using it.

Read more
Why Alibaba cloud could be the dark horse in the public cloud race
Is cloud mining profitable?
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)

Top 10 IT certifications for cloud and networking professionals in 2018

Vijin Boricha
05 Jul 2018
7 min read
Certifications have always proven to be one of the best ways to boost one's IT career. Irrespective of the domain you choose, you will always have an upper hand if your resume showcases some valuable IT certifications. Certified professionals attract employers because certifications are an external validation that an individual is competent in that technical skill. Certifications push individuals to start thinking out of the box, become more efficient in what they do, and execute goals with minimal errors. If you are looking to enhance your skills and increase your salary, this is a tried and tested method. Here are the top 10 IT certifications that will help you advance your IT career.

AWS Certified Solutions Architect - Associate

AWS is currently the market leader in the public cloud. Packt Skill Up Survey 2018 confirms this too. (Source: Packt Skill Up Survey 2018.) AWS Cloud from Amazon offers a cutting-edge platform for architecting, building, and deploying web-scale cloud applications. With the rapid adoption of cloud platforms, the need for cloud certifications has also increased. IT professionals with some experience of AWS Cloud who are interested in designing effective cloud solutions opt for this certification. This exam validates your ability to architect and deploy secure and robust applications on AWS technologies. Individuals who fail an exam must wait 14 days before they are eligible to retake it; there isn't any limit on the number of attempts. AWS certification passing scores depend on statistical analysis and are subject to change.
Exam Fee: $150
Average Salary: $119,233 per annum
Number of Questions: 65
Type of Question: MCQ
Available Languages: English, Japanese

AWS Certified Developer - Associate

This is another role-based AWS certification that has gained enough traction for industries to treat it as a job validator. This exam helps individuals validate the software development knowledge needed to develop cloud applications on AWS. IT professionals with hands-on experience in designing and maintaining AWS-based applications should definitely go for this certification to stand out. Individuals who fail an exam must wait 14 days before they are eligible to retake it; there isn't any limit on the number of attempts. AWS certification passing scores depend on statistical analysis and are subject to change.
Exam Fee: $150
Average Salary: $116,456 per annum
Number of Questions: 65
Type of Question: MCQ
Available Languages: English, Simplified Chinese, and Japanese

Project Management Professional (PMP)

Project Management Professional is one of the most valuable certifications for project managers. The beauty of this certification is that it not only teaches individuals proven methodologies but also makes them proficient in whichever industry domain they choose to pursue. The techniques and knowledge one gains from this certification are applicable in any industry globally. This certification promises that PMP-certified project managers are capable of completing projects on time, within the desired budget, and in line with the original project goal.
Exam Fee: Non-PMI Members: $555 / PMI Members: $405
Average Salary: $113,000 per annum
Number of Questions: 200
Type of Question: A combination of multiple choice and open-ended
Passing Threshold: 80.6%

Certified Information Systems Security Professional (CISSP)

CISSP is one of the most globally recognized security certifications.
This cybersecurity certification is a great way to demonstrate your expertise and build industry-level security skills. On achieving this certification users will be well-versed in designing, engineering, implementing, and running an information security program. Users need at least 5 years of minimum working experience in order to be eligible for this certification. This certification will help you measure your competence in designing and maintaining a robust environment. Exam Fee: $699 Average Salary: $111,638 per annum Number of Questions: 250 (each question carries 4 marks) Type of Question: Multiple Choice Passing Threshold: 700 marks CompTIA Security+ CompTIA Security+ certification is a vendor neutral certification used to kick-start one’s career as a security professional. It helps users get acquainted to all the aspects related to IT security. If you are inclined towards systems administration, network administration, and security administration, this is something that you should definitely go for. With this certification users learn the latest trends and techniques in risk management, risk mitigation, threat management and intrusion detection. Exam Fee: $330 Average Salary: $95,829 per annum Number of Questions: 90 Type of Question: Multiple Choice Available Languages: English (Japanese, Portuguese and Simplified Chinese estimated Q2 2018) Passing Threshold: 750/900 CompTIA Network+ Another CompTIA certification! Why? CompTIA Network+ is a certification that helps individuals in developing their career and validating their skills to troubleshoot, configure, and manage both wired and wireless networks. So, if you are an entry-level IT professional interested in managing, maintaining, troubleshooting and configuring complex network infrastructures then, this one is for you. Exam Fee: $302 Average Salary: $90,280 per annum Number of Questions: 90 Type of Question: Multiple Choice Available Languages: English (In Development: Japanese, German, Spanish, Portuguese) Passing Threshold: 720 (on a scale of 100-900) VMware Certified Professional 6.5 – Data Center Virtualization (VCP6.5-DCV) Yes, even today virtualization is highly valued in a lot of industries. Data Center Virtualization Certification helps individuals develop skills and abilities to install, configure, and manage a vSphere 6.5 infrastructure. This industry-recognized certification validates users’ knowledge on implementing, managing, and troubleshooting a vSphere V6.5 infrastructure. It also helps IT professionals build a  foundation for business agility that can accelerate the transformation to cloud computing. Exam Fee: $250 Average Salary: $82,342 per annum Number of Questions: 46 Available language: English Type of Question: Single and Multiple Choice Passing Threshold: 300 (on a scale of 100-500) CompTIA A+ Yet another CompTIA certification that helps entry level IT professionals have an upper hand. This certification is specially for individuals interested in building their career in technical support or IT operational roles. If you are thinking more than just PC repair then, this one is for you. By entry level certification I mean this is a certification that one can pursue simultaneously while in college or secondary school. CompTIA A+ is a basic version of Network+ as it only touches basic network infrastructure issues while making you proficient as per industry standards. 
Exam Fee: $211 Average Salary:$79,390 per annum Number of Questions: 90 Type of Question: Multiple Choice Available Languages: English, German, Japanese, Portuguese, French and Spanish Passing Threshold: 72% for 220-801 exam and 75% for 220-802 exam Cisco Certified Networking Associate (CCNA) Cisco Certified Network Associate (CCNA) Routing and Switching is one of the most important IT certifications to stay up-to date with your networking skills. It is a foundational certification for individuals interested in a high level networking profession. The exam helps candidates validate their knowledge and skills in networking, LAN switching, IPv4 and IPv6 routing, WAN, infrastructure security, and infrastructure management. This certification not only validates users networking fundamentals but also helps them stay relevant with skills needed to adopt next generation technologies. Exam Fee: $325 Average Salary:$55,166-$90,642 Number of Questions: 60-70 Available Languages: English, Japanese Type of Question: Multiple Choice Passing Threshold: 825/1000 CISM (Certified Information Security Manager) Lastly, we have Certified Information Security Manager (CISM), a nonprofit certification offered by ISACA that caters to security professionals involved in information security, risk management and governance. This is an advanced-level certification for experienced individuals who develop and manage enterprise information security programs. Only users who hold five years of verified experience, out of which 3 year of experience in infosec management, are eligible for this exam. Exam Fee: $415- $595 (Cheaper for members) Average Salary: $52,402 to $243,610 Number of Questions: 200 Passing Threshold: 450  (on a scale of 200-800) Type of Question: Multiple Choice Are you confused as to which certification you should take-up? Well, leave your noisy thoughts aside and choose wisely. Pick-up an exam that is inclined to your interest. If you want to pursue IT security don’t end-up going for Cloud certifications. No career option is fun unless you want to pursue it wholeheartedly. Take a right step and make it count. Why AWS is the prefered cloud platform for developers working with big data? 5 reasons why your business should adopt cloud computing Top 5 penetration testing tools for ethical hackers  

Keep your serverless AWS applications secure [Tutorial]

Savia Lobo
18 Jun 2018
11 min read
Handling security is an extensive and complex topic. If it's not done right, you open up your app to dangerous hacks and breaches. Even if everything is done right, it may still be hacked. So it's important that we understand common security mechanisms to avoid exposing websites to vulnerabilities, and follow the recommended practices and methodologies that have been extensively tested and proven to be robust. In this tutorial, we will learn how to secure serverless applications using AWS. Additionally, we will learn about the security basics and then move on to handling authorization and authentication using AWS. This article is an excerpt taken from the book 'Building Serverless Web Applications' written by Diego Zanon.

Security basics in AWS

One of the mantras of security experts is this: don't roll your own. It means you should never use in a production system any kind of crypto algorithm or security model that you developed by yourself. Always use solutions that have been widely used, tested, and recommended by trusted sources. Even experienced people may commit errors and expose a solution to attacks, especially in the cryptography field, which requires advanced math. However, when a proposed solution is analyzed and tested by a great number of specialists, errors are much less frequent.

In the security world, there is a term called security through obscurity. It is defined as a security model where the implementation mechanism is not publicly known, so there is a belief that it is secure because no one has prior information about its flaws. It can indeed be secure, but if used as the only form of protection, it is considered a poor security practice. If a hacker is persistent enough, he or she can discover flaws even without knowing the internal code. In this case, again, it's better to use a highly tested algorithm than your own. Security through obscurity can be compared to someone trying to protect their own money by burying it in the backyard, when the common security mechanism would be to put the money in a bank. The money may be safe while buried, but it will be protected only until someone finds out about its existence and starts to look for it. For this reason, when dealing with security, we usually prefer to use open source algorithms and tools. Everyone can access and discover flaws in them, but there is also a great number of specialists involved in finding the vulnerabilities and fixing them. In this section, we will discuss other security concepts that everyone must know when building a system.

Information security

When dealing with security, there are some attributes that need to be considered. The most important ones are the following:
Authentication: Confirm the user's identity by validating that the user is who they claim to be
Authorization: Decide whether the user is allowed to execute the requested action
Confidentiality: Ensure that data can't be understood by third parties
Integrity: Protect the message against undetectable modifications
Non-repudiation: Ensure that someone can't deny the authenticity of their own message
Availability: Keep the system available when needed
These terms will be better explained in the next sections.

Authentication

Authentication is the ability to confirm the user's identity. It can be implemented by a login form where you request the user to type their username and password. If the hashed password matches what was previously saved in the database, you have enough proof that the user is who they claim to be.
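As a concrete illustration of that hashed-password check, here is a minimal sketch using only the Python standard library (PBKDF2 with a per-user salt and a constant-time comparison). The function names and parameters are illustrative assumptions; a real application would typically rely on a vetted password-hashing library and store the salt and digest alongside the user record.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # PBKDF2-HMAC-SHA256 with a random per-user salt and many iterations.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Recompute the hash for the supplied password and compare in constant time.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Note that the plaintext password is never stored; only the salt and the derived digest are kept, which is what makes the comparison above safe to persist in a database.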
This model is good enough, at least for typical applications. You confirm the identity by requesting the user to provide what they know. Another kind of authentication is to request the user to provide what they have. It can be a physical device (like a dongle) or access to an e-mail account or phone number. However, you can't ask the user to type their credentials for every request. As long as you authenticate it in the first request, you must create a security token that will be used in the subsequent requests. This token will be saved on the client side as a cookie and will be automatically sent to the server in all requests. On AWS, this token can be created using the Cognito service. How this is done will be described later in this chapter. Authorization When a request is received in the backend, we need to check if the user is allowed to execute the requested action. For example, if the user wants to checkout the order with ID 123, we need to make a query to the database to identify who is the owner of the order and compare if it is the same user. Another scenario is when we have multiple roles in an application and we need to restrict data access. For example, a system developed to manage school grades may be implemented with two roles, such as student and teacher. The teacher will access the system to insert or update grades, while the students will access the system to read those grades. In this case, the authentication system must restrict the actions insert and update for users that are part of the teachers group and users in the students group must be restricted to read their own grades. Most of the time, we handle authorization in our own backend, but some serverless services don't require a backend and they are responsible by themselves to properly check the authorization. For example, in the next chapter, we are going to see how serverless notifications are implemented on AWS. When we use AWS IoT, if we want a private channel of communication between two users, we must give them access to one specific resource known by both and restrict access to other users to avoid the disclosure of private messages. Confidentiality Developing a website that uses HTTPS for all requests is the main drive to achieve confidentiality in the communication between the users and your site. As the data is encrypted, it's very hard for malicious users to decrypt and understand its contents. Although there are some attacks that can intercept the communication and forge certificates (man-in-the-middle), those require the malicious user to have access to the machine or network of the victim user. From our side, adding HTTPS support is the best thing that we can do to minimize the chance of attacks. Integrity Integrity is related to confidentiality. While confidentiality relies on encrypting a message to prevent other users from accessing its contents, integrity deals with protecting the messages against modifications by encrypting messages with digital signatures (TLS certificates). Integrity is an important concept when designing low level network systems, but all that matters for us is adding HTTPS support. Non-repudiation Non-repudiation is a term that is often confused with authentication since both of them have the objective to prove who has sent the message. However, the main difference is that authentication is more interested in a technical view and the non-repudiation concept is interested in legal terms, liability, and auditing. 
When you have a login form with user and password input, you can authenticate the user who correctly knows the combination, but you can't have 100% certain since the credentials can be correctly guessed or stolen by a third-party. On the other hand, if you have a stricter access mechanism, such as a biometric entry, you have more credibility. However, this is not perfect either. It's just a better non-repudiation mechanism. Availability Availability is also a concept of interest in the information security field because availability is not restricted to how you provision your hardware to meet your user needs. Availability can suffer attacks and can suffer interruptions due to malicious users. There are attacks, such as Distributed Denial of Service (DDoS), that aim to create bottlenecks to disrupt site availability. In a DDoS attack, the targeted website is flooded with superfluous requests with the objective to overload the systems. This is usually accomplished by a controlled network of infected machines called a botnet. On AWS, all services run under the AWS Shield service, which was designed to protect against DDoS attacks with no additional charge. However, if you run a very large and important service, you may be a direct target of advanced and large DDoS attacks. In this case, there is a premium tier offered in the AWS Shield service to ensure your website's availability even in worst case scenarios. This requires an investment of US$ 3,000 per month, and with this, you will have 24x7 support of a dedicated team and access to other tools for mitigation and analysis of DDoS attacks. Security on AWS We use AWS credentials, roles, and policies, but security on AWS is much more than handling authentication and authorization of users. This is what we will discuss in this section. Shared responsibility model Security on AWS is based on a shared responsibility model. While Amazon is responsible for keeping the infrastructure safe, the customers are responsible for patching security updates to software and protecting their own user accounts. AWS's responsibilities include the following: Physical security of the hardware and facilities Infrastructure of networks, virtualization, and storage Availability of services respecting Service Level Agreements (SLAs) Security of managed services such as Lambda, RDS, DynamoDB, and others A customer's responsibilities are as follows: Applying security patches to the operating system on EC2 machines Security of installed applications Avoiding disclosure of user credentials Correct configuration of access policies and roles Firewall configurations Network traffic protection (encrypting data to avoid disclosure of sensitive information) Encryption of server-side data and databases In the serverless model, we rely only on managed services. In this case, we don't need to worry about applying security patches to the operating system or runtime, but we do need to worry about third-party libraries that our application depends on to execute. Also, of course, we need to worry about all the things that we need to configure (firewalls, user policies, and so on), the network traffic (supporting HTTPS) and how data is manipulated by the application. The Trusted Advisor tool AWS offers a tool named Trusted Advisor, which can be accessed through https://console.aws.amazon.com/trustedadvisor. It was created to offer help on how you can optimize costs or improve performance, but it also helps identify security breaches and common misconfigurations. 
It searches for unrestricted access to specific ports on your EC2 machines, if Multi-Factor Authentication is enabled on the root account and if IAM users were created in your account. You need to pay for AWS premium support to unlock other features, such as cost optimization advice. However, security checks are free. Pen testing A penetration test (or pen test) is a good practice that all big websites must perform periodically. Even if you have a good team of security experts, the usual recommendation is to hire a specialized third-party company to perform pen tests and to find vulnerabilities. This is because they will most likely have tools and procedures that your team may not have tried yet. However, the caveat here is that you can't execute these tests without contacting AWS first. To respect their user terms, you can only try to find breaches on your own account and assets, in scheduled time frames (so they can disable their intrusion detection systems for your assets), and only on restricted services, such as EC2 instances and RDS. AWS CloudTrail AWS CloudTrail is a service that was designed to record all AWS API calls that are executed on your account. The output of this service is a set of log files that register the API caller, the date/time, the source IP address of the caller, the request parameters, and the response elements that were returned. This kind of service is pretty important for security analysis, in case there are data breaches, and for systems that need the auditing mechanism for compliance standards. MFA Multi-Factor Authentication (MFA) is an extra security layer that everyone must add to their AWS root account to protect against unauthorized access. Besides knowing the user and password, a malicious user would also need physical access to your smartphone or security token, which greatly restricts the risks. On AWS, you can use MFA through the following means: Virtual devices: Application installed on Android, iPhone, or Windows phones Physical devices: Six-digit tokens or OTP cards SMS: Messages received on your phone We have discussed the basic security concepts and how to apply them on a serverless project. If you've enjoyed reading this article, do check out 'Building Serverless Web Applications' to implement signup, sign in, and log out features using Amazon Cognito. Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform Analyzing CloudTrail Logs using Amazon Elasticsearch How to create your own AWS CloudTrail

A serverless online store on AWS could save you money. Build one.

Savia Lobo
14 Jun 2018
9 min read
In this article you will learn to build an entire serverless project of an AWS online store, beginning with a React SPA frontend hosted on AWS followed by a serverless backend with API Gateway and Lambda functions. This article is an excerpt taken from the book, 'Building Serverless Web Applications' written by Diego Zanon. In this book, you will be introduced to the AWS services, and you'll learn how to estimate costs, and how to set up and use the Serverless Framework. The serverless architecture of AWS' online store We will build a real-world use case of a serverless solution. This sample application is an online store with the following requirements: List of available products Product details with user rating Add products to a shopping cart Create account and login pages For a better understanding of the architecture, take a look at the following diagram which gives a general view of how different services are organized and how they interact: Estimating costs In this section, we will estimate the costs of our sample application demo based on some usage assumptions and Amazon's pricing model. All pricing values used here are from mid 2017 and considers the cheapest region, US East (Northern Virginia). This section covers an example to illustrate how costs are calculated. Since the billing model and prices can change over time, always refer to the official sources to get updated prices before making your own estimations. You can use Amazon's calculator, which is accessible at this link: http://calculator.s3.amazonaws.com/index.html. If you still have any doubts after reading the instructions, you can always contact Amazon's support for free to get commercial guidance. Assumptions For our pricing example, we can assume that our online store will receive the following traffic per month: 100,000 page views 1,000 registered user accounts 200 GB of data transferred considering an average page size of 2 MB 5,000,000 code executions (Lambda functions) with an average of 200 milliseconds per request Route 53 pricing We need a hosted zone for our domain name and it costs US$ 0.50 per month. Also, we need to pay US$ 0.40 per million DNS queries to our domain. As this is a prorated cost, 100,000 page views will cost only US$ 0.04. Total: US$ 0.54 S3 pricing Amazon S3 charges you US$ 0.023 per GB/month stored, US$ 0.004 per 10,000 requests to your files, and US$ 0.09 per GB transferred. However, as we are considering the CloudFront usage, transfer costs will be charged by CloudFront prices and will not be considered in S3 billing. If our website occupies less than 1 GB of static files and has an average per page of 2 MB and 20 files, we can serve 100,000 page views for less than US$ 20. Considering CloudFront, S3 costs will go down to US$ 0.82 while you need to pay for CloudFront usage in another section. Real costs would be even lower because CloudFront caches files and it would not need to make 2,000,000 file requests to S3, but let's skip this detail to reduce the complexity of this estimation. On a side note, the cost would be much higher if you had to provision machines to handle this number of page views to a static website with the same availability and scalability. Total: US$ 0.82 CloudFront pricing CloudFront is slightly more complicated to price since you need to guess how much traffic comes from each region, as they are priced differently. 
The following table shows an example of this estimation:

Region: North America - Estimated traffic: 70% - Cost per GB transferred: US$ 0.085 - Cost per 10,000 HTTPS requests: US$ 0.010
Region: Europe - Estimated traffic: 15% - Cost per GB transferred: US$ 0.085 - Cost per 10,000 HTTPS requests: US$ 0.012
Region: Asia - Estimated traffic: 10% - Cost per GB transferred: US$ 0.140 - Cost per 10,000 HTTPS requests: US$ 0.012
Region: South America - Estimated traffic: 5% - Cost per GB transferred: US$ 0.250 - Cost per 10,000 HTTPS requests: US$ 0.022

As we have estimated 200 GB of files transferred with 2,000,000 requests, the total will be US$ 21.97.
Total: US$ 21.97

Certificate Manager pricing
Certificate Manager provides SSL/TLS certificates for free. You only need to pay for the AWS resources you create to run your application.

IAM pricing
There is no charge specifically for IAM usage. You will be charged only for the AWS resources your users consume.

Cognito pricing
Each user has an associated profile that costs US$ 0.0055 per month. However, there is a permanent free tier that allows 50,000 monthly active users without charges, which is more than enough for our use case. Besides that, we are charged for Cognito syncs of our user profiles. It costs US$ 0.15 for each 10,000 sync operations and US$ 0.15 per GB/month stored. If we estimate 1,000 active and registered users with less than 1 MB per profile, with fewer than 10 visits per month on average, we can estimate a charge of US$ 0.30.
Total: US$ 0.30

IoT pricing
IoT charges start at US$ 5 per million messages exchanged. As each page view will make at least two requests, one to connect and another to subscribe to a topic, we can estimate a minimum of 200,000 messages per month. We need to add 1,000 messages if we suppose that 1% of the users will rate the products, and we can ignore other requests like disconnect and unsubscribe because they are excluded from billing. In this setting, the total cost would be US$ 1.01.
Total: US$ 1.01

SNS pricing
We will use SNS only for internal notifications, when CloudWatch triggers a warning about issues in our infrastructure. SNS charges US$ 2.00 per 100,000 e-mail messages, but it offers a permanent free tier of 1,000 e-mails. So, it will be free for us.

CloudWatch pricing
CloudWatch charges US$ 0.30 per metric/month and US$ 0.10 per alarm, and offers a permanent free tier of 50 metrics and 10 alarms per month. If we create 20 metrics and expect 20 alarms in a month, we can estimate a cost of US$ 1.00.
Total: US$ 1.00

API Gateway pricing
API Gateway starts charging US$ 3.50 per million API calls received and US$ 0.09 per GB transferred out to the internet. If we assume 5 million requests per month, with each response averaging 1 KB, the total cost of this service will be US$ 17.93.
Total: US$ 17.93

Lambda pricing
When you create a Lambda function, you need to configure the amount of RAM that will be available for use. It ranges from 128 MB to 1.5 GB. Allocating more memory means additional costs. It breaks the philosophy of avoiding provisioning, but at least it's the only thing you need to worry about. The good practice here is to estimate how much memory each function needs and run some tests before deploying to production. Poor provisioning may result in errors or higher costs. Lambda has the following billing model:
US$ 0.20 per 1 million requests
US$ 0.00001667 per GB-second
Running time is counted in fractions of seconds, rounding up to the nearest multiple of 100 milliseconds. Furthermore, there is a permanent free tier that gives you 1 million requests and 400,000 GB-seconds per month without charges. In our use case scenario, we have assumed 5 million requests per month with an average of 200 milliseconds per execution. We can also assume that the allocated RAM is 512 MB per function:
Request charges: Since 1 million requests are free, you pay for 4 million, which will cost US$ 0.80.
Compute charges: Here, 5 million executions of 200 milliseconds each give us 1 million seconds. As we are running with a 512 MB capacity, this results in 500,000 GB-seconds, of which 400,000 GB-seconds are free, leaving a charge of 100,000 GB-seconds that costs US$ 1.67.
Total: US$ 2.47
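For readers who want to reproduce the Lambda arithmetic above, here is a small Python sketch that plugs in the same assumptions (5 million requests per month, 200 ms per execution, 512 MB of memory) and the mid-2017 prices quoted in the text. It is a back-of-the-envelope check, not an official pricing tool.

```python
# Assumptions taken from the text: 5M requests/month, 200 ms per call, 512 MB RAM.
requests = 5_000_000
free_requests = 1_000_000
price_per_million_requests = 0.20      # US$

duration_s = 0.2
memory_gb = 512 / 1024
free_gb_seconds = 400_000
price_per_gb_second = 0.00001667       # US$

request_cost = (requests - free_requests) / 1_000_000 * price_per_million_requests
gb_seconds = requests * duration_s * memory_gb         # 500,000 GB-seconds
compute_cost = max(gb_seconds - free_gb_seconds, 0) * price_per_gb_second

print(f"requests: US$ {request_cost:.2f}")             # US$ 0.80
print(f"compute:  US$ {compute_cost:.2f}")             # US$ 1.67
print(f"total:    US$ {request_cost + compute_cost:.2f}")  # US$ 2.47
```

Swapping in your own traffic, duration, and memory figures is usually the quickest way to see how sensitive the bill is to each assumption.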
SimpleDB pricing
Take a look at the following SimpleDB billing, where the free tier is valid for new and existing users:
US$ 0.14 per machine-hour (25 hours free)
US$ 0.09 per GB transferred out to the internet (1 GB is free)
US$ 0.25 per GB stored (1 GB is free)
Take a look at the following charges:
Compute charges: Considering 5 million requests with an average of 200 milliseconds of execution time, where 50% of this time is spent waiting for the database engine to execute, we estimate 139 machine hours per month. Discounting the 25 free hours, we have an execution cost of US$ 15.96.
Transfer costs: Since we'll transfer data between SimpleDB and AWS Lambda, there is no transfer cost.
Storage charges: If we assume a 5 GB database, this results in US$ 1.00, since 1 GB is free.
Total: US$ 16.96, but this will not be added to the final estimation since we will run our application using DynamoDB.

DynamoDB
DynamoDB requires you to provision the throughput capacity that you expect your tables to offer. Instead of provisioning hardware, memory, CPU, and other factors, you need to say how many read and write operations you expect, and AWS will handle the necessary machine resources to meet your throughput needs with consistent and low-latency performance. One read capacity unit represents one strongly consistent read per second or two eventually consistent reads per second, for objects up to 4 KB in size. Regarding write capacity, one unit means that you can write one object of size 1 KB per second. Considering these definitions, AWS offers a permanent free tier of 25 read units and 25 write units of throughput capacity, in addition to 25 GB of free storage. It charges as follows:
US$ 0.47 per month for every Write Capacity Unit (WCU)
US$ 0.09 per month for every Read Capacity Unit (RCU)
US$ 0.25 per GB/month stored
US$ 0.09 per GB transferred out to the internet
Since our estimated database will have only 5 GB, we are within the free tier, and we will not pay for transferred data because there is no transfer cost to AWS Lambda. Regarding read/write capacities, we have estimated 5 million requests per month. If we distribute them evenly, we get two requests per second. In this case, we will consider that as one read and one write operation per second. We now need to estimate how many objects are affected by a read and a write operation. For a write operation, we can estimate that we will manipulate 10 items on average, and a read operation will scan 100 objects. In this scenario, we would need to reserve 10 WCU and 100 RCU. As we have 25 WCU and 25 RCU for free, we only need to pay for 75 RCU per month, which costs US$ 6.75.
Total: US$ 6.75

Total pricing
Let's summarize the cost of each service in the following table:

Route 53: US$ 0.54
S3: US$ 0.82
CloudFront: US$ 21.97
Cognito: US$ 0.30
IoT: US$ 1.01
CloudWatch: US$ 1.00
API Gateway: US$ 17.93
Lambda: US$ 2.47
DynamoDB: US$ 6.75
Total: US$ 52.79

This results in a total cost of roughly US$ 50 per month in infrastructure to serve 100,000 page views.
If you have a conversion rate of 1%, you get 1,000 sales per month, which means that you pay US$ 0.05 in infrastructure for each product that you sell. Thus, in this article you learned about the serverless architecture of an AWS online store and how to estimate its costs. If you've enjoyed reading this excerpt, do check out 'Building Serverless Web Applications' to monitor the performance, efficiency and errors of your apps and also learn how to test and deploy your applications.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Serverless computing wars: AWS Lambdas vs Azure Functions
Using Amazon Simple Notification Service (SNS) to create an SNS topic

5 reasons why your business should adopt cloud computing

Vijin Boricha
11 Jun 2018
6 min read
Businesses are shifting their focus to using existing technology to accomplish their 2018 business targets. Although cloud services have been around for a while, many organisations hesitated to make the move. But recent enhancements such as cost-effectiveness, portability, agility, and faster connectivity have grabbed the attention of new and lesser-known organisations alike. So, if your organization is looking for ways to achieve greater heights and you are exploring healthy investments that benefit your organisation, your first choice should be cloud computing, as the on-premises server system is fading away. You don't need any proof to agree that cloud computing is playing a vital role in changing the way businesses work today. Organizations have started looking at cloud options to widen their businesses' reach (read revenue, growth, sales) and to run more efficiently (read cost savings, bottom line, ROI). There are three major cloud options that growing businesses can look at:
Public cloud
Private cloud
Hybrid cloud
A Gartner report states that by the year 2020, big vendors will shift from cloud-first to cloud-only policies. If you are wondering what could fuel this predicted rise in cloud adoption, look no further. Below are some factors contributing to this trend of businesses adopting cloud computing.

Cloud offers increased flexibility
One of the most beneficial aspects of adopting cloud computing is its flexibility, no matter the size of the organization or the location where your employees are placed. Cloud computing comes with a wide range of options, from modifying storage space to supporting both in-office and remotely located employees. This makes it easy for businesses to increase and decrease server loads, along with providing employees with the benefit of working from anywhere at any time with zero timezone restrictions. Cloud computing services, in a way, help businesses focus on revenue growth rather than spending time and resources on building hardware and software capabilities.

Cloud computing is cost-effective
Cloud-backed businesses definitely benefit on cost, as there is no need to maintain expensive in-house servers and other expensive devices, given that everything is handled on the cloud. If you want your business to grow, you just need to spend on storage space and pay for the services you use. Cost transparency helps organizations plan their expenditure, and pay-per-use is one of the biggest advantages businesses can leverage. With cloud adoption you eliminate spending on increasing processing power, hard drive space or building a large data center. When there are fewer hardware facilities to manage, you do not need a large IT team to handle them. Software licensing costs are automatically eliminated, as the software is already stored on the cloud and businesses have the option of paying as per their use.

Scalability is easier with cloud
The best part about cloud computing is its support for unpredicted requirements, which helps businesses scale or downsize resources quickly and efficiently. It's all about modifying your subscription plan, which allows you to upgrade your storage or bandwidth plans as per your business needs. This kind of scalability helps increase business performance and minimizes the risk of up-front investments in operational issues and maintenance.

Better availability means less downtime and better productivity
With cloud adoption you need not worry about downtime, as cloud services are reliable and maintain close to 100% uptime.
This means whatever you host on the cloud is available to your customers at any point. For every server breakdown, the cloud service providers make sure a backup server is in place to avoid losing essential data. This can barely be achieved by traditional on-premises infrastructure, which is another reason businesses should switch to the cloud. All of the above-mentioned mechanisms make it easy to share files and documents with teammates, thanks to flexible accessibility. Teams can collaborate more effectively when they can access documents anytime and anywhere. This obviously improves workflow and gives businesses a competitive edge. Being present in the office to complete tasks is no longer a requirement for productivity; a better work/life balance is an added side effect of such an arrangement. In short, you need not worry about operational disasters, and you can get the job done without physically being present in the office.

Automated backups
One major problem with an on-premises data center is that everything depends on the functioning of your physical systems. In cases where you lose your device or some kind of disaster befalls your physical system, it may lead to loss of data as well. This is never the case with the cloud, as you can access your files and documents from any device or location, no matter which physical device you use. Organizations have to bear a massive expense for regular backups, whereas cloud computing comes with automatic backups and provides enterprise-grade functioning to businesses of all sizes. If you're thinking about data security, the cloud is a safer option, as each of the cloud computing variants (private, public, and hybrid) has its own set of benefits. If you are not dealing with sensitive data, choosing a public cloud would be the best option, whereas for sensitive data, businesses should opt for a private cloud where they have total control of the security policies. On the other hand, a hybrid cloud allows you to benefit from both worlds. So, if you are looking for scalable solutions along with a much more controlled architecture for data security, a hybrid cloud architecture will blend well with your business needs. It allows users to pick and choose the public or private cloud services they require to fulfill their business requirements.

Migrating your business to the cloud definitely has more advantages than disadvantages. It helps increase organizational efficiency and fuels business growth. Cloud computing helps reduce time-to-market, facilitates product development, keeps employees happy, and builds a desired workflow. This, in the end, helps your organisation achieve greater success. It doesn't hurt that the cost you saved is now available for you to invest in areas that are in dire need of some cash inflow!

Read Next:
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Serverless computing wars: AWS Lambdas vs Azure Functions
How machine learning as a service is transforming cloud

Why Alibaba cloud could be the dark horse in the public cloud race

Gebin George
07 Jun 2018
3 min read
The public cloud market seems to be heavily dominated by industry giants from the west, like Amazon Web Services (AWS) and Microsoft Azure. One of China's tech giants, Alibaba Cloud, has entered the public cloud market recently and seems to be catching up pretty quickly with its Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service offerings. According to reports, in December 2017 Alibaba Cloud witnessed 56% YoY growth, with revenue of around 12.8 billion USD. It is expected to do even better in the Q2 2018 report, where the market size will increase by a sizeable amount. Alibaba Cloud is already leading China's cloud market share. It provides around 100 core services, with data centers spread across 17 regions in total. Some of the standout features of Alibaba Cloud include:

Elastic Computing
The ECS services of Alibaba Cloud are highly scalable, quick, and powerful, backed by high-end Intel CPUs that bring latency down to deliver impressive results. They come with an extra security layer for protecting applications from DDoS and Trojan attacks. The services involved here include ECS, container services, autoscaling, and so on.

Networking
Alibaba Cloud enables you to build hybrid and distributed networks, ideal for enterprises which demand wide network coverage. This networking involves communication between two VPCs and communication between VPCs and IDCs.

Security
It has built-in anti-DDoS management and security assessment services. This definitely reduces the cost of hiring and training quality security engineers to analyze and manage security services and data breaches.

Storage and CDN
Alibaba Cloud's OSS (Object Storage Service) helps you store, back up, and archive huge amounts of data on the cloud. This service is absolutely flexible: you only pay as per your usage, and there are no additional costs involved.

Analytics
It comprises a wide range of analytics services like business analytics, data processing, stream analytics, and so on. Services like Elastic MapReduce, Apache Hadoop, and Apache Spark can be run easily on Alibaba Cloud for efficient cloud analytics.

For detailed products and services from Alibaba Cloud, refer to their official site. AWS and Azure have been dominating the public cloud market with an array of services that changed as per market requirements. Considering the recent advancements in Alibaba Cloud and its affordable and highly competitive price range, Alibaba joins the others in the race to dominate the public cloud market.

Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence
How to create your own AWS CloudTrail
Google announces the largest overhaul of their Cloud Speech-to-Text

article-image-why-aws-is-the-prefered-cloud-platform-for-developers-working-with-big-data
Savia Lobo
07 Jun 2018
4 min read
Save for later

Why AWS is the preferred cloud platform for developers working with big data

Savia Lobo
07 Jun 2018
4 min read
The cloud computing revolution has well and truly begun. But the market is fiercely competitive: a handful of cloud vendors are leading the pack, and it's not easy to say which one is best. AWS, Google Cloud Platform, Microsoft Azure, and Oracle are leading the way when it comes to modern cloud-based infrastructure, and it's hard to separate them.

Big data is in high demand because businesses can draw useful insights from it. Organizations carry out advanced analytics to get a deep, exploratory perspective on their data. Once that analysis is done, BI tools such as Tableau, Microsoft Power BI, Qlik Sense, and so on are used to build dashboard visualizations, reports, performance management metrics, and the like that make the analytics actionable. This is how analytics and BI tools help get the best out of big data. In this year's Skill Up survey, one frontrunner emerged for developers: AWS.

Source: Packt Skill Up Survey

Let's talk AWS

Amazon is said to outplay every other cloud platform player in the market. AWS provides its customers with a highly robust infrastructure and commendable security options. In its inception year, 2006, AWS already had more than 150,000 developers signed up to use its services, as Amazon announced in a press release that year. In a recent survey conducted by Synergy Research, AWS is among the top cloud platform providers, with a 35% market share. Its top customers include NASA, Netflix, Adobe Systems, Airbnb, and many more. Cloud technology is no longer a new and emerging trend; it has truly become mainstream. What sets AWS on a different plateau is that it has caught developers' attention with its impressive suite of developer tools. It's a cloud platform designed with continuous delivery and DevOps in mind.

AWS: Every developer's den

Once you're an AWS member, you can experience the hundreds of different services it offers, from core computation and content delivery networks to IoT and game development platforms. If you're wondering how to pay for all that you have used, don't worry: AWS offers its complete package of solutions across six modes of payment. It also offers hundreds of templates in every major programming language to move your choice of project along. The pay-as-you-go model enables customers to use only the features they require, avoiding unnecessary purchases of resources that would add no value to the business.

Security on AWS is something users appreciate. AWS' configuration options, management policies, and reliable security are the reasons one can easily trust its cloud services. AWS has layers of security encryption that enable high-end protection of user data. It also manages user privileges using IAM (Identity and Access Management) roles, which restricts the resources a user can access and greatly reduces malpractice.

AWS provides developers with auto scaling, one of the most important features every developer needs. With Auto Scaling, routine capacity management runs on autopilot; AWS takes care of it, so developers can instead focus on processes, development, and programming.

The free tier within AWS runs an Amazon EC2 instance and includes S3 storage, EC2 compute hours, Elastic Load Balancer time, and so on. This enables developers to try the AWS APIs within their own software to enhance it further (a short sketch of such a call follows this paragraph).
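To give a flavour of trying the AWS API from code, here is a minimal sketch that uploads a data file to S3 with the boto3 SDK and then lists the bucket's contents. The bucket and file names are hypothetical, and credentials are assumed to be configured via the AWS CLI or environment variables; this is an illustration, not the article's own example.

```python
import boto3

BUCKET = "skillup-demo-bucket"  # hypothetical bucket name

s3 = boto3.client("s3")

# Upload a local data file to S3 (covered by the free tier up to its limits).
s3.upload_file("sales-data.csv", BUCKET, "raw/sales-data.csv")

# List what is in the bucket to confirm the upload landed.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix="raw/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Pushing raw data into S3 like this is typically the first step of a big data pipeline on AWS, since services such as EMR can then read it directly from the bucket.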
AWS cuts down the deployment time required to provision a web server. By using Amazon Machine Images, one can have a machine deployed and ready to accept connections in a short time (see the sketch after this article's links). Amazon's logo says it all: it provides A-to-Z services under one roof for developers, businesses, and general users. Each service is tailored to serve a different purpose and runs on dedicated, specialized hardware. Developers can easily choose Amazon for their development needs with its pay-as-you-go model and make the most of it without buying anything up front. Although there are other service providers such as Microsoft Azure, Google Cloud Platform, and so on, Amazon offers functionality the others are yet to match.

Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider

How to secure ElasticCache in AWS

How to create your own AWS CloudTrail
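As a rough sketch of the AMI-based provisioning mentioned above, the following boto3 snippet launches a single EC2 instance from a machine image. The AMI ID, key pair name, and instance type are placeholders for illustration, not values from the article.

```python
import boto3

ec2 = boto3.resource("ec2")

# Launch one small instance from a (hypothetical) pre-baked web server AMI.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",           # free-tier eligible size
    KeyName="my-key-pair",             # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()  # block until the machine is up and reachable
instance.reload()              # refresh attributes such as the public DNS name
print("Web server instance running at", instance.public_dns_name)
```

Because the AMI already contains the operating system and web server software, the time from this call to a machine that accepts connections is minutes rather than the hours a manual build would take.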