
Tech Guides - Cloud & Networking

65 Articles

8 ways Artificial Intelligence can improve DevOps

Prasad Ramesh
01 Sep 2018
6 min read
DevOps combines development and operations in an agile manner. ITOps refers to network infrastructure, computer operations, and device management. AIOps is artificial intelligence applied to ITOps, a term coined by Gartner. It makes us wonder what AI applied to DevOps would look like. Currently, there are some problem areas in DevOps that mainly revolve around data: accessing the large pool of data, acting on it, managing alerts, and so on. Moreover, there are errors caused by human intervention. AI works heavily with data and can help improve DevOps in numerous ways. Before we get into how AI can improve DevOps, let's take a look at some of the problem areas in DevOps today.

The trouble with DevOps

Human errors: When testing or deployment is performed manually and there is an error, it is hard to reproduce and fix. Software development is often outsourced, and in such cases there is a lack of coordination between the dev and ops teams.

Environment inconsistency: Software functionality breaks when the code moves to different environments, as each environment has different configurations. Teams can waste a lot of time chasing bugs when the software works fine in one environment but not in another.

Change management: Many companies have change management processes well in place, but they are outdated for DevOps. The time taken for reviews, approving a new module, and so on is manual and proves to be a bottleneck. Changes happen frequently in DevOps, and functioning suffers because of old processes.

Monitoring: Monitoring is key to ensuring smooth functioning in agile. Many companies do not have the expertise to monitor the pipeline and infrastructure. Moreover, monitoring only the infrastructure is not enough; application performance also needs to be monitored, solutions need to be logged, and analytics need to be tracked.

Now let's take a look at 8 ways AI can improve DevOps given the above context.

1. Better data access

One of the most critical issues faced by DevOps teams is the lack of unregulated access to data. There is also a large amount of data, yet teams rarely view all of it and instead focus on the outliers. The outliers only work as an indicator and do not give robust information. Artificial intelligence can compile and organize data from multiple sources for repeated use. Organized data is much easier to access and understand than heaps of raw data. This helps with predictive analysis and eventually a better decision-making process. This is very important and enables many of the other ways listed below.

2. Superior implementation efficiency

Artificially intelligent systems can work with minimal or no human intervention. Currently, a rules-based environment managed by humans is followed in DevOps teams. AI can transform this into self-governed systems and greatly improve operational efficiency. There are limits to the volume and complexity of analysis a human can perform. Given the large volumes of data to be analyzed and processed, AI systems, which are good at exactly this, can set optimal rules to maximize operational efficiency.

3. Root cause analysis

Conducting root cause analysis is very important to fix an issue permanently. Not getting to the root cause allows it to persist and affect other areas further down the line. Often, engineers don't investigate failures in depth and are more focused on getting the release out. This is not surprising given the limited amount of time they have to work with. If fixing a superficial area gets things working, the root cause is never found. AI can take all the data into account and see patterns between activity and cause to find the root cause of a failure.

4. Automation

Complete automation is a problem in DevOps; many tasks are routine yet still need to be done by humans. An AI model can automate these repeatable tasks and speed up the process significantly. A well-trained model increases the scope and complexity of the tasks that can be automated by machines. AI can help minimize human intervention so that developers can focus on more complex, interactive problems. Complete automation also allows errors to be reproduced and fixed promptly.

5. Reduce operational complexity

AI can be used to simplify operations by providing a unified view. An engineer can view all the alerts and relevant data produced by the tools in a single place. This improves on the current scenario, where engineers have to switch between different tools to manually analyze and correlate data. Alert prioritization, root cause analysis, and evaluating unusual behavior are complex, time-consuming tasks that depend on data. An organized, singular view is a great benefit when looking up data as required.

"AI and machine learning makes it possible to get a high-level view of the tool-chain, but at the same time zoom in when it is required." - SignifAI

6. Predicting failures

A critical failure in a particular tool or area in DevOps can cripple the process and delay cycles. With enough data, machine learning models can predict when an error can occur. This goes beyond simple predictions. If a fault is known to produce certain readings, AI can read the patterns and predict the signs of failure. AI can see indicators that humans may not be able to. Such early failure prediction and notification enable the team to fix the issue before it can affect the software development life cycle (SDLC).

7. Optimizing a specific metric

AI can work towards solutions where uptime is maximized. An adaptive machine learning system can learn how the system works and improve it. Improving could mean tweaking a specific metric in the workflow for optimized performance. Configurations can be changed by AI for optimal performance as required during different production phases. Real-time analysis plays a big part in this.

8. Managing alerts

DevOps systems can be flooded with alerts that are hard for humans to read and act upon. AI can analyze these alerts in real time and categorize them. Assigning priority to alerts helps teams work on fixing them rather than going through a long list. The alerts can simply be tagged with a common ID for specific areas, or AI can be trained to classify good and bad alerts. Prioritizing alerts so that the most important flaws are surfaced first helps keep things running smoothly.

Conclusion

As we saw, most of these areas depend heavily on data, so getting the system right to enhance data accessibility is the first step to take. Predictions work better when data is organized, and performing root cause analysis is also easier. Automation can take over mundane tasks and allow engineers to focus on more interactive problems that machines cannot handle. With machine learning, the overall operational efficiency, simplicity, and speed of DevOps teams can be improved.
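To make point 8 a little more concrete, here is a minimal sketch of training a classifier to score incoming alerts so the most likely actionable ones surface first. It uses scikit-learn; the alert fields, feature values, and tiny training history are invented for illustration, and a real pipeline would train on its own alert schema and history.

```python
# Minimal sketch: separating actionable alerts from noise and scoring new
# ones, as described in point 8. The alert fields and training data below
# are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

# Historical alerts, each labelled 1 (needed action) or 0 (noise).
history = [
    ({"source": "disk", "severity": 3, "repeats_last_hour": 12}, 1),
    ({"source": "cpu", "severity": 1, "repeats_last_hour": 1}, 0),
    ({"source": "network", "severity": 2, "repeats_last_hour": 7}, 1),
    ({"source": "cpu", "severity": 1, "repeats_last_hour": 2}, 0),
]

features, labels = zip(*history)
vectorizer = DictVectorizer(sparse=False)   # one-hot encodes the "source" field
X = vectorizer.fit_transform(features)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, labels)

# Score an incoming alert; the probability of class 1 can serve as a priority.
new_alert = {"source": "disk", "severity": 3, "repeats_last_hour": 9}
priority = model.predict_proba(vectorizer.transform([new_alert]))[0][1]
print(f"Alert priority score: {priority:.2f}")
```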
Read more:
Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
Top 7 DevOps tools in 2018
GitLab's new DevOps solution


Twilio WhatsApp API: A great tool to reach new businesses

Amarabha Banerjee
15 Aug 2018
3 min read
The trend in the last few years has indicated that businesses want to talk to their customers in the same way they communicate with their friends and family. This enables them to cater to customers' specific needs and to create customer-centric products. Twilio, a cloud communications platform, has been at the forefront of creating messaging solutions for businesses. Recently, Twilio enabled developers to integrate SMS and calling facilities into their applications using the Twilio Web Services API. Over the last decade, Twilio customers have used Programmable SMS to build innovative messaging experiences for their users, whether it is sending instant transaction notifications for money transfers, food delivery alerts, or helping millions of people with their parking tickets. The latest feature added to the Twilio API integrates WhatsApp messaging into the application and manages messages and WhatsApp contacts with a business account.

Why is the Twilio WhatsApp integration so significant?

WhatsApp is currently one of the most popular instant messaging apps in the world. Every day, 30 million messages are exchanged using WhatsApp. A visualization of WhatsApp's popularity across different countries accompanies the original article (source: Twilio). Integrating WhatsApp communication into business applications means greater flexibility and the ability to reach a larger segment of the audience.

How is it done?

Integrating directly with the WhatsApp messaging network requires hosting, managing, and scaling containers in your own cloud infrastructure. This can be a tough task for any developer or business with a different end objective and a limited budget. The Twilio API makes it easier for you. WhatsApp delivers end-to-end message encryption through containers. These containers manage encryption keys and messages between the business and users. The containers need to be hosted in multiple regions for high availability and to scale efficiently as messaging volume grows. Twilio solves this problem for you with a simple and reliable REST API.

Other failsafe messaging features can also be implemented easily using the Twilio API, such as user opt-out from WhatsApp messages, automatic switching to SMS messaging in the absence of a data network, and shifting to another messaging service in regions where WhatsApp is absent. Also, you do not have to use separate APIs to connect with different messaging services like Facebook Messenger, MMS, RCS, LINE, and so on, as all of them are available within this API.

WhatsApp is taking things at a slower pace currently. It initially allows you to develop a test application using the Twilio Sandbox for WhatsApp. This lets you test your application first and send messages to a limited number of users only. Once your app is production ready, you can create a WhatsApp business profile and get a dedicated Twilio number to work with WhatsApp (source: Twilio).

With the added feature, Twilio lets you set aside the maintenance burden of building a separate WhatsApp integration service. Twilio takes care of the cloud containers and the security aspects of the application. It gives developers an opportunity to focus on creating customer-centric products and communicating with customers easily and efficiently.
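To give a feel for what the integration looks like in code, here is a minimal sketch of sending a WhatsApp message through the Twilio Sandbox for WhatsApp using the twilio Python helper library. The credentials are read from the environment, and both phone numbers are placeholders: in the sandbox, the from number is the sandbox number Twilio assigns to you, and the to number must have joined your sandbox first.

```python
# Minimal sketch: sending a WhatsApp message via the Twilio Sandbox for
# WhatsApp with the twilio Python helper library (pip install twilio).
# Credentials and both phone numbers below are placeholders.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

message = client.messages.create(
    from_="whatsapp:+14155238886",   # your Twilio sandbox number
    to="whatsapp:+15551234567",      # a number that has joined your sandbox
    body="Your order has shipped and should arrive tomorrow.",
)
print(f"Queued message SID: {message.sid}")
```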
Read more:
Make phone calls and send SMS messages from your website using Twilio
Securing your Twilio App
Building a two-way interactive chatbot with Twilio: A step-by-step guide


Types of Cloud Computing Services: IaaS, PaaS, and SaaS

Amey Varangaonkar
07 Aug 2018
4 min read
Cloud computing has risen massively in popularity in recent times. This is due to the way it reduces on-premises infrastructure cost and improves efficiency. Primarily, the cloud model has been divided into three major service categories:

Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)

We will discuss each of these in the following sections. This article is an excerpt taken from the book 'Cloud Analytics with Google Cloud Platform', written by Sanket Thodge.

Infrastructure as a Service (IaaS)

Infrastructure as a Service provides infrastructure such as servers, virtual machines, networks, operating systems, storage, and much more on a pay-as-you-use basis. IaaS providers offer VMs from small to extra-large machines, and IaaS gives you complete freedom in choosing the instance type as per your requirements. Common cloud vendors providing IaaS services are Google Cloud Platform, Amazon Web Services, IBM, and HP Public Cloud.

Platform as a Service (PaaS)

The PaaS model is similar to IaaS, but it also provides additional tools such as database management systems, business intelligence services, and so on. Cloud platforms providing PaaS services include Windows Azure, Google App Engine, Cloud Foundry, and Amazon Web Services.

Software as a Service (SaaS)

Software as a Service (SaaS) lets users connect to products through the internet (or sometimes helps them build an in-house, private cloud solution) on a subscription basis. Some cloud vendors providing SaaS are Google Applications, Salesforce, Zoho, and Microsoft Office 365.

Differences between SaaS, PaaS, and IaaS

The major differences between these models can be summarized as follows:

Software as a Service (SaaS): A model in which a third-party provider hosts multiple applications and lets customers use them over the internet. SaaS is a very useful pay-as-you-use model. Examples: Salesforce, NetSuite.

Platform as a Service (PaaS): A model in which a third-party provider offers an application development platform and services built on its own infrastructure. Again, these tools are made available to customers over the internet. Examples: Google App Engine, AWS Lambda.

Infrastructure as a Service (IaaS): A model in which a third-party provider offers servers, storage, compute resources, and so on, and makes them available for customers to utilize. Customers can use IaaS to build their own PaaS and SaaS services for their own customers. Examples: Google Cloud Compute, Amazon S3.

How PaaS, IaaS, and SaaS are separated at a service level

In this section, we are going to learn how we can separate IaaS, PaaS, and SaaS at the service level. Think of the full stack a typical server requires as a list of layers: Application, Data, Runtime, Framework, Operating System, Server, Disk, and Network Stack. When we run everything ourselves (pure operations, or OPS), we have to consider all of these layers. When we move to the cloud and decide to go with IaaS, we are no longer bothered about the server, disk, and network stack, so the headache of handling the hardware is no longer with us. That's why it is called Infrastructure as a Service. Now, if we think of PaaS, we also no longer worry about the runtime, framework, and operating system, along with the components covered by IaaS; the only things we need to focus on are the application and the data. And the last model is SaaS, Software as a Service. In this model, we are not concerned about practically anything: the only things we need to work on are the code and a look at the bill. It's that simple!
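As a quick illustration of that service-level split, here is a small Python sketch that encodes which layers of the stack you still manage under each model. The layer names follow the article; the exact cut-off points are the conventional ones and are simplified for illustration.

```python
# Illustrative sketch of the service-level separation described above: which
# layers of the stack the customer still manages under each model. The layer
# names follow the article; the cut-off points are the conventional ones.
STACK = ["Application", "Data", "Runtime", "Framework",
         "Operating System", "Server", "Disk", "Network Stack"]

# Index into STACK from which the provider takes over responsibility.
PROVIDER_TAKES_OVER_FROM = {
    "On-premises": len(STACK),              # you manage everything
    "IaaS": STACK.index("Server"),          # provider: server, disk, network
    "PaaS": STACK.index("Runtime"),         # provider: runtime and below
    "SaaS": 0,                              # provider manages the whole stack
}

def layers_you_manage(model):
    """Return the layers the customer is still responsible for."""
    return STACK[:PROVIDER_TAKES_OVER_FROM[model]]

for model in ("On-premises", "IaaS", "PaaS", "SaaS"):
    managed = ", ".join(layers_you_manage(model)) or "nothing"
    print(f"{model:12s} -> you manage: {managed}")
```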
If you found the above excerpt useful, make sure to check out the book 'Cloud Analytics with Google Cloud Platform' for more of such interesting insights into Google Cloud Platform.

Read more:
Top 5 cloud security threats to look out for in 2018
Is cloud mining profitable?
Why AWS is the preferred cloud platform for developers working with big data?


Demystifying Clouds: Private, Public, and Hybrid clouds

Amey Varangaonkar
03 Aug 2018
5 min read
Cloud computing is as much about learning the architecture as it is about the different deployment options we have. We need to know the different ways our cloud infrastructure can be kept open to the world, and whether we want to restrict it. In this article, we look at the three major cloud deployment models available to us today:

Private cloud
Public cloud
Hybrid cloud

We will look at each of these separately. The following excerpt has been taken from the book 'Cloud Analytics with Google Cloud Platform' written by Sanket Thodge.

Private cloud

Private cloud services are built specifically when companies want to keep everything to themselves. They give users customization in choosing hardware, software options, and storage. A private cloud typically works as a central data center for internal end users, and this model reduces dependencies on external vendors. Enterprise users accessing this cloud may or may not be billed for utilizing the services. A private cloud changes how an enterprise decides the architecture of the cloud and how it is going to apply it in its infrastructure. Administration of a private cloud environment can be carried out by internal or outsourced staff. Common private cloud technologies and vendors include the following:

VMware: https://cloud.vmware.com
OpenStack: https://www.openstack.org
Citrix: https://www.citrix.co.in/products/citrix-cloud
CloudStack: https://cloudstack.apache.org
Go Grid: https://www.datapipe.com/gogrid

With a private cloud, the same organization acts as both the cloud consumer and the cloud provider, as the infrastructure is built by them and the consumers are from the same enterprise. In order to differentiate these roles, a separate organizational department typically assumes responsibility for provisioning the cloud and therefore takes the cloud provider role, whereas the departments requiring access to this established private cloud take the role of the cloud consumer.

Public cloud

In a public cloud deployment model, a third-party cloud service provider provides the cloud service over the internet. Public cloud services are sold on demand, typically by the minute or hour. If you want, you can also go for a long-term commitment, for up to five years in some cases, such as renting a virtual machine. In the case of renting a virtual machine, customers pay for the duration, storage, or bandwidth that they consume (this might vary from vendor to vendor). Major public cloud service providers include:

Google Cloud Platform: https://cloud.google.com
Amazon Web Services: https://aws.amazon.com
IBM: https://www.ibm.com/cloud
Microsoft Azure: https://azure.microsoft.com
Rackspace: https://www.rackspace.com/cloud

Hybrid cloud

The next and last cloud deployment type is the hybrid cloud. A hybrid cloud is an amalgamation of public cloud services (the likes of GCP, AWS, and Azure) and an on-premises private cloud built by the respective enterprise. Both on-premises and public cloud have their roles here: on-premises is more for mission-critical applications, whereas the public cloud manages spikes in demand. Automation is enabled between both environments. The major benefit of a hybrid cloud is to create a uniquely unified, superbly automated, and highly scalable environment that takes advantage of everything a public cloud infrastructure has to offer, while still maintaining control over mission-critical data. Some common hybrid cloud examples include:

Hitachi hybrid cloud: https://www.hitachivantara.com/en-us/solutions/hybrid-cloud.html
Rackspace: https://www.rackspace.com/en-in/cloud/hybrid
IBM: https://www.ibm.com/it-infrastructure/z/capabilities/hybrid-cloud
AWS: https://aws.amazon.com/enterprise/hybrid

Differences between the private cloud, hybrid cloud, and public cloud models

The following summarizes the differences between the three cloud deployment models:

Private cloud. Definition: a cloud computing model in which an enterprise uses its own proprietary software and hardware, limited to its own data center; the servers, cooling system, and storage all belong to the company. Characteristics: single-tenant architecture, on-premises hardware, direct control of the hardware. Vendors: HPE, VMware, Microsoft, OpenStack.

Hybrid cloud. Definition: a mixture of private and public cloud; a few components run on-premises in the private cloud, connected to other services on the public cloud with proper orchestration. Characteristics: cloud bursting capacity, advantages of both public and private cloud. Vendors: combinations of public and private cloud providers.

Public cloud. Definition: a third-party company lets us use its infrastructure for a given period of time on a pay-as-you-use model; the general public can access the infrastructure and no in-house servers need to be maintained. Characteristics: freedom to choose services from multiple vendors, pay-per-use model, multi-tenant model. Vendors: Google Cloud Platform, Amazon Web Services, Microsoft Azure.

We saw that the three models are quite distinct from each other, each bringing a specialized functionality to a business depending on its needs. If you found the above excerpt useful, make sure to check out the book 'Cloud Analytics with Google Cloud Platform' for more information on GCP and how you can perform effective analytics on your data using it.

Read more:
Why Alibaba cloud could be the dark horse in the public cloud race
Is cloud mining profitable?
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)


Tech’s culture war: entrepreneur egos v. engineer solidarity

Richard Gall
12 Jul 2018
10 min read
There is a rift in the tech landscape that has been shifting quietly for some time. But 2018 is the year it has finally properly opened. This is the rift between tech's entrepreneurial 'superstars' and a nascent solidarity movement, both of which demonstrate the two faces of the modern tech industry. But within this 'culture war' there's a broader debate about what technology is for and who has the power to make decisions about it. And that can only be a good thing - this is a conversation we've needed for some time. With the Cambridge Analytica scandal, and the shock election results to which it was tied, much contemporary political conversation is centered on technology's impact on the social sphere. But little attention has been paid to the way these social changes or crises are actually enforcing changes within the tech industry itself. If it feels like we're all having to pick sides when it comes to politics, the same is true when it comes to tech.

The rise of the tech ego

If you go back to the early years of software, in the early part of the twentieth century, there was little place for ego. It's no accident that during this time computing was feminized - it was widely viewed as administrative. It was only later that software became more male dominated, thanks to a sexist cultural drive to establish male power in the field. This was arguably the start of ego's takeover of tech - after all, men wanted their work to carry a certain status. Women had to be pushed out to give it to them. It's no accident that the biggest names in technology - Bill Gates, Steve Wozniak, Steve Jobs - are all men. Their rise was, in part, a consequence of a cultural shift in the sixties. But it's worth recognising that in the eighties, these were still largely faceless organizations. Yes, they were powerful men, but the organizations they led were really just the next step out from the military industrial complex that helped develop software as we know it today. It was only when 'tech' properly entered the consumer domain that ego took on a new value. As PCs became part of everyday life, attaching these products to interesting and intelligent figures was a way of marketing them. It's worth remarking that it isn't really important whether these men had huge egos at all. All that matters is that they were presented in that way, and granted an incredible amount of status and authority. This meant that the complexity of software and the literal labor of engineering could be reduced to a relatable figure like Gates or Jobs. We can still feel the effects of that today: just think of the different ways Apple and Microsoft products are perceived. Tech leaders personify technology. They make it marketable. Perhaps tech 'egos' were weirdly necessary. Because technology was starting to enter into everyone's lives, these figures - as much entrepreneurs as engineers - were able to make it accessible and relatable. If that sounds a little far-fetched, consider what the tech 'ninja' or the 'guru' really means for modern businesses. It often isn't so much about doing something specific, but instead about making the value and application of those technologies clear, simple, and understandable. When companies advertise for these roles using this sort of language, they're often trying to solve an organizational problem as much as a technical one. That's not to say that being a DevOps guru at some middling eCommerce company is the same as being Bill Gates. But it is important to note how we started talking in this way. Similarly, not everyone who gets called a 'guru' is going to have a massive ego (some of my best friends are cloud gurus!), but this type of language does encourage a selfish and egotistical type of thinking. And as anyone who's worked in a development team knows, that can be incredibly dangerous.

From Zuckerberg to your sprint meeting - egos don't care about you

Today, we are in a position where the discourse of gurus and ninjas is getting dangerous. This is true on a number of levels. On the one hand, we have a whole new wave of tech entrepreneurs. Zuckerberg, Musk, Kalanick, Chesky - these people are Gates and Jobs for a new generation. For all their innovative thinking, it's not hard to discern a certain entitlement from all of these men. Just look at Zuckerberg and his role in the Cambridge Analytica scandal. Look at Musk and his bizarre intervention in Thailand. Kalanick's sexual harassment scandal might be personal, but it reflects a selfish entitlement that has real professional consequences for his workforce. Okay, so that's just one extreme - but these people become the images of how technology should work. They tell business leaders and politicians that tech is run by smart people who ostensibly should be trusted. This has an impact not only on our civic lives but on our professional lives too. Ever wonder why your CEO decides to spend big money on a CTO? It's because this is the model of modern tech. That then filters down to you and the projects you don't have faith in. If you feel frustrated at work, think of how these ideas and ways of describing things cascade down to what you do every day. It might seem small, but it does exist.

The emergence of tech worker solidarity

While all that has been happening, we've also seen a positive political awakening across the tech industry. As the egos come to dictate the way we work, what we work on, and who feels the benefits, a large group of engineers are starting to realize that maybe this isn't the way things should be.

Disaffection in Silicon Valley

This year in Silicon Valley, worker protests against Amazon, Microsoft and Google have all had an impact on the way their companies are run. We don't necessarily hear about these people - but they're there. They're not willing to let their code be used in ways that don't represent them. The Cambridge Analytica scandal was the first instance of a political crisis emerging in tech. It wasn't widely reported, but some Facebook employees asked to move across to different departments like Instagram or WhatsApp. One product designer, Westin Lohne, posted on Twitter that he had left his position, saying "morally, it was extremely difficult to continue working there as a product designer." https://twitter.com/westinlohne/status/981731786337251328 But while the story at Facebook was largely disorganized disaffection, at Google there was real organization against Project Maven. 300 Google employees signed a petition against the company's AI initiative with the Pentagon. In May, a number of employees resigned over the issue. One is reported as saying "over the last couple of months, I've been less and less impressed with Google's response and the way our concerns are being listened to." Read next: Google employees quit over company's continued Artificial Intelligence ties with the Pentagon. A similar protest happened at Amazon, with an internal letter to Jeff Bezos protesting the use of Rekognition - Amazon's facial recognition technology - by law enforcement agencies, including ICE. "Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents," the letter stated, according to Gizmodo. "In the face of this immoral U.S. policy, and the U.S.'s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS." Microsoft saw a similar protest, sparked, in part, by the shocking images of families being separated at the U.S./Mexico border. Despite the company distancing itself from ICE's activities, many employees were vocal in their opposition. "This is the sort of thing that would make me question staying," said one employee, speaking to Gizmodo.

A shift in attitudes as tensions emerge

True, when taken individually, these instances of disaffection may not look like full-blown solidarity. But together, they amount to a changing consciousness across Silicon Valley. Of course, it wouldn't be wrong to say that a relationship between tech, the military, and government has always existed. But the reason things are different is precisely because these tensions have become more visible, and attitudes more prominent in public discourse. It's worth thinking about these attitudes and actions in the context of a hyper-competitive Silicon Valley where ego is the norm, and talent and flair are everything. Signing petitions carries with it some risk - leaving a well-paid job you may have spent years working towards is no simple decision. It requires a decisive break with the somewhat egotistical strand that runs through tech to make these sorts of decisions. While it might seem strange, it also shouldn't be that surprising. If working in software demands a high level of collaboration, then collaboration socially and politically is really just the logical development from our professional lives. All this talk about 'ninjas', 'gurus' and geniuses only creates more inequality within the tech job market - whether you're in Silicon Valley, Stoke, Barcelona, or Bangalore, this language actually hides the skills and knowledge that are most valuable in tech. Read next: Don't call us ninjas or rockstars, say developers

Where do we go next?

The future doesn't look good. But if the last six months or so are anything to go by, there are a number of things we can do. On the one hand, more organization could be the way forward. The publishing and media industries have been setting a great example of how unionization can work in a modern setting and help workers achieve protection and collaborative power at work. If the tech workforce is going to grow significantly over the next decade, we're going to see more unionization. We've already seen technology lead to more unionization and worker organization in the context of the gig economy - Deliveroo and Uber drivers, for example. Gradually it's going to return to tech itself. The tech industry is transforming the global economy. It's not immune from the changes it's causing. But we can also do more to challenge the ideology of the modern tech ego. Key to this is more confidence and technological literacy. If tech figureheads emerge to make technology marketable and accessible, the way to remove that power is to demystify it. It's to make it clear that technology isn't a gift, the genius invention of an unfathomable mind, but instead that it's a collaborative and communal activity, and a skill that anyone can master given the right attitude and resources.
At its best, tech culture has been teaching the world that for decades. Think about this the next time someone tells you that technology is magic. It’s not magic, it’s built by people like you. People who want to call it magic want you to think they’re a magician - and like any other magician, they’re probably trying to trick you.


Analyzing enterprise application behavior with Wireshark 2

Vijin Boricha
09 Jul 2018
19 min read
One of the important things that you can use Wireshark for is application analysis and troubleshooting. When an application slows down, it can be due to the LAN (quite uncommon in a wired LAN), the WAN service (common, due to insufficient bandwidth or high delay), or slow servers or clients. It can also be due to slow or problematic applications. The purpose of this article is to get into the details of how applications work, and to provide relevant guidelines and recipes for isolating and solving these problems. In the first recipe, we will learn how to find out and categorize the applications that work over our network. Then, we will go through various types of applications to see how they work, how networks influence their behavior, and what can go wrong. Further, we will learn how to use Wireshark in order to resolve and troubleshoot common applications used in an enterprise network: Microsoft Terminal Server and Citrix, databases, and Simple Network Management Protocol (SNMP). This is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition, written by Nagendra Kumar Nainar, Yogesh Ramdoss, and Yoram Orzach.

Find out what is running over your network

The first thing to do when monitoring a new network is to find out what is running over it. There are various types of applications and network protocols, and they can influence and interfere with each other when all of them are running over the network. In some cases, you will have different VLANs, different Virtual Routing and Forwardings (VRFs), or servers that are connected to virtual ports in a blade server. Eventually, everything is running on the same infrastructure, and they can influence each other.

There is a common confusion between VRFs and VLANs. Even though their purpose is much the same, they are configured in different places. While VLANs are configured in the LAN in order to provide network separation at OSI layers 1 and 2, VRFs are multiple instances of routing tables made to coexist in the same router. This is a layer 3 operation that separates different customers' networks. VRFs are generally seen in service provider environments using Multi-Protocol Label Switching (MPLS) to provide layer 3 connectivity to different customers over the same router network, in such a way that no customer can see any other customer's network. The term blade server refers to a server enclosure, which is a chassis with server shelves on the front and LAN switches on the back. There are several different names for it; for example, IBM calls them blade centers and HP calls them blade systems.

In this recipe, we will see how to get to the details of what is running over the network, and the applications that can slow it down.

Getting ready

When you get into a new network, the first thing to do is connect Wireshark and sniff what applications and protocols are running over it. Make sure you follow these points: when you are required to monitor a server, port-mirror it and see what is running on its connection to the network; when you are required to monitor a remote office, port-mirror the router port that connects you to the WAN connection, then check what is running over it; when you are required to monitor a slow connection to the internet, port-mirror it to see what is going on there. In this recipe, we will see how to use the Wireshark tools for analyzing what is running and what can cause problems.

How to do it...

For analyzing, follow these steps. Connect Wireshark using one of the options mentioned in the previous section. You can then use the following tools: navigate to Statistics | Protocol Hierarchy to view the protocols that run over the network and their percentage of the total traffic, and navigate to Statistics | Conversations to see who is talking and what protocols are used. The Protocol Hierarchy feature opens a window that helps you analyze who is talking over the network. In the capture used for this recipe, the protocol distribution included Ethernet (IP, Logical-Link Control (LLC), and the configuration test protocol (loopback)) and Internet Protocol Version 4 (UDP, TCP, Protocol Independent Multicast (PIM), Internet Group Management Protocol (IGMP), and Generic Routing Encapsulation (GRE)). If you click on the + sign, all the underlying protocols will be shown. To see a specific protocol's throughput, click down to that protocol; you will see the application's average throughput during the capture (HTTP, for example). Clicking on the + sign to the left of HTTP will open a list of protocols that run over HTTP (XML, MIME, JavaScript, and more) and their average throughput during the capture period.

There's more...

In some cases (especially when you need to prepare management reports), you are required to provide a graphical picture of the network statistics. There are various tools available for this, for example, Etherape (for Linux): http://etherape.sourceforge.net/ and Compass (for Windows): http://download.cnet.com/Compass-Free/3000-2085_4-75447541.html?tag=mncol;1
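If you need a similar protocol breakdown outside the Wireshark GUI, for example in a report script, here is a minimal sketch using pyshark, a Python wrapper around tshark. It assumes tshark is installed, and the capture file name is a placeholder.

```python
# Minimal sketch: counting packets per highest-layer protocol in a saved
# capture with pyshark (pip install pyshark; requires tshark on the system).
# The capture file name is a placeholder.
from collections import Counter
import pyshark

capture = pyshark.FileCapture("office_uplink.pcapng", keep_packets=False)
protocol_counts = Counter()
total = 0
for packet in capture:
    protocol_counts[packet.highest_layer] += 1
    total += 1
capture.close()

for protocol, count in protocol_counts.most_common(10):
    print(f"{protocol:15s} {count:8d} packets  {100.0 * count / total:5.1f}%")
```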
Analyzing Microsoft Terminal Server and Citrix communications problems

Microsoft Terminal Server, which uses the Remote Desktop Protocol (RDP), and the Citrix Metaframe Independent Computing Architecture (ICA) protocol are widely used for local and remote connectivity for PCs and thin clients. The important thing to remember about these types of applications is that they transfer screen changes over the network. If there are only a few changes, they will require low bandwidth. If there are many changes, they will require high bandwidth. Another thing is that the traffic in these applications is entirely asymmetric. Downstream traffic takes from tens of Kbps up to several Mbps, while the upstream traffic will be at most several Kbps. When working with these applications, don't forget to design your network accordingly. In this recipe, we will see some typical problems of these applications and how to locate them. For convenience, we will write Microsoft Terminal Server throughout, and every time we write Microsoft Terminal Server, we are referring to all applications in this category, for example, Citrix Metaframe.

Getting ready

When suspecting slow performance with Microsoft Terminal Server, first check with the user what the problem is. Then, connect Wireshark to the network with a port-mirror to the complaining client or to the server.

How to do it...

For locating a problem when Microsoft Terminal Server is involved, start by going to the users and asking questions. Follow these steps. When users complain about a slow network, ask them a simple question: do they see the slowness in the data presented on the screen, or when they switch between windows? If they say that the switch between windows is very fast, it is not a Microsoft Terminal Server problem. Microsoft Terminal Server problems will cause slow window changes, picture freezes, slow scrolling of graphical documents, and so on. If they say that they are trying to generate a report (when the software is running over Microsoft Terminal Server), but the report is generated after a long period of time, this is a database problem and not Microsoft Terminal Server or Citrix. When a user works with Microsoft Terminal Server over a high-delay communication line and types very fast, they might experience delays with the characters. This is because Microsoft Terminal Server is transferring window changes, and with high delays, these window changes will be transferred slowly.

When measuring the communication line with Wireshark, use I/O graphs to monitor the line, use filters to monitor the upstream and the downstream directions, and configure bits per second on the y axis. You will typically see a traffic pattern with high downstream and very low upstream traffic. In the capture described in this recipe, between 485s and 500s the throughput reached the maximum of the line. This is when applications will slow down and users will start to feel screen freezes, menus that move very slowly, and so on.
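If you prefer to reproduce that upstream/downstream measurement from a saved capture rather than in the I/O graph window, here is a rough sketch using pyshark. The capture file name and the client's IP address are placeholders, and tshark must be installed for pyshark to work.

```python
# Rough sketch of the upstream/downstream measurement above, done offline
# with pyshark: bits per second in each direction for a port-mirrored client.
# The capture file name and the client's IP address are placeholders.
from collections import defaultdict
import pyshark

PCAP = "terminal_client.pcapng"
CLIENT_IP = "10.0.0.42"

upstream = defaultdict(int)    # bits sent by the client, per second
downstream = defaultdict(int)  # bits received by the client, per second

capture = pyshark.FileCapture(PCAP, display_filter="ip", keep_packets=False)
for packet in capture:
    second = int(float(packet.sniff_timestamp))
    bits = int(packet.length) * 8
    if packet.ip.src == CLIENT_IP:
        upstream[second] += bits
    elif packet.ip.dst == CLIENT_IP:
        downstream[second] += bits
capture.close()

for second in sorted(set(upstream) | set(downstream)):
    print(f"t={second}s  up={upstream[second]} bps  down={downstream[second]} bps")
```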
When a Citrix ICA client connects to a presentation server, it uses TCP port 2598 or 1494. When monitoring Microsoft Terminal Server servers, don't forget that the clients access the server with Microsoft Terminal Server, and the server accesses the application with another client that is installed on the server. The performance problem can come from Microsoft Terminal Server or from the application. If the problem is a Microsoft Terminal Server problem, it is necessary to figure out whether it is a network problem or a system problem: check the network with Wireshark to see if there are any loads (loads such as the one described above can be solved by simply upgrading the communication lines), and check the server's performance. Applications like Microsoft Terminal Server are mostly memory consuming, so check mostly for memory (RAM) issues.

How it works...

Microsoft Terminal Server, Citrix Metaframe, and similar applications simply transfer window changes over the network. From your client (a PC with a software client, or a thin client), you connect to the terminal server, and the terminal server runs various clients that are used to connect from it to other servers.

There's more...

From the terminal server vendors, you will hear that their applications improve two things. They will say that they improve the manageability of clients, because you don't have to manage PCs and software for every user; you simply install everything on the server, and if something fails, you fix it on the server. They will also say that traffic over the network will be reduced. Well, I will not get into the first argument, since it is not our subject, but I strongly reject the second one. When working with a terminal client, your traffic entirely depends on what you are doing. When working with text/character-based applications, for example, some Enterprise Resource Planning (ERP) screens, you type in and read data. When working with the terminal client, you will connect to the terminal server, which will connect to the database server. Depending on the database application you are working with, the terminal server can improve performance significantly or not improve it at all. We will discuss this in the database section. Here, you can expect a load of tens to hundreds of Kbps. If you are working with regular office documents such as Word, PowerPoint, and so on, it entirely depends on what you are doing. Working with a simple Word document will require tens to hundreds of Kbps. Working with PowerPoint will require hundreds of Kbps to several Mbps, and when you present the PowerPoint file in full screen (the F5 function), the throughput can jump up to 8 to 10 Mbps. Browsing the internet will take between hundreds of Kbps and several Mbps, depending on what you do over it. High-resolution movies over a terminal server to the internet: well, just don't do it. Before you implement any terminal environment, test it. I once had a software house that wanted their logo (at the top-right corner of the user window) to be very clear and striking. They refreshed it 10 times a second, which caused the 2 Mbps communication line to be blocked. You never know what you don't test!

Analyzing the database traffic and common problems

Some of you may wonder why we have this section here. After all, databases are considered to be a completely different branch of the IT environment. There are databases and applications on one side and the network and infrastructure on the other side. That is correct, since we are not supposed to debug databases; there are DBAs for this. But through the information that runs over the network, we can see some issues that can help the DBAs solve the relevant problems. In most cases, the IT staff will come to us first because people blame the network for everything. We will have to make sure that the problems are not coming from the network, and that's it. In a minority of cases, we will see some details in the capture file that can help the DBAs with what they are doing.

Getting ready

When the IT team comes to us complaining about the slow network, there are some things to do just to verify that this is not the case. Follow the instructions in the following section to rule the network in or out.

How to do it...

In the case of database problems, follow these steps. When you get complaints about slow network responses, start by asking these questions: Is the problem local or global? Does it occur only in the remote offices, or also in the center? When the problem occurs across the entire network, it is not a WAN bandwidth issue. Does it happen the same for all clients? If not, there might be a specific problem that happens only with some users, because only those users are running a specific application that causes the problem. Is the communication line between the clients and the server loaded, and what is the application that loads it? Do all applications work slowly, or is it only the application that works with the specific database? Maybe some PCs are old and tired, or is it a server that is running out of resources?

When we are done with the questionnaire, let's start our work. Open Wireshark and start capturing packets. You can configure a port-mirror to a specific PC, the server, a VLAN, or a router that connects to a remote office in which you have the clients. Look at the TCP events (expert info). Do they happen on the entire communication link, on specific IP addresses, or on specific TCP port numbers? This will help you isolate the problem and check whether it is on a specific link, server, or application. When measuring traffic on a connection to the internet, you will get many retransmissions and duplicate ACKs to websites, mail servers, and so on. This is the internet. In an organization, you should expect 0.1 to 0.5 percent retransmissions. When connecting to the internet, you can expect much higher numbers.

But there are some network issues that can influence database behavior. In one example, we saw the behavior of a client that works with the server over a communication line with a round-trip delay of 35 to 40 ms. Looking at TCP stream number 8, the connection started with a TCP SYN/SYN-ACK/ACK, which was set as a time reference, and the entire connection took 371 packets. The connection continued, with time intervals of around 35 ms between DB requests and responses. Since we have 371 packets travelling back and forth, 371 x 35 ms gives us around 13 seconds. Add to this some retransmissions that might happen, and some inefficiencies, and this leads to a user waiting 10 to 15 seconds or more for a database query. In this case, you should consult the DBA on how to significantly reduce the number of packets that run over the network, or move to another means of access, for example, terminal server or web access. Another problem that can happen is a software issue that is reflected in the capture file. In another capture, there were five retransmissions, and then a new connection was opened from the client side. It looks like a TCP problem, but it occurs only in a specific window in the software. It is simply a software procedure that stopped processing, and this stopped TCP from responding to the client.

How it works...

Well, how databases work has always been a miracle to me. Our task is to find out how they influence the network, and this is what we've learned in this section.

There's more...

When you right-click on one of the packets in the database client-to-server session, a window with the conversation will open. It can be helpful to the DBA to see what is running over the network. When you are facing delay problems, for example, when working over cellular lines over the internet or over international connections, the database client-to-server connection will not always be efficient enough. You might need to move to web or terminal access to the database. An important issue is how the database works. If the client is accessing the database server, and the database server is using files shared from another server, it can be that the client-server part works great, but the problems come from the database server to the shared files on the file server. Make sure that you know all these dependencies before starting your tests. And most importantly, make sure you have very professional DBAs among your friends. One day, you will need them!
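Before moving on to SNMP, here is a rough way to put a number on the retransmission percentage mentioned in this recipe, again using pyshark with Wireshark's own analysis flags. The capture file name is a placeholder.

```python
# Rough sketch: estimating the TCP retransmission percentage in a capture
# using Wireshark's analysis flags via pyshark. The file name is a
# placeholder; 0.1 to 0.5 percent is the healthy in-organization range
# quoted above.
import pyshark

PCAP = "db_client_session.pcapng"

def count_packets(display_filter):
    capture = pyshark.FileCapture(PCAP, display_filter=display_filter,
                                  keep_packets=False)
    total = sum(1 for _ in capture)
    capture.close()
    return total

tcp_total = count_packets("tcp")
retransmissions = count_packets("tcp.analysis.retransmission")
if tcp_total:
    print(f"TCP packets: {tcp_total}, retransmissions: {retransmissions} "
          f"({100.0 * retransmissions / tcp_total:.2f}%)")
```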
Analyzing SNMP

SNMP is a well-known protocol that is used to monitor and manage different types of devices in a network by collecting data and statistics at regular intervals. Beyond just monitoring, it can also be used to configure and modify settings, with appropriate authorization given to SNMP servers. Devices that typically support SNMP are switches, routers, servers, workstations, hosts, VoIP phones, and many more. It is important to know that there are three versions of SNMP: SNMPv1, SNMPv2c, and SNMPv3. Versions v2c and v3, which came later, offer better performance and security.

SNMP consists of three components: the device being managed (referred to as the managed device); the SNMP agent, a piece of software running on the managed device that collects data from the device and stores it in a database, referred to as the Management Information Base (MIB), and, as configured, exports the data and statistics to the server (using UDP port 161) at regular intervals, along with any events and traps; and the SNMP server, also called the Network Management Server (NMS), which communicates with all the agents in the network to collect the exported data and build a central repository. The SNMP server gives the IT staff managing the network the ability to monitor, manage, and configure the network remotely. It is very important to be aware that some of the MIBs implemented in a device could be vendor-specific. Almost all vendors publicize the MIBs implemented in their devices.

Getting ready

Generally, the complaints we get from the network management team are about not getting any statistics or traps from a device (or devices) for a specific interval, or having no visibility of a device at all. Follow the instructions in the following section to analyze and troubleshoot these issues.

How to do it...

In the case of SNMP problems, follow these steps. When you get complaints about SNMP, start asking these questions. Is this a new managed device that has been brought into the network recently? In other words, did SNMP on the device ever work properly? If this is a new device, talk to the relevant device administrator and/or check the SNMP-related configuration, such as community strings. If the SNMP configuration looks correct, make sure that the NMS's configured IP address is correct, and also check the relevant password credentials. If SNMPv3 is in use, which supports encryption, make sure to check the encryption-related settings, such as transport methods. If the settings and configuration look valid and correct, make sure the managed devices have connectivity with the NMS, which can be verified by simple ICMP pings. If it is a managed device that has been working properly and didn't report any statistics or alerts for a specific duration: did the device in question have any issue in the control plane or management plane that stopped it from exporting SNMP statistics? Please be aware that for most devices in the network, SNMP is a lowest-priority protocol, which means that if a device has a higher-priority process to work on, it will hold the SNMP requests and responses in a queue. Is the issue experienced only for a specific device, or for multiple devices in the network? Did the network (between the managed device and the NMS) experience any issue? For example, during a layer 2 spanning-tree convergence, traffic loss could occur between the managed device and the SNMP server, by which the NMS would lose visibility of the managed devices.

In the example used in this recipe, an SNMP server with IP address 172.18.254.139 performs an SNMP walk, a sequence of GET-NEXT-REQUESTs, against a workstation with IP address 10.81.64.22, which in turn responds with GET-RESPONSEs. For simplicity, the Wireshark filter used for these captures is snmp. The workstation is enabled with SNMPv2c, with the community string public.
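To reproduce a single poll like the ones in that walk from a script, here is a minimal sketch using the pysnmp library. The target IP address matches the example above, and the OID is the ifOutOctets counter discussed below; treat it as an illustration rather than part of the book's own procedure.

```python
# Minimal sketch: a single SNMPv2c GET for ifOutOctets on interface 1 of the
# workstation from the example, using pysnmp (pip install pysnmp). The IP,
# community string, and OID are the ones discussed in this recipe.
from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                          UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),            # mpModel=1 selects SNMPv2c
    UdpTransportTarget(("10.81.64.22", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.16.1")),  # ifOutOctets.1
))

if error_indication:
    print(f"No response: {error_indication}")      # e.g. timeout or wrong version
elif error_status:
    print(f"Agent returned an error: {error_status.prettyPrint()}")
else:
    for var_bind in var_binds:
        print(" = ".join(item.prettyPrint() for item in var_bind))
```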
Let's discuss some of the commonly seen failure scenarios.

Polling a managed device with the wrong SNMP version

As mentioned earlier, the workstation is enabled with v2c, but when the NMS polls the device with the wrong SNMP version, it doesn't get any response. So, it is very important to make sure that managed devices are polled with the correct SNMP version.

Polling a managed device with a wrong MIB object ID (OID)

In the following example, the NMS is polling the managed device to get the number of bytes sent out on its interfaces. The MIB OID for this byte count is .1.3.6.1.2.1.2.2.1.16, which is ifOutOctets. The managed device in question has two interfaces, mapped to OIDs .1.3.6.1.2.1.2.2.1.16.1 and .1.3.6.1.2.1.2.2.1.16.2. When the NMS polls the device to check the statistics for a third interface (which is not present), it returns a noSuchInstance error.

How it works...

As you have learned in the earlier sections, SNMP is a very simple and straightforward protocol, and all the related information on standards and MIB OIDs is readily available on the internet.

There's more...

Here are some websites with good information about SNMP and MIB OIDs: Microsoft TechNet SNMP: https://technet.microsoft.com/en-us/library/cc776379(v=ws.10).aspx and the Cisco IOS MIB locator: http://mibs.cloudapps.cisco.com/ITDIT/MIBS/servlet/index

We have learned to perform enterprise-level network analysis with real-world examples like analyzing Microsoft Terminal Server and Citrix communications problems. Get to know more about security and network forensics from our book Network Analysis using Wireshark 2 Cookbook - Second Edition.

Read next:
What's new in Wireshark 2.6?
Top 5 penetration testing tools for ethical hackers
5 pen testing rules of engagement: What to consider while performing Penetration testing

Top 10 IT certifications for cloud and networking professionals in 2018

Vijin Boricha
05 Jul 2018
7 min read
Certifications have always proven to be one of the best ways to boost one’s IT career. Irrespective of the domain you choose, you will always have an upperhand if your resume showcases some valuable IT certifications. Certified professionals attract employers as certifications are an external validation that an individual is competent in that said technical skill. Certifications enable individuals to start thinking out of the box, become more efficient in what they do, and execute goals with minimum errors. If you are looking at enhancing your skills and increasing your salary, this is a tried and tested method. Here are the top 10 IT certifications that will help you in uprising your IT career. AWS Certified Solution Architect - Associate: AWS is currently the market leader in the public cloud. Packt Skill Up Survey 2018 confirms this too. Source: Packt Skill Up Survey 2018 AWS Cloud from Amazon offers a cutting-edge platform for architecting, building, and deploying web-scale cloud applications. With rapid adaptation of cloud platform the need for cloud certifications has also increased. IT professionals with some experience of AWS Cloud, interested in designing effective Cloud solutions opt for this certification. This exam promises to scale your ability of architecting and deploying secure and robust applications on AWS technologies. Individuals who fail to pass an exam must wait 14 days before they are eligible to retake the exam. There isn’t any attempt limit for this exam. AWS Certification passing scores depend on statistical analysis and are subject to change. Exam Fee: $150 Average Salary: $119,233 per annum Number of Questions: 65 Types of Question: MCQ Available Languages: English, Japanese AWS Certified Developer - Associate: This is another role-based AWS certification that has gained enough traction for industries to keep it as a job validator. This exam helps individuals validate their software development knowledge which helps them develop cloud applications on AWS. IT professionals with hands-on experience in designing and maintaining AWS-based applications should definitely go for this certification to stand-out. Individuals who fail to pass an exam must wait 14 days before they are eligible to retake the exam. There isn’t any attempt limit for this exam. AWS Certification passing scores depend on statistical analysis and are subject to change. Exam Fee: $150 Average Salary: $116,456 per annum Number of Questions: 65 Types of Question: MCQ Available Languages: English, Simplified Chinese, and Japanese Project Management Professional (PMP) Project management Professional is one of the most valuable certifications for project managers. The beauty of this certification is that it not only teaches individuals creative methodologies but makes them proficient in any industry domain they look forward to pursuing. The techniques and knowledge one gets from this certification is applicable in any industry globally. This certification promises that PMP certified project managers are capable of completing projects on time, in a desired budget and ensure meeting the original project goal. Exam Fee: Non-PMI Members: $555/ PMI Members: $405 Average Salary: $113,000 per annum Number of Questions: 200 Type of Question: A combination of Multiple Choice and Open-end Passing Threshold: 80.6% Certified Information Systems Security Professional (CISSP) CISSP is one of the globally recognized security certifications. 
This cybersecurity certification is a great way to demonstrate your expertise and build industry-level security skills. On achieving this certification, users will be well-versed in designing, engineering, implementing, and running an information security program. Users need at least 5 years of working experience in order to be eligible for this certification. This certification will help you measure your competence in designing and maintaining a robust security environment.
Exam Fee: $699
Average Salary: $111,638 per annum
Number of Questions: 250 (each question carries 4 marks)
Type of Question: Multiple Choice
Passing Threshold: 700 marks

CompTIA Security+: CompTIA Security+ is a vendor-neutral certification used to kick-start one's career as a security professional. It helps users get acquainted with all the aspects related to IT security. If you are inclined towards systems administration, network administration, or security administration, this is something you should definitely go for. With this certification, users learn the latest trends and techniques in risk management, risk mitigation, threat management, and intrusion detection.
Exam Fee: $330
Average Salary: $95,829 per annum
Number of Questions: 90
Type of Question: Multiple Choice
Available Languages: English (Japanese, Portuguese and Simplified Chinese estimated Q2 2018)
Passing Threshold: 750/900

CompTIA Network+: Another CompTIA certification! Why? CompTIA Network+ is a certification that helps individuals develop their career and validate their skills to troubleshoot, configure, and manage both wired and wireless networks. So, if you are an entry-level IT professional interested in managing, maintaining, troubleshooting, and configuring complex network infrastructures, this one is for you.
Exam Fee: $302
Average Salary: $90,280 per annum
Number of Questions: 90
Type of Question: Multiple Choice
Available Languages: English (In Development: Japanese, German, Spanish, Portuguese)
Passing Threshold: 720 (on a scale of 100-900)

VMware Certified Professional 6.5 – Data Center Virtualization (VCP6.5-DCV): Yes, even today virtualization is highly valued in a lot of industries. The Data Center Virtualization certification helps individuals develop the skills and abilities to install, configure, and manage a vSphere 6.5 infrastructure. This industry-recognized certification validates users' knowledge of implementing, managing, and troubleshooting a vSphere 6.5 infrastructure. It also helps IT professionals build a foundation for business agility that can accelerate the transformation to cloud computing.
Exam Fee: $250
Average Salary: $82,342 per annum
Number of Questions: 46
Available Language: English
Type of Question: Single and Multiple Choice
Passing Threshold: 300 (on a scale of 100-500)

CompTIA A+: Yet another CompTIA certification that helps entry-level IT professionals gain an upper hand. This certification is especially for individuals interested in building their career in technical support or IT operational roles. If you are thinking beyond just PC repair, this one is for you. By entry-level certification I mean a certification that one can pursue while still in college or secondary school. CompTIA A+ is a basic version of Network+ as it only touches on basic network infrastructure issues while making you proficient as per industry standards.
Exam Fee: $211
Average Salary: $79,390 per annum
Number of Questions: 90
Type of Question: Multiple Choice
Available Languages: English, German, Japanese, Portuguese, French and Spanish
Passing Threshold: 72% for the 220-801 exam and 75% for the 220-802 exam

Cisco Certified Network Associate (CCNA): Cisco Certified Network Associate (CCNA) Routing and Switching is one of the most important IT certifications for keeping your networking skills up to date. It is a foundational certification for individuals interested in a high-level networking profession. The exam helps candidates validate their knowledge and skills in networking, LAN switching, IPv4 and IPv6 routing, WAN, infrastructure security, and infrastructure management. This certification not only validates users' networking fundamentals but also helps them stay relevant with the skills needed to adopt next-generation technologies.
Exam Fee: $325
Average Salary: $55,166-$90,642
Number of Questions: 60-70
Available Languages: English, Japanese
Type of Question: Multiple Choice
Passing Threshold: 825/1000

CISM (Certified Information Security Manager): Lastly, we have Certified Information Security Manager (CISM), a certification offered by the nonprofit ISACA that caters to security professionals involved in information security, risk management, and governance. This is an advanced-level certification for experienced individuals who develop and manage enterprise information security programs. Only users who hold five years of verified experience, of which three years are in infosec management, are eligible for this exam.
Exam Fee: $415-$595 (cheaper for members)
Average Salary: $52,402 to $243,610
Number of Questions: 200
Passing Threshold: 450 (on a scale of 200-800)
Type of Question: Multiple Choice

Are you confused as to which certification you should take up? Leave your noisy thoughts aside and choose wisely. Pick an exam that aligns with your interests: if you want to pursue IT security, don't end up going for cloud certifications. No career option is fun unless you want to pursue it wholeheartedly. Take the right step and make it count. Why AWS is the preferred cloud platform for developers working with big data? 5 reasons why your business should adopt cloud computing Top 5 penetration testing tools for ethical hackers

Why choose Ansible for your automation and configuration management needs?

Savia Lobo
03 Jul 2018
4 min read
Of late, organizations have been moving towards getting their systems automated. The benefits are many. Firstly, it saves a huge chunk of time, and secondly, it saves investment in human resources for simple tasks such as updates and so on. A few years back, Chef and Puppet were the two popular names when people asked about tools for software automation. Over the years, they have gained a strong rival which has surpassed them and now sits as one of the most popular tools for software automation. Ansible is the one! Ansible is an open source tool for IT configuration management, deployment, and orchestration. It is perhaps the definitive configuration management tool. Chef and Puppet may have got there first, but its rise over the last couple of years is largely down to its impressive automation capabilities. And with operations engineers and sysadmins facing constant time pressures, the need to automate isn't a "nice to have", but a necessity. Its tagline is "allowing smart people to do smart things." It's hard to argue that any software should aim to do much more than that.

Ansible's rise in popularity

Ansible, which originated in 2013, is a leader in IT automation and DevOps. It was bought by Red Hat in 2015 to achieve its goal of creating frictionless IT. The reason Red Hat acquired Ansible was its simplicity and versatility. It got the second-mover advantage of entering the DevOps world after Puppet, which meant it could orchestrate multi-tier applications in the cloud. This improves server uptime by implementing an 'immutable server architecture' for creating, deleting, or migrating servers across different clouds. For those starting afresh, it is easy to write and maintain automation workflows, and it offers a plethora of modules which make it easy for newbies to get started.

Benefits to Red Hat and its community

Ansible complements Red Hat's popular cloud products, OpenStack and OpenShift. Red Hat proved to be complex yet safe open source software for enterprises. However, it was not easy to use. Due to this, many developers started migrating to other cloud services for easy and simple deployment options. By adopting Ansible, Red Hat finally provided an easy option to automate and modernize their IT solutions. Customers can now focus on automating various baseline tasks. It also helps Red Hat refresh its traditional playbooks; it allows enterprises to use IT services and infrastructure together with the help of Ansible's YAML. The most prominent benefit of using Ansible, for both enterprises and individuals, is that it is agentless. It achieves this by leveraging SSH and Windows Remote Management. Both these approaches reuse connections and use minimal network traffic. The approach also has added security benefits and improves both client and central management server resource utilization. Thus, the user does not have to worry about network or server management, and can focus on other priority tasks.

What can you use it for?

Easy Configurations: Ansible provides developers with easy-to-understand configurations, understood by both humans and machines. It also includes many modules and user-built roles, so one need not start building from scratch.
Application lifecycle management: One can rest assured about their application development lifecycle with Ansible. Here, it is used for defining the application, and Red Hat Ansible Tower is used for managing the entire deployment process.
Continuous Delivery: Manage your business with the help of Ansible's push-based architecture, which allows sturdier control over all the required operations. Orchestrating server configuration in batches makes it easy to roll out changes across the environment.
Security and Compliance: While security policies are defined in Ansible, one can choose to integrate the process of scanning and solving issues across the site into other automated processes. Scanning of jobs and system tracking ensures that systems do not deviate from the assigned parameters. Additionally, Ansible Tower provides secure storage for machine credentials and RBAC (role-based access control).
Orchestration: It brings a high degree of discipline and order to the environment. This ensures all application pieces work in unison and are easily manageable, despite the complexity of the applications in question.

Though it is popular as an IT automation tool, many organizations use it in combination with Chef and Puppet, because it may have scaling issues and lack performance for larger deployments. Don't let that stop you from trying Ansible; it is loved by DevOps practitioners because it is written in Python and is thus easy to learn. Moreover, it offers credible support and an agentless architecture, which makes it easy to control servers and much more within an application development environment. An In-depth Look at Ansible Plugins Mastering Ansible – Protecting Your Secrets with Ansible Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
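Because Ansible is agentless, a control node needs nothing more than SSH access (or WinRM on Windows hosts) to start managing machines. As a rough, hedged illustration of that workflow, and not something taken from the article above, the Python sketch below writes a tiny inventory and calls the standard ansible CLI with the built-in ping module; the host names and group name are placeholders, and it assumes Ansible is installed on the machine running the script.

```python
# Minimal sketch (assumes Ansible is installed on the control node and the
# target hosts are reachable over SSH). Host names and file paths are
# illustrative placeholders, not values from the article.
import subprocess
import tempfile
import textwrap

# A tiny INI-style inventory: two hypothetical web servers reached over SSH.
inventory = textwrap.dedent("""\
    [webservers]
    web1.example.com
    web2.example.com
""")

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(inventory)
    inventory_path = f.name

# Agentless check: the built-in "ping" module simply confirms that SSH and
# Python are working on every host, with no agent or daemon to install.
result = subprocess.run(
    ["ansible", "webservers", "-i", inventory_path, "-m", "ping"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

The same pattern scales up: swap the ad-hoc ping for an ansible-playbook call against a YAML playbook to apply a full, declarative configuration.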

Docker isn't going anywhere

Savia Lobo
22 Jun 2018
5 min read
To create good software, developers often have to weave in the UI, frameworks, databases, libraries, and of course a whole bunch of code modules. These elements together build an immersive user experience on the front end. However, deploying and testing software is way too complex these days, as all these elements need to be properly set up in order to build successful software. Here, containers are of great help as they enable developers to pack all the contents of their app, including the code, libraries, and other dependencies, and ship it as a single package. One can think of software as a puzzle, and containers just help one get all the pieces into their proper position for the effective functioning of the software. Docker is one of the popular choices in containers.

The rise of Docker containers

Linux containers have been in the market for almost a decade. However, it was after the release of Docker five years ago that developers widely started using containers in a simple way. At present, containers, especially Docker containers, are popular and in use everywhere, and this popularity seems set to stay. As per our Packt Skill Up developer survey on top sysadmin and virtualization tools, almost 46% of the developer crowd voted that they use Docker containers on a regular basis. It ranked third, with Linux and Windows OS in the lead. (Source: Packt Skill Up survey 2018) Also, organizations such as Red Hat, Canonical, Microsoft, and Oracle, along with other major IT companies and cloud businesses, have adopted Docker. Docker is often confused with virtual machines; read our article on Virtual machines vs Containers to understand the differences between the two. VMs such as Hyper-V, KVM, Xen, and so on are based on the concept of emulating hardware virtually. As such, they come with huge system requirements. On the other hand, Docker containers, or containers in general, use the same OS and kernel. Apart from this, Docker is just right if you want to use minimal hardware to run multiple copies of your app at the same time. This would, in turn, save huge costs on power and hardware for data centers annually. Docker containers boot within a fraction of a second, unlike virtual machines that require 10-20 GB of operating system data to boot, which eventually slows down the whole process. For CI/CD, Docker makes it easy to set up local development environments that replicate a live server. It helps run multiple development environments using different software, OSes, and configurations, all from the same host. One can run test projects on new or different servers and can also work on the same project with similar settings, irrespective of the local host environment. Docker can also be deployed on the cloud as it is designed for integration with most DevOps platforms, including Puppet, Chef, and so on. One can even manage standalone development environments with it.

Why developers love Docker

Docker brought novel ideas to the market for organizations, starting with making containers easy to use and deploy. In the year 2014, Docker announced that it was partnering with the major tech leaders Google, Red Hat, and Parallels on its open-source component libcontainer. This made libcontainer the de facto standard for Linux containers. Microsoft also announced that it would bring Docker-based containers to its Azure Cloud. Docker has also donated its software container format and its runtime, along with its specifications, to Linux's Open Container Project.
This project includes all the contents of the libcontainer project, nsinit, and all other modifications, such that it can run independently without Docker. Further, Docker's containerd is also hosted by the Cloud Native Computing Foundation (CNCF). A few reasons why Docker is preferred by many:
It has a great user experience, which lets developers use the programming language of their choice.
It requires less coding.
Docker can run on any operating system, such as Windows, Linux, or Mac.

The Docker Kubernetes combo

DevOps tools can be used to deploy and monitor Docker containers, but they are not highly optimized for this task. Containers need to be monitored individually because of the density of the workloads they pack together. The possible solution to this is cloud orchestration tools, and what better than Kubernetes, one of the most dominant cloud orchestration tools in the market. As Kubernetes has a bigger community and a bigger share of the market, Docker made a smart move to include Kubernetes as one of its offerings. With this, Docker users and customers get not only Kubernetes' secure orchestration experience but also an end-to-end Docker experience. Docker gives an A-to-Z experience to developers and system administrators. With Docker, developers can focus on writing code and forget about the rest of the deployment. They can also make use of different programs designed to run on Docker in their own projects. System administrators, in turn, can reduce system overhead as compared to VMs. Docker's portability and ease of installation save admins a bunch of time otherwise lost installing individual VM components. Also, with Google, Microsoft, Red Hat, and others absorbing Docker technology into their daily operations, it is surely not going anywhere soon. Docker's future is bright, and we can expect machine learning to be a part of it sooner rather than later. Are containers the end of virtual machines? How to ace managing the Endpoint Operations Management Agent with vROps Atlassian open sources Escalator, a Kubernetes autoscaler project
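As a quick, hedged illustration of how lightweight this feels in practice (this snippet is not from the article itself), the Python sketch below uses the Docker SDK for Python to start a throwaway container, run one command, and clean up. It assumes a local Docker daemon is running and that the docker package is installed; the image tag is only an example.

```python
# Minimal sketch: run a one-off command in a disposable container.
# Assumes the `docker` Python SDK (pip install docker) and a running
# Docker daemon; the image tag below is only an example.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Starts a container from a small image, runs the command, and removes
# the container afterwards; typically a matter of seconds, not the
# minutes a full virtual machine would need to boot.
output = client.containers.run(
    "alpine:3.18",
    ["echo", "hello from a container"],
    remove=True,
)
print(output.decode().strip())
```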

Keep your serverless AWS applications secure [Tutorial]

Savia Lobo
18 Jun 2018
11 min read
Handling security is an extensive and complex topic. If not done right, you open up your app to dangerous hacks and breaches. Even if everything is right, it may be hacked. So it's important we understand common security mechanisms to avoid exposing websites to vulnerabilities and follow the recommended practices and methodologies that have been largely tested and proven to be robust. In this tutorial, we will learn how to secure serverless applications using AWS. Additionally, we will learn about the security basics and then move on to handling authorization and authentication using AWS. This article is an excerpt taken from the book, 'Building Serverless Web Applications' written by Diego Zanon.

Security basics in AWS

One of the mantras of security experts is this: don't roll your own. It means you should never use in a production system any kind of crypto algorithm or security model that you developed by yourself. Always use solutions that have been highly used, tested, and recommended by trusted sources. Even experienced people may commit errors and expose a solution to attacks, especially in the cryptography field, which requires advanced math. However, when a proposed solution is analyzed and tested by a great number of specialists, errors are much less frequent. In the security world, there is a term called security through obscurity. It is defined as a security model where the implementation mechanism is not publicly known, so there is a belief that it is secure because no one has prior information about the flaws it has. It can indeed be secure, but if used as the only form of protection, it is considered a poor security practice. If a hacker is persistent enough, he or she can discover flaws even without knowing the internal code. In this case, again, it's better to use a highly tested algorithm than your own. Security through obscurity can be compared to someone trying to protect their own money by burying it in the backyard, when the common security mechanism would be to put the money in a bank. The money may be safe while buried, but it will be protected only until someone finds out about its existence and starts to look for it. For this reason, when dealing with security, we usually prefer to use open source algorithms and tools. Everyone can access and discover flaws in them, but there is also a great number of specialists involved in finding the vulnerabilities and fixing them. In this section, we will discuss other security concepts that everyone must know when building a system.

Information security

When dealing with security, there are some attributes that need to be considered. The most important ones are the following:
Authentication: Confirm the user's identity by validating that the user is who they claim to be
Authorization: Decide whether the user is allowed to execute the requested action
Confidentiality: Ensure that data can't be understood by third parties
Integrity: Protect the message against undetectable modifications
Non-repudiation: Ensure that someone can't deny the authenticity of their own message
Availability: Keep the system available when needed
These terms will be better explained in the next sections.

Authentication

Authentication is the ability to confirm the user's identity. It can be implemented by a login form where you request the user to type their username and password. If the hashed password matches what was previously saved in the database, you have enough proof that the user is who they claim to be.
This model is good enough, at least for typical applications. You confirm the identity by requesting the user to provide what they know. Another kind of authentication is to request the user to provide what they have. It can be a physical device (like a dongle) or access to an e-mail account or phone number. However, you can't ask the user to type their credentials for every request. Once you authenticate the user in the first request, you must create a security token that will be used in the subsequent requests. This token will be saved on the client side as a cookie and will be automatically sent to the server in all requests. On AWS, this token can be created using the Cognito service. How this is done will be described later in this chapter.

Authorization

When a request is received in the backend, we need to check if the user is allowed to execute the requested action. For example, if the user wants to check out the order with ID 123, we need to make a query to the database to identify who the owner of the order is and check whether it is the same user. Another scenario is when we have multiple roles in an application and we need to restrict data access. For example, a system developed to manage school grades may be implemented with two roles, such as student and teacher. The teacher will access the system to insert or update grades, while the students will access the system to read those grades. In this case, the authorization system must restrict the insert and update actions to users that are part of the teachers group, and users in the students group must be restricted to reading their own grades. Most of the time, we handle authorization in our own backend, but some serverless services don't require a backend and are responsible for properly checking authorization themselves. For example, in the next chapter, we are going to see how serverless notifications are implemented on AWS. When we use AWS IoT, if we want a private channel of communication between two users, we must give them access to one specific resource known by both and restrict access for other users to avoid the disclosure of private messages.

Confidentiality

Developing a website that uses HTTPS for all requests is the main driver for achieving confidentiality in the communication between the users and your site. As the data is encrypted, it's very hard for malicious users to decrypt and understand its contents. Although there are some attacks that can intercept the communication and forge certificates (man-in-the-middle), those require the malicious user to have access to the machine or network of the victim user. From our side, adding HTTPS support is the best thing that we can do to minimize the chance of attacks.

Integrity

Integrity is related to confidentiality. While confidentiality relies on encrypting a message to prevent other users from accessing its contents, integrity deals with protecting messages against modification by signing them with digital signatures (TLS certificates). Integrity is an important concept when designing low-level network systems, but all that matters for us is adding HTTPS support.

Non-repudiation

Non-repudiation is a term that is often confused with authentication, since both have the objective of proving who has sent a message. However, the main difference is that authentication is more interested in the technical view, while the non-repudiation concept is interested in legal terms, liability, and auditing.
When you have a login form with username and password input, you can authenticate the user who correctly knows the combination, but you can't be 100% certain, since the credentials can be correctly guessed or stolen by a third party. On the other hand, if you have a stricter access mechanism, such as a biometric entry, you have more credibility. However, this is not perfect either. It's just a better non-repudiation mechanism.

Availability

Availability is also a concept of interest in the information security field because availability is not restricted to how you provision your hardware to meet your user needs. Availability can suffer attacks and can suffer interruptions due to malicious users. There are attacks, such as Distributed Denial of Service (DDoS), that aim to create bottlenecks to disrupt site availability. In a DDoS attack, the targeted website is flooded with superfluous requests with the objective of overloading the systems. This is usually accomplished by a controlled network of infected machines called a botnet. On AWS, all services run under the AWS Shield service, which was designed to protect against DDoS attacks with no additional charge. However, if you run a very large and important service, you may be a direct target of advanced and large DDoS attacks. In this case, there is a premium tier offered in the AWS Shield service to ensure your website's availability even in worst-case scenarios. This requires an investment of US$ 3,000 per month, and with this, you will have 24x7 support from a dedicated team and access to other tools for mitigation and analysis of DDoS attacks.

Security on AWS

We use AWS credentials, roles, and policies, but security on AWS is much more than handling authentication and authorization of users. This is what we will discuss in this section.

Shared responsibility model

Security on AWS is based on a shared responsibility model. While Amazon is responsible for keeping the infrastructure safe, the customers are responsible for patching security updates to software and protecting their own user accounts.
AWS's responsibilities include the following:
Physical security of the hardware and facilities
Infrastructure of networks, virtualization, and storage
Availability of services respecting Service Level Agreements (SLAs)
Security of managed services such as Lambda, RDS, DynamoDB, and others
A customer's responsibilities are as follows:
Applying security patches to the operating system on EC2 machines
Security of installed applications
Avoiding disclosure of user credentials
Correct configuration of access policies and roles
Firewall configurations
Network traffic protection (encrypting data to avoid disclosure of sensitive information)
Encryption of server-side data and databases
In the serverless model, we rely only on managed services. In this case, we don't need to worry about applying security patches to the operating system or runtime, but we do need to worry about third-party libraries that our application depends on to execute. Also, of course, we need to worry about all the things that we need to configure (firewalls, user policies, and so on), the network traffic (supporting HTTPS), and how data is manipulated by the application.

The Trusted Advisor tool

AWS offers a tool named Trusted Advisor, which can be accessed through https://console.aws.amazon.com/trustedadvisor. It was created to offer help on how you can optimize costs or improve performance, but it also helps identify security breaches and common misconfigurations.
It checks for unrestricted access to specific ports on your EC2 machines, whether Multi-Factor Authentication is enabled on the root account, and whether IAM users were created in your account. You need to pay for AWS premium support to unlock other features, such as cost optimization advice. However, security checks are free.

Pen testing

A penetration test (or pen test) is a good practice that all big websites must perform periodically. Even if you have a good team of security experts, the usual recommendation is to hire a specialized third-party company to perform pen tests and find vulnerabilities. This is because they will most likely have tools and procedures that your team may not have tried yet. However, the caveat here is that you can't execute these tests without contacting AWS first. To respect their user terms, you can only try to find breaches on your own account and assets, in scheduled time frames (so they can disable their intrusion detection systems for your assets), and only on restricted services, such as EC2 instances and RDS.

AWS CloudTrail

AWS CloudTrail is a service that was designed to record all AWS API calls that are executed on your account. The output of this service is a set of log files that register the API caller, the date/time, the source IP address of the caller, the request parameters, and the response elements that were returned. This kind of service is pretty important for security analysis, in case there are data breaches, and for systems that need an auditing mechanism for compliance standards.

MFA

Multi-Factor Authentication (MFA) is an extra security layer that everyone must add to their AWS root account to protect against unauthorized access. Besides knowing the username and password, a malicious user would also need physical access to your smartphone or security token, which greatly restricts the risks. On AWS, you can use MFA through the following means:
Virtual devices: Applications installed on Android, iPhone, or Windows phones
Physical devices: Six-digit tokens or OTP cards
SMS: Messages received on your phone
We have discussed the basic security concepts and how to apply them on a serverless project. If you've enjoyed reading this article, do check out 'Building Serverless Web Applications' to implement signup, sign in, and log out features using Amazon Cognito. Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform Analyzing CloudTrail Logs using Amazon Elasticsearch How to create your own AWS CloudTrail
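To make the token-based authentication described in the Authentication section a little more concrete, here is a small, hedged sketch (not part of the original excerpt) of how a backend written in Python might exchange a username and password for Cognito tokens using boto3. The client ID, region, and credentials are placeholders, and it assumes a Cognito user pool whose app client has the USER_PASSWORD_AUTH flow enabled.

```python
# Hedged sketch: exchanging credentials for Cognito tokens with boto3.
# The client ID, username, and password below are placeholders; this
# assumes a Cognito user pool whose app client allows USER_PASSWORD_AUTH.
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

response = cognito.initiate_auth(
    ClientId="YOUR_APP_CLIENT_ID",           # placeholder
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={
        "USERNAME": "alice@example.com",      # placeholder
        "PASSWORD": "correct horse battery",  # placeholder
    },
)

# On success, Cognito returns short-lived tokens that the client sends
# with subsequent requests instead of re-typing credentials each time.
tokens = response["AuthenticationResult"]
print(tokens["IdToken"][:20], "...")
print("expires in", tokens["ExpiresIn"], "seconds")
```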
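Similarly, as a hedged illustration of the CloudTrail auditing described above (again, not from the book), the snippet below uses boto3 to pull recent console login events. The region, lookup attribute, and result handling are illustrative and assume CloudTrail is enabled for the account.

```python
# Hedged sketch: query recent CloudTrail events for auditing purposes.
# Assumes CloudTrail is enabled and the caller has cloudtrail:LookupEvents
# permission; the event name filter is just an example.
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    MaxResults=10,
)

# Each record carries who called the API, when, and which action was taken.
for event in response["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```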

A serverless online store on AWS could save you money. Build one.

Savia Lobo
14 Jun 2018
9 min read
In this article, you will learn to build an entire serverless project for an AWS online store, beginning with a React SPA frontend hosted on AWS, followed by a serverless backend with API Gateway and Lambda functions. This article is an excerpt taken from the book, 'Building Serverless Web Applications' written by Diego Zanon. In this book, you will be introduced to the AWS services, and you'll learn how to estimate costs and how to set up and use the Serverless Framework.

The serverless architecture of AWS' online store

We will build a real-world use case of a serverless solution. This sample application is an online store with the following requirements:
List of available products
Product details with user rating
Add products to a shopping cart
Create account and login pages
For a better understanding of the architecture, the book includes a diagram that gives a general view of how the different services are organized and how they interact.

Estimating costs

In this section, we will estimate the costs of our sample application demo based on some usage assumptions and Amazon's pricing model. All pricing values used here are from mid-2017 and consider the cheapest region, US East (Northern Virginia). This section covers an example to illustrate how costs are calculated. Since the billing model and prices can change over time, always refer to the official sources to get updated prices before making your own estimations. You can use Amazon's calculator, which is accessible at this link: http://calculator.s3.amazonaws.com/index.html. If you still have any doubts after reading the instructions, you can always contact Amazon's support for free to get commercial guidance.

Assumptions

For our pricing example, we can assume that our online store will receive the following traffic per month:
100,000 page views
1,000 registered user accounts
200 GB of data transferred, considering an average page size of 2 MB
5,000,000 code executions (Lambda functions) with an average of 200 milliseconds per request

Route 53 pricing

We need a hosted zone for our domain name, and it costs US$ 0.50 per month. Also, we need to pay US$ 0.40 per million DNS queries to our domain. As this is a prorated cost, 100,000 page views will cost only US$ 0.04. Total: US$ 0.54

S3 pricing

Amazon S3 charges you US$ 0.023 per GB/month stored, US$ 0.004 per 10,000 requests to your files, and US$ 0.09 per GB transferred. However, as we are considering CloudFront usage, transfer costs will be charged at CloudFront prices and will not be considered in the S3 billing. If our website occupies less than 1 GB of static files and has an average page size of 2 MB spread across 20 files, we can serve 100,000 page views for less than US$ 20. Considering CloudFront, S3 costs will go down to US$ 0.82, while you need to pay for CloudFront usage in another section. Real costs would be even lower because CloudFront caches files and would not need to make 2,000,000 file requests to S3, but let's skip this detail to reduce the complexity of this estimation. On a side note, the cost would be much higher if you had to provision machines to handle this number of page views to a static website with the same availability and scalability. Total: US$ 0.82

CloudFront pricing

CloudFront is slightly more complicated to price, since you need to guess how much traffic comes from each region, as regions are priced differently.
The following table shows an example of estimation:
Region | Estimated traffic | Cost per GB transferred | Cost per 10,000 HTTPS requests
North America | 70% | US$ 0.085 | US$ 0.010
Europe | 15% | US$ 0.085 | US$ 0.012
Asia | 10% | US$ 0.140 | US$ 0.012
South America | 5% | US$ 0.250 | US$ 0.022
As we have estimated 200 GB of files transferred with 2,000,000 requests, the total will be US$ 21.97. Total: US$ 21.97

Certificate Manager pricing

Certificate Manager provides SSL/TLS certificates for free. You only need to pay for the AWS resources you create to run your application.

IAM pricing

There is no charge specifically for IAM usage. You will be charged only for the AWS resources your users are consuming.

Cognito pricing

Each user has an associated profile that costs US$ 0.0055 per month. However, there is a permanent free tier that allows 50,000 monthly active users without charges, which is more than enough for our use case. Besides that, we are charged for Cognito syncs of our user profiles. It costs US$ 0.15 for each 10,000 sync operations and US$ 0.15 per GB/month stored. If we estimate 1,000 active and registered users with less than 1 MB per profile, and fewer than 10 visits per month on average, we can estimate a charge of US$ 0.30. Total: US$ 0.30

IoT pricing

IoT charges start at US$ 5 per million messages exchanged. As each page view will make at least 2 requests, one to connect and another to subscribe to a topic, we can estimate a minimum of 200,000 messages per month. We need to add 1,000 messages if we suppose that 1% of the users will rate the products, and we can ignore other requests like disconnect and unsubscribe because they are excluded from billing. In this setting, the total cost would be US$ 1.01. Total: US$ 1.01

SNS pricing

We will use SNS only for internal notifications, when CloudWatch triggers a warning about issues in our infrastructure. SNS charges US$ 2.00 per 100,000 e-mail messages, but it offers a permanent free tier of 1,000 e-mails. So, it will be free for us.

CloudWatch pricing

CloudWatch charges US$ 0.30 per metric/month and US$ 0.10 per alarm, and offers a permanent free tier of 50 metrics and 10 alarms per month. If we create 20 metrics and expect 20 alarms in a month, we can estimate a cost of US$ 1.00. Total: US$ 1.00

API Gateway pricing

API Gateway starts charging US$ 3.50 per million API calls received and US$ 0.09 per GB transferred out to the Internet. If we assume 5 million requests per month, with each response averaging 1 KB, the total cost of this service will be US$ 17.93. Total: US$ 17.93

Lambda pricing

When you create a Lambda function, you need to configure the amount of RAM memory that will be available for use. It ranges from 128 MB to 1.5 GB. Allocating more memory means additional costs. This breaks the philosophy of avoiding provisioning, but at least it's the only thing you need to worry about. The good practice here is to estimate how much memory each function needs and run some tests before deploying to production. Bad provisioning may result in errors or higher costs. Lambda has the following billing model:
US$ 0.20 per 1 million requests
US$ 0.00001667 per GB-second
Running time is counted in fractions of seconds, rounding up to the nearest multiple of 100 milliseconds. Furthermore, there is a permanent free tier that gives you 1 million requests and 400,000 GB-seconds per month without charges. In our use case scenario, we have assumed 5 million requests per month with an average of 200 milliseconds per execution.
We can also assume that the allocated RAM memory is 512 MB per function:
Request charges: Since 1 million requests are free, you pay for 4 million, which will cost US$ 0.80.
Compute charges: Here, 5 million executions of 200 milliseconds each gives us 1 million seconds. As we are running with a 512 MB capacity, this results in 500,000 GB-seconds, of which 400,000 GB-seconds are free, leaving a charge for 100,000 GB-seconds that costs US$ 1.67.
Total: US$ 2.47

SimpleDB pricing

Take a look at the following SimpleDB billing, where the free tier is valid for new and existing users:
US$ 0.14 per machine-hour (25 hours free)
US$ 0.09 per GB transferred out to the internet (1 GB is free)
US$ 0.25 per GB stored (1 GB is free)
Take a look at the following charges:
Compute charges: Considering 5 million requests with an average of 200 milliseconds of execution time, where 50% of this time is spent waiting for the database engine to execute, we estimate 139 machine hours per month. Discounting 25 free hours, we have an execution cost of US$ 15.96.
Transfer costs: Since we'll transfer data between SimpleDB and AWS Lambda, there is no transfer cost.
Storage charges: If we assume a 5 GB database, it results in US$ 1.00, since 1 GB is free.
Total: US$ 16.96, but this will not be added to the final estimation since we will run our application using DynamoDB.

DynamoDB

DynamoDB requires you to provision the throughput capacity that you expect your tables to offer. Instead of provisioning hardware, memory, CPU, and other factors, you need to say how many read and write operations you expect, and AWS will handle the necessary machine resources to meet your throughput needs with consistent and low-latency performance. One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for objects up to 4 KB in size. Regarding write capacity, one unit means that you can write one object of size 1 KB per second. Considering these definitions, AWS offers in the permanent free tier 25 read units and 25 write units of throughput capacity, in addition to 25 GB of free storage. It charges as follows:
US$ 0.47 per month for every Write Capacity Unit (WCU)
US$ 0.09 per month for every Read Capacity Unit (RCU)
US$ 0.25 per GB/month stored
US$ 0.09 per GB transferred out to the Internet
Since our estimated database will have only 5 GB, we are on the free tier, and we will not pay for transferred data because there is no transfer cost to AWS Lambda. Regarding read/write capacities, we have estimated 5 million requests per month. If we distribute them evenly, we will get two requests per second. In this case, we will consider that it's one read and one write operation per second. We now need to estimate how many objects are affected by a read and a write operation. For a write operation, we can estimate that we will manipulate 10 items on average, and a read operation will scan 100 objects. In this scenario, we would need to reserve 10 WCU and 100 RCU. As we have 25 WCU and 25 RCU for free, we only need to pay for 75 RCU per month, which costs US$ 6.75. Total: US$ 6.75

Total pricing

Let's summarize the cost of each service in the following table:
Service | Monthly Cost
Route 53 | US$ 0.54
S3 | US$ 0.82
CloudFront | US$ 21.97
Cognito | US$ 0.30
IoT | US$ 1.01
CloudWatch | US$ 1.00
API Gateway | US$ 17.93
Lambda | US$ 2.47
DynamoDB | US$ 6.75
Total | US$ 52.79
It results in a total cost of ~US$ 50 per month in infrastructure to serve 100,000 page views.
If you have a conversion rate of 1%, you can get 1,000 sales per month, which means that you pay US$ 0.05 in infrastructure for each product that you sell. Thus, in this article you learned about the serverless architecture of an AWS online store and also learned how to estimate its costs. If you've enjoyed reading the excerpt, do check out Building Serverless Web Applications to monitor the performance, efficiency, and errors of your apps and also learn how to test and deploy your applications. Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform Serverless computing wars: AWS Lambdas vs Azure Functions Using Amazon Simple Notification Service (SNS) to create an SNS topic
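To make the Lambda arithmetic above easy to re-run with your own assumptions, here is a small, hedged Python sketch (not from the book) that reproduces the request and compute charges. The prices and free-tier values are the mid-2017 figures quoted in the article, so check current AWS pricing before relying on the output.

```python
# Hedged sketch: reproduce the article's Lambda cost estimate.
# Prices and free-tier values are the mid-2017 figures quoted above;
# always check current AWS pricing before using this for real planning.

requests_per_month = 5_000_000
avg_duration_s = 0.2            # 200 ms per execution
memory_gb = 0.5                 # 512 MB allocated per function

price_per_million_requests = 0.20
price_per_gb_second = 0.00001667
free_requests = 1_000_000
free_gb_seconds = 400_000

# Request charges: the first million requests are free.
billable_requests = max(requests_per_month - free_requests, 0)
request_cost = billable_requests / 1_000_000 * price_per_million_requests

# Compute charges: total GB-seconds minus the free tier.
gb_seconds = requests_per_month * avg_duration_s * memory_gb
billable_gb_seconds = max(gb_seconds - free_gb_seconds, 0)
compute_cost = billable_gb_seconds * price_per_gb_second

print(f"Request charges: US$ {request_cost:.2f}")                  # ~US$ 0.80
print(f"Compute charges: US$ {compute_cost:.2f}")                  # ~US$ 1.67
print(f"Lambda total:    US$ {request_cost + compute_cost:.2f}")   # ~US$ 2.47
```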

Are containers the end of virtual machines?

Vijin Boricha
13 Jun 2018
5 min read
For quite some time now, virtual machines (VMs) have gained a lot of traction. The major reason for this trend was that IT organizations were convinced that, instead of having a huge room filled with servers, it is better to deploy all your workload on a single piece of hardware. There is no doubt that virtual machines have succeeded, as they save a lot of cost and work pretty well while making failovers easier. In a similar sense, when containers were introduced they received a lot of attention and have recently gained even more popularity amongst IT organisations. Well, there are a set of considerable reasons for this buzz; they are highly scalable, easy to use, portable, have faster execution, and are mainly cost effective. Containers also reduce management headaches as they share a common operating system. With this kind of flexibility, it is much easier to fix bugs, apply update patches, and make other alterations. All in all, containers are lightweight and more portable than virtual machines. If all of this is true, are virtual machines going extinct? Well, for this answer you will have to dive deep into the complexities of both worlds.

How do virtual machines work?

A virtual machine is an individual operating system installed on top of your usual operating system. The entire implementation is done by software emulation and hardware virtualization. Usually, multiple virtual machines are used on servers, where the physical machine remains the same but each virtual environment runs a completely separate service. Consider an Ubuntu server as a VM and use it to install all or any services you need. Now, if your deployment needs a set of software to handle web applications, you provide all the necessary services to your application. Suddenly, there is a requirement for an additional service and your situation gets tighter, as all your resources are preoccupied. All you need to do is install the new service on the guest virtual machine and you are all set to relax.

Advantages of using virtual machines:
Multiple OS environments can run simultaneously on the same physical machine
Easy to maintain, highly available, convenient recovery, and application provisioning
Virtual machines tend to be more secure than containers
Operating system flexibility on VMs is better than that of containers

Disadvantages of using virtual machines:
Simultaneously running virtual machines may introduce unstable performance, depending on the workload placed on the system by other running virtual machines
Hardware accessibility becomes quite difficult when it comes to virtual machines
Virtual machines are heavier in size, taking up several gigabytes

How do containers work?

You can consider containers as lightweight, executable packages that provide everything an application needs to run and function as desired. A container usually sits on top of a physical server and its host OS, allowing applications to run reliably in different environments by abstracting away the operating system and physical infrastructure. So where VMs depend heavily on hardware, we have a new popular kid in town that requires significantly less hardware and does the task with ease and efficiency. Suppose you want to deploy multiple web servers faster; containers make it easier. The reason for this is that, as you are deploying single services, the containers require less hardware compared to virtual machines. The benefit of using containers does not end here.
Docker, a popular container solution, creates a cluster of Docker engines in such a way that they are managed as a single virtual system. So if you're looking at deploying apps at scale with fewer failovers, your first preference should be containers.

Advantages of using containers:
You can add more computing workload on the same server at any time, as containers consume fewer resources
Servers can load more containers than virtual machines, as containers are usually measured in megabytes
Containers make it easier to allocate resources to processes, which helps run your applications in different environments
Containers are cost-effective solutions that help decrease both operating and development costs
Bug tracking and testing are easier in containers, as there isn't any difference between running your application locally, on test servers, or in production
Development, testing, and deployment time decreases with containers

Disadvantages of using containers:
Since containers share the kernel and other components of the host operating system, they become more vulnerable and can impact the security of other containers as well
Lack of operating system flexibility: every time you want to run a container on a different operating system, you need to start a new server

Now coming to the original question. Are containers worth it? Will they eliminate virtualization entirely? Well, after reading this article you must have already guessed the clear winner, considering the advantages over the disadvantages of each platform. So, in virtual machines the hardware is virtualized to run multiple operating system instances. If one needs a complete platform that can provide multiple services, then virtual machines are your answer, as they are considered a mature and secure technology. If you're looking at achieving high scalability, agility, speed, a lightweight footprint, and portability, all of this comes under just one hood: containers. With this standardised unit of software, one can stay ahead of the competition. If you still have concerns over security and how a vulnerable kernel can jeopardize the cluster, then DevSecOps is your knight in shining armor. The whole idea of DevSecOps is to bring operations and development together with security functions. In a nutshell, everyone involved in a software development life cycle is responsible for security. Kubernetes Containerd 1.1 Integration is now generally available Top 7 DevOps tools in 2018 What's new in Docker Enterprise Edition 2.0?

5 reasons why your business should adopt cloud computing

Vijin Boricha
11 Jun 2018
6 min read
Businesses are moving their focus to using existing technology to accomplish their 2018 business targets. Although cloud services have been around for a while, many organisations hesitated to make the move. But recent enhancements such as cost-effectiveness, portability, agility, and faster connectivity have grabbed more attention from new and lesser-known organisations. So, if your organization is looking for ways to achieve greater heights and you are exploring healthy investments that benefit your organisation, then your first choice should be cloud computing, as the on-premises server system is fading away. You don't need any proof to agree that cloud computing is playing a vital role in changing the way businesses work today. Organizations have started looking for cloud options to widen their businesses' reach (read revenue, growth, sales) and to run more efficiently (read cost savings, bottom line, ROI). There are three major cloud options that growing businesses can look at:
Public Cloud
Private Cloud
Hybrid Cloud
A Gartner report states that by the year 2020, big vendors will shift from cloud-first to cloud-only policies. If you are wondering what could fuel this predicted rise in cloud adoption, look no further. Below are some factors contributing to this trend of businesses adopting cloud computing.

Cloud offers increased flexibility

One of the most beneficial aspects of adopting cloud computing is its flexibility, no matter the size of the organization or where your employees are located. Cloud computing comes with a wide range of options, from modifying storage space to supporting both in-office and remotely located employees. This makes it easy for businesses to increase and decrease server loads, along with providing employees with the benefit of working from anywhere at any time, with zero timezone restrictions. Cloud computing services, in a way, help businesses focus on revenue growth rather than spending time and resources on building hardware/software capabilities.

Cloud computing is cost effective

Cloud-backed businesses definitely benefit on cost, as there is no need to maintain expensive in-house servers and other expensive devices, given that everything is handled on the cloud. If you want your business to grow, you just need to spend on storage space and pay for the services you use. Cost transparency helps organizations plan their expenditure, and pay-per-use is one of the biggest advantages businesses can leverage. With cloud adoption, you eliminate spending on increasing processing power, hard drive space, or building a large data center. When there are fewer hardware facilities to manage, you do not need a large IT team to handle them. Software licensing costs are automatically eliminated, as the software is already hosted on the cloud and businesses have the option of paying as per their use.

Scalability is easier with cloud

The best part about cloud computing is its support for unpredicted requirements, which helps businesses scale or downsize resources quickly and efficiently. It's all about modifying your subscription plan, which allows you to upgrade your storage or bandwidth plans as per your business needs. This kind of scalability helps increase business performance and minimizes the risk of up-front investment in operational issues and maintenance.

Better availability means less downtime and better productivity

So with cloud adoption you need not worry about downtime, as cloud services are reliable and maintain about 100% uptime.
This means whatever you own on the cloud is available to your customers at any point. For every server breakdown, cloud service providers make sure there is a backup server in place to avoid losing essential data. This can barely be achieved by traditional on-premises infrastructure, which is another reason businesses should switch to the cloud. All of the above-mentioned mechanisms make it easy to share files and documents with teammates, thanks to flexible accessibility. Teams can collaborate more effectively when they can access documents anytime and anywhere. This obviously improves workflow and gives businesses a competitive edge. Being present in the office to complete tasks is no longer a requirement for productivity; a better work/life balance is an added side effect of such an arrangement. In short, you need not worry about operational disasters, and you can get the job done without physically being present in the office.

Automated Backups

One major problem with an on-premises data center is that everything depends on the functioning of your physical system. In cases where you lose your device or some kind of disaster befalls your physical system, it may lead to loss of data as well. This is never the case with the cloud, as you can access your files and documents from any device or location. Organizations have to bear a massive expense for regular backups, whereas cloud computing comes with automatic backups and provides enterprise-grade functioning to businesses of all sizes. If you're thinking about data security, the cloud is a safer option, as each of the cloud computing variants (private, public, and hybrid) has its own set of benefits. If you are not dealing with sensitive data, choosing public cloud would be the best option, whereas for sensitive data, businesses should opt for private cloud, where they have total control over the security policies. On the other hand, hybrid cloud allows you to benefit from both worlds. So, if you are looking for scalable solutions along with a much more controlled architecture for data security, a hybrid cloud architecture will blend well with your business needs. It allows users to pick and choose the public or private cloud services they require to fulfill their business requirements. Migrating your business to the cloud definitely has more advantages than disadvantages. It helps increase organizational efficiency and fuels business growth. Cloud computing helps reduce time-to-market, facilitates product development, keeps employees happy, and builds a desired workflow. This in the end helps your organisation achieve greater success. It doesn't hurt that the cost you saved is now available for you to invest in areas that are in dire need of some cash inflow! Read Next: What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018 Serverless computing wars: AWS Lambdas vs Azure Functions How machine learning as a service is transforming cloud

Why Alibaba cloud could be the dark horse in the public cloud race

Gebin George
07 Jun 2018
3 min read
The public cloud market seems to be highly dominated by industry giants from the west like Amazon Web Services (AWS) and Microsoft Azure. One of China's tech giants, Alibaba Cloud, has entered the public cloud market recently and seems to be catching up pretty quickly with its Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) offerings. According to reports, in December 2017, Alibaba Cloud witnessed 56% YoY growth and revenue of 12.8 billion USD. It is expected to do even better in the Q2 2018 report, wherein the market size will increase to a sizeable amount. Alibaba Cloud is already leading China's cloud market share. It provides around 100 core services, with data centers spread across 17 regions as a whole. Some of the stunning features of Alibaba Cloud include:

Elastic Computing

The ECS services of Alibaba Cloud are highly scalable, quick, and powerful, with high-range Intel CPUs, which bring down latency to give staggering results. It comes with an extra security layer for protecting applications from DDoS and Trojan attacks. The services involved here include ECS, container services, autoscaling, and so on.

Networking

Alibaba Cloud provides hybrid and distributed networking, ideal for enterprises which demand high network coverage. This networking involves communication between two VPCs and communication between VPCs and IDCs.

Security

It has built-in anti-DDoS management and security assessment services. This definitely reduces the cost of hiring and training quality security engineers to analyze and manage security services and data breaches.

Storage and CDN

Alibaba Cloud's OSS (Object Storage Service) helps you store, back up, and archive huge amounts of data on the cloud. This service is absolutely flexible; you only need to pay as per your usage, and there are no additional costs involved.

Analytics

It comprises a wide range of analytics services like business analytics, data processing, stream analytics, and so on. Services like Elastic MapReduce, Apache Hadoop, and Apache Spark can be run easily on Alibaba Cloud for efficient cloud analytics. For detailed products and services from Alibaba Cloud, refer to their official site. AWS and Azure have been dominating the public cloud market with an array of services which change as per market requirements. Considering the current advancements in Alibaba Cloud and its affordable and highly competitive price range, Alibaba joins the others in the race to dominate the public cloud market. Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence How to create your own AWS CloudTrail Google announce the largest overhaul of their Cloud Speech-to-Text

Why AWS is the preferred cloud platform for developers working with big data

Savia Lobo
07 Jun 2018
4 min read
The cloud computing revolution has well and truly begun. But the market is fiercely competitive: there are a handful of cloud vendors leading the pack, and it is not easy to say which one is best. AWS, Google Cloud Platform, Microsoft Azure, and Oracle are leading the way when it comes to modern cloud-based infrastructure, and it is hard to separate them.

Big data is in high demand because businesses can draw useful insights from it. Organizations carry out advanced analytics to gain a deep, exploratory perspective on their data. Once that analysis is done, BI tools such as Tableau, Microsoft Power BI, Qlik Sense, and so on are used to build dashboard visualizations, reports, performance management metrics, and the like, which make the analytics actionable. Analytics and BI tools are therefore key to getting the best out of big data. In this year's Skill Up survey, there emerged a frontrunner for developers: AWS.

Source: Packt Skill Up Survey

Let's talk AWS
Amazon is said to outplay every other cloud platform player in the market. AWS provides its customers with a highly robust infrastructure and commendable security options. In its inception year, 2006, AWS already had more than 150,000 developers signed up to use its services, as Amazon announced in a press release that year. In a recent survey conducted by Synergy Research, AWS ranked among the top cloud platform providers with a 35% market share. Its top customers include NASA, Netflix, Adobe Systems, Airbnb, and many more. Cloud technology is no longer a new and emerging trend; it has truly become mainstream. What sets AWS apart is that it has caught developers' attention with an impressive suite of developer tools. It is a cloud platform designed with continuous delivery and DevOps in mind.

AWS: Every developer's den
Once you are an AWS member, you can explore the hundreds of different services it offers, from core computation and content delivery networks through to IoT and game development platforms. If you are wondering how to pay for everything you use, AWS offers its complete package of solutions across six modes of payment. It also provides hundreds of templates across programming languages to get a project of your choice moving.

The pay-as-you-go model lets customers use only the features they require, which avoids buying resources that add no value to the business. Security on AWS is something users appreciate: its configuration options, management policies, and reliable security are the reasons its cloud services are easy to trust. AWS has layers of security and encryption that enable strong protection of user data, and it governs user privileges with IAM (Identity and Access Management) roles, which restrict the resources a user can reach and greatly reduce the scope for misuse.

AWS also provides developers with autoscaling, one of the most important features every developer needs. With Auto Scaling, routine capacity management runs on autopilot; AWS takes care of it, so developers can focus on processes, development, and programming. The AWS free tier runs on Amazon EC2 and includes S3 storage, EC2 compute hours, Elastic Load Balancer time, and so on. This enables developers to try the AWS APIs within their own software to enhance it further, as the sketch below illustrates.
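As a hedged illustration of trying the AWS APIs from your own code, here is a minimal sketch using boto3 against S3, which falls within the free tier mentioned above. It assumes credentials are already configured (for example via the AWS CLI), and the bucket and key names are placeholders; S3 bucket names must be globally unique in practice.

```python
# Minimal boto3 sketch (assumptions: credentials configured; bucket/key names are placeholders).
import boto3

s3 = boto3.client("s3")

# Create a bucket (in us-east-1 no location constraint is needed; other regions require one).
s3.create_bucket(Bucket="example-skillup-bucket")

# Store a small object, then list it back: the basic try-it-out loop of the free tier.
s3.put_object(Bucket="example-skillup-bucket",
              Key="reports/survey.csv",
              Body=b"tool,votes\nAWS,1st\n")

response = s3.list_objects_v2(Bucket="example-skillup-bucket", Prefix="reports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```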
AWS cuts down the deployment time required to provision a web server. By using Amazon Machine Images, one can have a machine deployed and ready to accept connections in a short time (a minimal launch sketch appears below, after the related reading links). Amazon's logo says it all: it provides A to Z services under one roof for developers, businesses, and general users. Each service is tailored to a different purpose and runs on dedicated, specialized hardware. Developers can easily choose Amazon for their development needs and, with pay-as-you-go pricing, make the most of it without buying hardware upfront. Though there are other providers such as Microsoft Azure and Google Cloud Platform, Amazon offers functionality that others are yet to match.

Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider
How to secure ElasticCache in AWS
How to create your own AWS CloudTrail
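To illustrate the Amazon Machine Image point above, the following is a minimal sketch of launching an EC2 instance from an AMI with boto3. The AMI ID, key pair, and security group are placeholders you would replace with values from your own account, and a real web server would also need the right ports opened in that security group; this is a sketch under those assumptions, not a production deployment script.

```python
# Minimal EC2 launch sketch (assumptions: placeholder AMI ID, key pair, and security group).
import boto3

ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder: an AMI with your web server baked in
    InstanceType="t2.micro",                    # free-tier eligible instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                       # placeholder key pair for SSH access
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder group allowing HTTP/SSH
)

instance = instances[0]
instance.wait_until_running()   # block until the machine is up
instance.reload()               # refresh attributes such as the public DNS name
print("Ready to accept connections at:", instance.public_dns_name)
```

Because the AMI already contains the configured web server, the time from this call to accepting traffic is largely the instance boot time, which is the deployment-time saving the article describes.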