Tech Guides - Cloud & Networking

65 Articles

Polycloud: a better alternative to cloud agnosticism

Richard Gall
16 May 2018
3 min read
What is polycloud?

Polycloud is an emerging cloud strategy that is starting to take hold across a range of organizations. The concept is actually pretty simple: instead of using a single cloud vendor, you use multiple vendors. By doing this, you can develop a customized cloud solution that is suited to your needs. For example, you might use AWS for the bulk of your services and infrastructure, but decide to use Google's cloud for its machine learning capabilities (a short sketch of this pattern appears at the end of this piece).

Polycloud has emerged because of the intensely competitive nature of the cloud space today. All three major vendors - AWS, Azure, and Google Cloud - don't particularly differentiate their products; the core features are pretty much the same across the market. Of course, there are certain subtle differences between each solution, as the example above demonstrates. Taking a polycloud approach means you can leverage these differences rather than compromising with your vendor of choice.

What's the difference between a polycloud approach and a cloud agnostic approach?

You might be thinking that polycloud sounds like cloud agnosticism. And while there are clearly many similarities, the differences between the two are very important.

Cloud agnosticism aims for a certain degree of portability across different cloud solutions. This can, of course, be extremely expensive. It also adds a lot of complexity, especially in how you orchestrate deployments across different cloud providers. True, there are times when cloud agnosticism might work for you; if you're not using the services being provided to you, then yes, cloud agnosticism might be the way to go. However, in many (possibly most) cases, cloud agnosticism makes life harder. Polycloud makes it a hell of a lot easier. In fact, it ultimately does what many organizations have been trying to do with a cloud agnostic strategy: it takes the parts you want from each solution and builds around what you need.

Perhaps one of the key benefits of a polycloud approach is that it gives more power back to users. Your strategic thinking is no longer limited to what AWS, Azure or Google offers - you can instead start with your needs and build the solution around that.

How quickly is polycloud being adopted?

Polycloud first featured in ThoughtWorks' Radar in November 2017. At that point it was in the 'assess' stage of the Radar, meaning it was simply worth exploring and investigating in more detail. However, in the May 2018 Radar report, polycloud had moved into the 'trial' phase, which means it is seen as an approach worth adopting.

It will be worth watching the polycloud trend closely over the next few months to see how it evolves. There's a good chance that we'll see it come to replace cloud agnosticism. Equally, it's likely to impact the way AWS, Azure and Google respond. In many ways the trend is a reaction to the way the market has evolved; it may force the big players in the market to evolve what they offer to customers and clients.

Read next:
- Serverless computing wars: AWS Lambdas vs Azure Functions
- How to run Lambda functions on AWS Greengrass
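To make the idea concrete, here is a minimal, illustrative sketch of the polycloud pattern described above: an application that keeps its storage on AWS S3 but sends images to Google Cloud Vision for labelling. It assumes the official aws-sdk and @google-cloud/vision client libraries, with credentials for both providers already configured; the bucket and file names are placeholders.

```javascript
// A minimal polycloud sketch: storage stays on AWS, machine learning runs on Google Cloud.
const AWS = require('aws-sdk');
const vision = require('@google-cloud/vision');

const s3 = new AWS.S3();
const visionClient = new vision.ImageAnnotatorClient();

async function labelImage(bucket, key) {
  // Pull the image from the bucket we already run on AWS.
  const object = await s3.getObject({ Bucket: bucket, Key: key }).promise();

  // Hand the bytes to Google's pre-trained Vision model for labelling.
  const [result] = await visionClient.labelDetection({ image: { content: object.Body } });
  return result.labelAnnotations.map(label => label.description);
}

// Example usage with placeholder names.
labelImage('my-app-uploads', 'photos/cat.jpg')
  .then(labels => console.log('Labels:', labels))
  .catch(console.error);
```

The point is not the specific APIs but the shape of the strategy: each provider is used only for the part of the stack where it is strongest.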

Serverless computing wars: AWS Lambdas vs Azure Functions

Vijin Boricha
03 May 2018
5 min read
In recent times, local servers and on-premises machines have come to look old school. Users and organisations have shifted their focus to the cloud to store, manage, and process data. Cloud computing has evolved to the point where DevOps teams can focus on improving code and processes rather than on provisioning, scaling, and maintaining servers. This means we have now entered the serverless era, and the big players of this era are AWS Lambda and Azure Functions. As a developer, you no longer need to worry about low-level infrastructure decisions. Which brings us to the bigger question.

What is serverless computing / Function-as-a-Service?

Serverless computing, or Function-as-a-Service (FaaS), is a model in which applications depend on third-party services to manage server-side logic. This means application developers can concentrate on building their applications rather than thinking about servers. If you want to build any type of application or backend service, you can just go ahead and build it - everything required to run and scale your application is already handled for you. A minimal handler sketch follows the AWS Lambda section below.

Popular platforms that support FaaS include:

- AWS Lambda
- Azure Functions
- Google Cloud Functions
- Iron.io
- Webtask.io

Benefits of serverless computing

Serverless applications and architectures are gaining momentum and are increasingly being used by companies of all sizes. Serverless technology rapidly reduces production time and minimizes your costs, while you still have the freedom to customize your code without hindering functionality. For good reason: serverless platforms take care of many of the problems developers face when running systems and servers, such as fault tolerance, centralized logging, horizontal scalability, and deployments, to name a few. Additionally, the serverless pay-per-invocation model can result in drastic cost savings.

Since AWS Lambda and Azure Functions are the most popular and widely used serverless computing platforms, we will discuss these services further.

AWS Lambda

AWS is recognized as one of the largest market leaders in cloud computing. One of the more recent services under the AWS umbrella that has gained a lot of traction is AWS Lambda. It is the part of Amazon Web Services that lets you run your code without provisioning or managing servers. AWS Lambda is a compute service that enables you to deploy applications and back-end services that operate with zero upfront cost and require no system administration. Although seemingly simple and easy to use, Lambda is a highly effective and scalable compute service that provides developers with a powerful platform to design and develop serverless, event-driven systems and applications.

Pros:

- Supports automatic scaling
- Supports an unlimited number of functions
- 1 million requests are free, after which it charges $0.20 per million invocations, plus $0.00001667 per GB-second

Cons:

- Limited concurrent executions (1,000 executions per account)
- Supports fewer languages than Azure (JavaScript, Java, C#, and Python)
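To give a feel for the programming model being compared here, below is a minimal sketch of a Node.js Lambda-style handler: a single exported function that receives an event, does its one job, and returns. The event shape and response format are illustrative rather than taken from the article.

```javascript
// A minimal Node.js Lambda-style handler: the platform invokes this function
// in response to an event and tears the execution environment down afterwards.
exports.handler = async (event) => {
  // 'event' carries whatever the trigger provides, e.g. an HTTP request body
  // or a storage notification; the shape used here is purely illustrative.
  const name = (event && event.name) || 'world';

  // Do the one thing this function exists to do, then return a response.
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

Azure Functions' JavaScript model is similar in spirit: a single exported function that receives a context object plus the trigger payload.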
Azure Functions

Microsoft provides a solution for easily running small segments of code in the cloud: Azure Functions. It provides solutions for processing data, integrating systems, and building simple APIs and microservices. Azure Functions help you run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it. With Azure Functions, you can use triggers to execute your code and bindings to simplify the input and output of your code.

Pros:

- Supports unlimited concurrent executions
- Supports C#, JavaScript, F#, Python, Batch, PHP, and PowerShell
- Supports an unlimited number of functions
- 1 million requests are free, after which it charges $0.20 per million invocations, plus $0.000016 per GB-second

Cons:

- Manual scaling (on the App Service plan)

Conclusion

Compared with the traditional client-server approach, serverless architecture saves a lot of effort and proves cost-effective for many organisations, whatever their size. The most important aspect of choosing the right platform is understanding which platform benefits your organisation most. AWS Lambda has been around for a while with extensive support for Linux-based platforms, but Azure Functions is not far behind in supporting the Windows-based suite, even though it entered the serverless market more recently. If you adopt AWS you will be able to make the most of its open source integrations, pay-as-you-go model, and high performance computing environment. Azure, on the other hand, is easier to use if you are already on a Windows platform. It also supports a precise pricing model where charges are per-minute, and it has extended support for macOS and Linux.

If you are looking for a clear winner here, you shouldn't be surprised that AWS and Azure are similar in many ways, and it would be a tie if you had to choose which is better or worse than the other. This battle will stay heated and experts will keep placing their bets on who wins the race. In the end, the entire discussion drills down to what your business needs. After all, the mission is always to grow your business at minimal cost.

Read next:
- The Lambda programming model
- How to Run Code in the Cloud with AWS Lambda
- Download Microsoft Azure serverless computing e-book for free

Top 7 DevOps tools in 2018

Vijin Boricha
25 Apr 2018
5 min read
DevOps is a methodology, or even a philosophy. It's a way of reducing the friction between development and operations. But while we could talk about what DevOps is and isn't for decades (and people probably will), there is a range of DevOps tools that are integral to putting its principles into practice. So, while it's true that adopting a DevOps mindset will make the way you build software more efficient, it's pretty hard to put DevOps into practice without the right tools. Let's take a look at some of the best DevOps tools out there in 2018. You might not use all of them, but you're sure to find something useful in at least one of them - probably a combination of them.

DevOps tools that help put the DevOps mindset into practice

Docker

Docker is software that performs OS-level virtualization, also known as containerization. Docker uses containers to package up all the requirements and dependencies of an application, making it shippable to on-premises devices, data center VMs, or the cloud. It was developed by Docker, Inc. back in 2013 with complete support for Linux and limited support for Windows. By 2016 Microsoft had announced integration of Docker with Windows 10 and Windows Server 2016. As a result, Docker enables developers to easily pack, ship, and run any application as a lightweight, portable container, which can run virtually anywhere.

Jenkins

Jenkins is an open source continuous integration server written in Java. When it comes to integrating DevOps processes, continuous integration plays the most important part, and this is where Jenkins comes into the picture. It was released in 2011 to help developers integrate DevOps stages with a variety of built-in plugins. Jenkins is one of those prominent tools that helps developers find and fix code bugs quickly and also automates the testing of their builds.

Ansible

Ansible was developed by the Ansible community back in 2012 to automate network configuration, software provisioning, development environments, and application deployment. In a nutshell, it delivers simple IT automation that puts a stop to repetitive tasks, which helps DevOps teams focus on more strategic work. Ansible is completely agentless: it uses configuration syntax written in YAML and pushes changes out from a control machine to the nodes it manages.

Puppet

Puppet is an open source software configuration management tool written in C++ and Clojure. It was released back in 2005 under the GNU General Public License (GPL) until version 2.7.0; later versions are licensed under Apache License 2.0. Puppet is used to deploy, configure, and manage servers. It uses a master-agent architecture in which the master and agents communicate over secure encrypted channels. Puppet runs on any platform that supports Ruby, for example CentOS, Windows Server, Oracle Enterprise Linux, and more.

Git

Git is a version control system that lets you track file changes, which in turn helps you coordinate with team members working on those files. Git was released in 2005, originally for Linux kernel development, and its primary use case is source code management in software development. Git is a distributed version control system where every contributor can create a local repository by cloning the entire main repository. The main advantage of this system is that contributors can update their local repository without any interference to the main repository.

Vagrant

Vagrant is an open source tool released in 2010 by HashiCorp, used to build and maintain virtual environments. It provides a simple command-line interface to manage virtual machines with custom configurations so that DevOps team members have an identical development environment. While Vagrant is written in Ruby, it supports development in all major languages. It works seamlessly on Mac, Windows, and all popular Linux distributions. If you are considering building and configuring a portable, scalable, and lightweight environment, Vagrant is your solution.

Chef

Chef is a powerful configuration management tool used to transform infrastructure into code. It was released back in 2009 and is written in Ruby and Erlang. Chef uses a pure-Ruby domain-specific language (DSL) to write system configuration 'recipes', which are put together as cookbooks for easier management. Unlike Puppet's master-agent architecture, Chef uses a client-server architecture. Chef supports multiple cloud environments, which makes it easy to manage data centers and maintain high availability.

Think carefully about the DevOps tools you use

To increase efficiency and productivity, the right tool is key. In a fast-paced world where DevOps engineers and their teams carry so much of the work, it is hard to find the tool that fits your environment perfectly. Your best bet is to choose your tools based on the methodology you are going to adopt. Before making a hard decision it is worth taking a step back to analyze what would work best to increase your team's productivity and efficiency. The tools above have been shortlisted based on current market adoption. We hope you find a tool in this list that saves you a lot of time in choosing the right one.

Learning resources

Here is a small selection of books and videos from our DevOps portfolio to help you and your team master the DevOps tools that fit your requirements:

- Mastering Docker (Second Edition)
- Mastering DevOps [Video]
- Mastering Docker [Video]
- Ansible 2 for Beginners [Video]
- Learning Continuous Integration with Jenkins (Second Edition)
- Mastering Ansible (Second Edition)
- Puppet 5 Beginner's Guide (Third Edition)
- Effective DevOps with AWS

How machine learning as a service is transforming cloud

Vijin Boricha
18 Apr 2018
4 min read
Machine learning as a service (MLaaS) is an innovation growing out of two of the most important tech trends - cloud and machine learning. It's significant because it enhances both. It makes cloud an even more compelling proposition for businesses, because cloud typically has three major operations: computing, networking and storage. When you bring machine learning into the picture, the data that cloud stores and processes can be used in radically different ways, solving a range of business problems.

What is machine learning as a service?

Cloud platforms have always competed to be the first or the best to provide new services. This includes platform as a service (PaaS), infrastructure as a service (IaaS) and software as a service (SaaS) solutions. In essence, cloud providers like AWS and Azure provide sets of software to do different things so their customers don't have to. Machine learning as a service is simply another instance of the services offered by cloud providers. It can include a wide range of features, from data visualization to predictive analytics and natural language processing. It makes running machine learning models easy, effectively automating some of the work that would typically have been done manually by a data engineering team.

Here are the biggest cloud providers offering machine learning as a service:

- Google Cloud Platform
- Amazon Web Services
- Microsoft Azure
- IBM Cloud

Every platform provides a different suite of services and features, and which one you choose will ultimately depend on what's most important to you. Let's take a look now at the key differences between these cloud providers' machine learning as a service offerings.

Comparing the leading MLaaS products

Google Cloud AI

Google Cloud Platform has always provided its own services to help businesses grow. It provides modern machine learning services with pre-trained models and a service to generate your own tailored models. The majority of Google applications, like Photos (image search), the Google app (voice search), and Inbox (Smart Reply), have been built using the same services that Google provides to its users.

Pros:

- Cheaper in comparison to other cloud providers
- Provides IaaS and PaaS solutions

Cons:

- The Google Prediction API is being discontinued (May 1st, 2018)
- Lacks a visual interface
- You'll need to know TensorFlow

Amazon Machine Learning

Amazon Machine Learning provides services for building ML models and generating predictions, which help users develop robust, scalable, and cost-effective smart applications. With the help of Amazon Machine Learning you are able to use powerful machine learning technology without any prior experience of machine learning algorithms and techniques.

Pros:

- Provides versatile automated solutions
- It's accessible - users don't need to be machine learning experts

Cons:

- The more you use it, the more expensive it gets

Azure Machine Learning Studio

Microsoft Azure provides Machine Learning Studio - a simple browser-based, drag-and-drop environment which functions without any kind of coding. You are provided with fully-managed cloud services that enable you to easily build, deploy and share predictive analytics solutions. You also get a platform (the Gallery) to share machine learning solutions and contribute to the community.

Pros:

- Has the most versatile toolset of the MLaaS offerings
- You can contribute to and reuse machine learning solutions from the community

Cons:

- Comparatively expensive
- A lot of manual work is required

Watson Machine Learning

Similar to the above platforms, IBM Watson Machine Learning is a service that helps users create, train, and deploy self-learning models to integrate predictive capabilities within their applications. The platform provides automated and collaborative workflows for growing intelligent business applications.

Pros:

- Automated workflows
- Data science skills are not necessary

Cons:

- Comparatively limited APIs and services
- Lacks streaming analytics

Selecting the machine learning as a service solution that's right for you

There are so many machine learning as a service solutions out there that it's easy to get confused. The crucial step to take before you decide to purchase anything is to plan your business requirements. Think carefully not only about what you want to achieve, but also about what you already do. You want your MLaaS solution to integrate easily into the way you currently work, and you don't want it to replicate any work you're currently doing that you're pretty happy with. It gets repeated so much but it remains as true as ever - make sure your software decisions are fully aligned with your business needs. It's easy to get seduced by the promise of innovative new tools, but without the right alignment they're not going to help you at all.

AWS Fargate makes Container infrastructure management a piece of cake

Savia Lobo
17 Apr 2018
3 min read
Containers such as Docker, FreeBSD Jails, and many more are now a staple way for developers to build and deploy their applications. With container orchestration solutions such as Amazon ECS and EKS (Kubernetes), developers can easily manage and scale these containers, freeing them up for other work. However, even with these management solutions at hand, someone still has to take care of maintaining the underlying infrastructure, its availability, its capacity and so on - all added tasks. AWS Fargate eases these tasks and streamlines all deployments for you, resulting in faster completion of deliverables.

At re:Invent in November 2017, AWS launched Fargate, a technology which lets you manage containers without having to worry about managing the container infrastructure underneath. It is an easy way to deploy your containers on AWS. You can start using Fargate on ECS or EKS, try out processes and workloads, and later migrate more workloads to Fargate. It eliminates most of the management work containers normally require, such as resource placement, scheduling, and scaling. All you have to do is:

- Build your container image
- Specify the CPU and memory requirements
- Define your networking and IAM policies
- Launch your container application

(A short sketch of these steps appears at the end of this piece.)

Some key benefits of AWS Fargate

- It allows developers to focus on the design, development, and deployment of applications, eliminating the need to manage a cluster of Amazon EC2 instances.
- You can easily scale applications. Once the application requirements such as CPU and memory are defined, Fargate manages the scaling and infrastructure needed to keep containers highly available. You can launch thousands of containers in no time and scale them to run even mission-critical applications.
- AWS Fargate is integrated with Amazon ECS and EKS. Fargate launches and manages containers once the CPU and memory needed and the IAM policies the container requires are defined and uploaded to Amazon ECS.
- With Fargate, you get flexible configuration options that match your application's needs, and you pay with per-second granularity.

Adoption of container management as a trend is steadily increasing. Kubernetes, at present, is one of the most popular containerized application management platforms. However, users and developers are often confused about who the best Kubernetes provider is. Microsoft and Google have their own managed Kubernetes services, but AWS Fargate adds extra ease to Amazon's EKS (Elastic Container Service for Kubernetes) by eliminating the hassle of container infrastructure management.

Read more about AWS Fargate on AWS' official website.
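As a rough illustration of those steps, here is a sketch of how you might register a Fargate task definition and launch it with the AWS SDK for JavaScript (v2). The image URI, role ARN, cluster, and subnet are placeholders, and the parameters are simplified - treat this as an outline of the flow rather than a drop-in script.

```javascript
// Sketch: register a Fargate task definition and run it on an ECS cluster.
// Assumes AWS credentials are configured and the cluster, subnet, and roles already exist.
const AWS = require('aws-sdk');
const ecs = new AWS.ECS({ region: 'us-east-1' });

async function launchOnFargate() {
  // 1. Point at a container image you have already built and pushed (placeholder URI).
  // 2. Specify the CPU and memory requirements for the task.
  const taskDef = await ecs.registerTaskDefinition({
    family: 'my-web-app',
    requiresCompatibilities: ['FARGATE'],
    networkMode: 'awsvpc',
    cpu: '256',    // 0.25 vCPU
    memory: '512', // 512 MiB
    executionRoleArn: 'arn:aws:iam::123456789012:role/ecsTaskExecutionRole', // placeholder
    containerDefinitions: [
      { name: 'web', image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest', essential: true },
    ],
  }).promise();

  // 3. Define networking (IAM was set above) - 4. Launch the container application.
  return ecs.runTask({
    cluster: 'my-cluster',
    launchType: 'FARGATE',
    taskDefinition: taskDef.taskDefinition.taskDefinitionArn,
    networkConfiguration: {
      awsvpcConfiguration: { subnets: ['subnet-0abc1234'], assignPublicIp: 'ENABLED' },
    },
  }).promise();
}

launchOnFargate()
  .then(result => console.log('Task started:', result.tasks[0].taskArn))
  .catch(console.error);
```

Notice that nothing in the sketch mentions EC2 instances: with the FARGATE launch type, the capacity underneath the task is AWS's problem, not yours.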

The key differences between Kubernetes and Docker Swarm

Richard Gall
02 Apr 2018
4 min read
The orchestration war between Kubernetes and Docker Swarm appears to be over. Back in October, Docker announced that its Enterprise Edition could be integrated with Kubernetes. This move was widely seen as the Docker team conceding to Kubernetes' dominance as an orchestration tool. But Docker Swarm nevertheless remains popular; it doesn't look like it's about to fall off the face of the earth. So what is the difference between Kubernetes and Docker Swarm? And why should you choose one over the other?

To start with, it's worth saying that both container orchestration tools have a lot in common. Both let you run a cluster of containers, allowing you to increase the scale of your container deployments significantly without cloning yourself to mess about with the Docker CLI (although, as you'll see, you could argue that one is more suited to scalability than the other). Ultimately, you'll need to view the various features and key differences between Docker Swarm and Kubernetes in terms of what you want to achieve. Do you want to get up and running quickly? Are you looking to deploy containers on a huge scale? Here's a brief but useful comparison of Kubernetes and Docker Swarm. It should help you decide which container orchestration tool you should be using.

Docker Swarm is easier to use than Kubernetes

One of the main reasons you'd choose Docker Swarm over Kubernetes is that it has a much more straightforward learning curve. As popular as it is, Kubernetes is regarded by many developers as complex, and many people complain that it is difficult to configure. Docker Swarm, meanwhile, is actually pretty simple. It's much more accessible for less experienced programmers. And if you need a container orchestration solution now, simplicity is likely going to be an important factor in your decision making.

...But Docker Swarm isn't as customizable

Although ease of use is definitely one thing Docker Swarm has over Kubernetes, it also means there's less you can actually do with it. Yes, it gets you up and running, but if you want to do something a little different, you can't. You can configure Kubernetes in a much more tailored way than Docker Swarm. That means that while the learning curve is steeper, the possibilities and opportunities open to you will be far greater.

Kubernetes gives you auto-scaling - Docker Swarm doesn't

When it comes to scalability it's a close race. Both tools are able to run around 30,000 containers on 1,000 nodes, which is impressive. However, when it comes to auto-scaling, Kubernetes wins because Docker doesn't offer that functionality out of the box.

Monitoring container deployments is easier with Kubernetes

This is where Kubernetes has the edge. It has in-built monitoring and logging solutions. With Docker Swarm you'll have to use third-party applications. That isn't necessarily a huge problem, but it does make life ever so slightly more difficult. Whether that slight difficulty outweighs Kubernetes' steeper learning curve, however, is another matter…

Is Kubernetes or Docker Swarm better?

Clearly, Kubernetes is a more advanced tool than Docker Swarm. That's one of the reasons why the Docker team backed down and opened up their enterprise tool for integration with Kubernetes. Kubernetes is simply the software that's defining container orchestration. And that's fine - Docker has cemented its position within the stack of technologies that support software automation and deployment. It's time to let someone else take on the challenge of orchestration.

But although Kubernetes is the more 'advanced' tool, that doesn't mean you should overlook Docker Swarm. If you want to begin deploying container clusters without the need for specific configurations, then don't allow yourself to be seduced by something shinier, something ostensibly more popular. As with everything else in software development, understand and define what job needs to be done - then choose the right tool for the job.

The pets and cattle analogy demonstrates how serverless fits into the software infrastructure landscape

Russ McKendrick
20 Feb 2018
8 min read
When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers. This can be quite a valid conclusion if you are using a public cloud service like AWS, but when it comes to running in your own environment, you can't avoid having to run on a server of some sort. This blog post is an extract from Kubernetes for Serverless Applications by Russ McKendrick.

Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot, as it is quite an easy way to explain the differences between modern cloud infrastructures and a more traditional approach.

The pets, cattle, chickens, insects, and snowflakes analogy

I first came across the pets versus cattle analogy back in 2012, from a slide deck published by Randy Bias. The slide deck was used during a talk Randy Bias gave at the cloudscaling conference on architectures for open and scalable clouds. Towards the end of the talk he introduced the concept of pets versus cattle, which Randy attributes to Bill Baker, who at the time was an engineer at Microsoft. The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago.

Pets: the bare metal servers and virtual machines

Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines:

- We name each server as you would a pet. For example, app-server01.domain.com and database-server01.domain.com.
- When our pets are ill, you take them to the vets. This is much like you, as a system administrator, would reboot a server, check logs, and replace the faulty components of a server to ensure that it is running healthily.
- You pay close attention to your pets for years, much like a server. You monitor for issues, patch them, back them up, and ensure they are fully documented.

There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them - this may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred.

Cattle: the sort of instances you run on public clouds

Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled.

- You have so many cattle in your herd you don't name them; instead they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many to name, so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server.
- When a member of your herd gets sick, you shoot it, and if your herd requires it you replace it. In much the same way, if an instance in your cluster starts to have issues it is automatically terminated and replaced with a replica.
- You do not expect the cattle in your herd to live as long as a pet typically would; likewise, you do not expect your instances to have an uptime measured in years.
- Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of your cluster. If your cluster requires additional resources, you launch more instances, and when you no longer require a resource, the instances are automatically terminated, returning you to your desired state.

Chickens: an analogy for containers

In 2015, Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle:

- Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster, as you can launch multiple containers per instance.
- Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances; they take seconds to launch and can be configured to consume less CPU and RAM.
- Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes.

Insects: an analogy for serverless

Keeping in line with the animal theme, Eric Johnson wrote a blog post for Rackspace which introduced insects. This term was introduced to describe serverless and Functions as a Service. Insects have a much lower life expectancy than chickens; in fact, some insects only have a lifespan of a few hours. This fits in with serverless and Functions as a Service, as these have a lifespan of seconds.

Snowflakes

Around the time Randy Bias gave his talk which mentioned pets versus cattle, Martin Fowler wrote a blog post titled SnowflakeServer. The post described every system administrator's worst nightmare:

- Every snowflake is unique and impossible to reproduce. Just like that one server in the office that was built and not documented by that one guy who left several years ago.
- Snowflakes are delicate. Again, just like that one server - you dread it when you have to log in to it to diagnose a problem, and you would never dream of rebooting it as it may never come back up.

Bringing the pets, cattle, chickens, insects and snowflakes analogy together...

When I explain the analogy to people, I usually sum up by saying something like this: organizations who have pets are slowly moving their infrastructure to be more like cattle. Those who are already running their infrastructure as cattle are moving towards chickens to get the most out of their resources. Those running chickens are going to be looking at how much work is involved in moving their application to run as insects by completely decoupling their application into individually executable components.

But the most important takeaway is this: no one wants to or should be running snowflakes.

Serverless and insects

As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model. When executing this model you, as the end user, do not need to worry about which server your code is executed on, as all of the decisions on placement, server management, and capacity are abstracted away from you - it does not mean that you literally do not need any servers.

Now there are some public cloud offerings which abstract so much of the management of servers away from the end user that it is possible to write an application which does not rely on any user-deployed services and where the cloud provider will manage the compute resources needed to execute your code. Typically these services, which we will look at in the next section, are billed for the resources used to execute your code in per-second increments.

So how does that explanation fit in with the insect analogy? Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded they are cropped, creating several different sizes which will be used to display as thumbnails and mobile-optimized versions on the site.

In the pets and cattle world, this would be handled by a server which is powered on 24/7 waiting for users to upload images. Now this server probably is not just performing this one function; however, there is a risk that if several users all decide to upload a dozen photos each, this will cause load issues on the server where the function is being executed.

We could take the chickens approach, which has several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well; they will be watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests.

Using the insects approach, we would not have any services running at all. Instead, the function should be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where the service was executed, so long as you have your processed images at the end of it.
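To make that contrast concrete, here is a minimal, illustrative Node.js sketch of the two approaches. The 'chicken' is a long-running watcher that polls an S3 bucket for new uploads; the 'insect' is a handler that only exists while an upload event is being processed. The bucket name and the processImage helper are placeholders, and the polling logic is deliberately simplified.

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Placeholder for whatever cropping and resizing you actually do.
async function processImage(bucket, key) {
  console.log(`processing ${bucket}/${key}`);
}

// The 'chickens' approach: a container process that runs 24/7, polling for work
// even when there is nothing to do. You would start this from the container's entrypoint.
function watchForUploads() {
  const seen = new Set();
  setInterval(async () => {
    const { Contents = [] } = await s3.listObjectsV2({ Bucket: 'my-uploads' }).promise();
    for (const obj of Contents) {
      if (!seen.has(obj.Key)) {
        seen.add(obj.Key);
        await processImage('my-uploads', obj.Key);
      }
    }
  }, 5000);
}

// The 'insects' approach: no long-running service at all. The platform calls this
// handler when an upload event fires, and the execution environment disappears afterwards.
exports.handler = async (event) => {
  const record = event.Records[0];
  await processImage(record.s3.bucket.name, record.s3.object.key);
};
```

The watcher consumes resources around the clock; the handler only exists for the seconds it takes to process an upload, which is exactly the insect lifespan described above.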

5 things to remember when implementing DevOps

Erik Kappelman
05 Dec 2017
5 min read
DevOps is a much more realistic and efficient way to organize the creation and delivery of technology solutions to customers. But like practically everything else in the world of technology, DevOps has become a buzzword and is often thrown around willy-nilly. Let's cut through the fog and highlight concrete steps that will help an organization implement DevOps.

DevOps is about bringing your development and operations teams together

This might seem like a no-brainer, but DevOps is often explained in terms of tools rather than techniques or philosophical paradigms. At its core, DevOps is about uniting developers and operators, getting these groups to communicate effectively with each other, and then using this new communication to streamline various processes. This could include a physical change to the layout of an organization's workspace. It's incredible the changes that can happen just by changing the seating arrangements in an office. If you have a very large organization, development and operations might be in separate buildings, separate campuses, or even separate cities. While the efficacy of web-based communication has increased dramatically over the last few years, there is still no replacement for face-to-face daily human interaction. Putting developers and operators in the same physical space is going to increase the rate of adoption and the efficacy of various DevOps tools and techniques.

DevOps is all about updates

Updates can be aimed at expanding functionality or simply fixing or streamlining existing processes. Updates present a couple of problems to developers and operators. First, we need to keep everybody working on the same codebase. This can be achieved by using a variety of continuous integration tools. The goal of continuous integration is to make sure that changes and updates to the codebase are implemented as close to continuously as possible. This helps avoid merging problems that can result from multiple developers working on the same codebase at the same time. Second, these updates need to be integrated into the final product. For this task, DevOps applies the concept of continuous deployment. This is essentially the same thing as continuous integration, but it has to do with deploying changes to the codebase as opposed to integrating them. In terms of importance to the DevOps process, continuous integration and deployment are equally important. Moving updates from a developer's workspace to the codebase to production should be seamless, smooth, and continuous.

Implementing a microservices structure is imperative for an effective DevOps approach

Microservices are an extension of the service-based structure. Basically, a service structure calls for modularization of a solution's codebase into units based on functionality. Microservices take this a step further by implementing a service-based structure in which each service performs a single task. While a service-based or microservice structure is not required to implement DevOps, I have no idea why you wouldn't use one, because microservices lend themselves so well to DevOps.

One way to think of a microservice structure is by imagining an ant hill in which all of the worker ants are microservices. Each ant has a specific set of abilities and is given a task from the queen. The ant then autonomously performs this task, usually gathering food, along with all of its ant friends. Remove a single ant from the pile, nothing really happens. Replace an old ant with a new ant, nothing really happens. The metaphor isn't perfect, but it strikes at the heart of why microservices are valuable in a DevOps framework. If we need to be continuously integrating and deploying, shouldn't we try to impact the codebase as directly as we can? When microservices are in use, changes can be made at an extremely granular level. This allows continuous integration and deployment to really shine.

Monitor your DevOps solutions

In order to continuously deploy, applications need to also be continuously monitored. This allows problems to be identified quickly, and when problems are quickly identified, the total effort required to fix them tends to be smaller. Your application should obviously be monitored from the perspective of whether or not it is working as it currently should, but users also need to be able to give feedback on the application's functionality. When reasonable, this feedback can then be integrated into the application somehow. Monitoring user feedback tends to fall by the wayside when discussing DevOps. It shouldn't. The whole point of the DevOps process is to improve the user experience. If you're not getting feedback from users in a timely manner, it's kind of impossible to improve their experience.

Keep it loose and experiment

Part of the beauty of DevOps is that it can allow for more experimentation than other development frameworks. When microservices and continuous integration and deployment are being fully utilized, it's fairly easy to incorporate experimental changes to applications. If an experiment fails, or doesn't do exactly what was expected, it can be removed just as easily. Basically, remember why DevOps is being used and really try to get the most out of it.

DevOps can be complicated. Boiling anything down to five steps can be difficult, but if you act on these five fundamental principles you will be well on your way to putting DevOps into practice. And while it's fun to talk about what DevOps is and isn't, ultimately that's the whole point - to actually uncover a better way to work with others.

Virtual machines vs Containers

Amit Kothari
17 Oct 2017
5 min read
Virtual machines and containers are pretty similar, but they do possess some important differences, and those differences will dictate which you decide to use. So when you ask a question like 'virtual machines vs containers' there isn't necessarily going to be an outright winner - but there might be a winner for you in a given scenario. Let's take a look at what a virtual machine is, exactly, what a container is, and how they compare - as well as the key differences between the two.

What is a virtual machine?

Virtual machines are a product of hardware virtualization. They sit on top of physical machines with the hypervisor, or virtual machine manager, in between, acting as a layer of abstraction between the virtual machine and the underlying hardware. A virtualized physical machine can host multiple virtual machines, enabling better hardware utilization. Since the hypervisor abstracts the physical machine's hardware, it allows virtual machines to use a different operating system on the same host machine. The host operating system and virtual machine operating system each run their own kernel. All the communication between the virtual machines and the host machine occurs through the hypervisor, resulting in a high level of isolation. This means that if one virtual machine crashes, it will not affect the other virtual machines running on the same physical machine. Although the hypervisor's abstraction layer offers a high level of isolation, it also affects performance. This problem can be solved by using a different virtualization technique.

What is a container?

Containers use lightweight operating system level virtualization. Similar to virtual machines, multiple containers can run on the same host machine. However, containers do not have their own kernel. They share the host machine's kernel, making them much smaller in size compared to virtual machines. They use process level isolation, allowing processes inside a container to be isolated from other containers.

The difference between virtual machines and containers

In his post Containers are not VMs, Mike Coleman uses the analogy of houses and apartment buildings to compare virtual machines and containers. Self-contained houses have their own infrastructure, while apartments are built around shared infrastructure. Similarly, virtual machines have their own operating system, with kernel, binaries, libraries and so on, while containers share the host operating system kernel with other containers. Due to this, containers are much smaller in size, allowing a physical machine to host more containers than virtual machines.

Since containers use lightweight operating system level virtualization instead of a hypervisor, they are less resource intensive compared to virtual machines and offer better performance. Compared to virtual machines, containers are faster, quicker to provision, and easier to scale. Since spinning up a new container is quick and easy, when a patch or an update is required it is easier to start a new container and stop the old one than to update a running container. This allows us to build immutable infrastructure, which is reliable, portable and easy to scale. All of this makes containers a preferred choice for application deployment, especially for teams that are using microservices or a similar architecture, where an application is composed of multiple small services instead of a monolith.

In microservice architecture, an application is built as a suite of independent, self-contained services. This allows teams to work independently of each other and deliver features quicker. However, decomposing applications into multiple parts adds operational complexity and overhead. Containers solve this problem. Containers can serve as a building block in the microservice world, where each service can be packaged and deployed as a container. A container will have everything that is required to run a service: the service code, its dependencies, configuration files, libraries and so on. Packaging a service and all its dependencies as a container makes it easy to distribute and deploy that service. Since the container includes everything that is required to run the service, it can be deployed reliably in different environments. A service packaged as a container will run the same way locally on a developer's machine, in a test environment, and in production.

However, there are things to consider when using containers. Containers share the kernel and other components of the host operating system. This makes them less isolated compared to virtual machines, and thus less secure. Since each virtual machine has its own kernel, we can run virtual machines with different operating systems on the same physical machine. However, since containers share the host operating system kernel, only guest operating systems that can work with the host kernel can be used in a container.

Virtual machines vs containers - in conclusion...

Compared to virtual machines, containers are lightweight, performant and easy to provision. While containers seem to be the obvious choice to build and deploy applications, virtual machines have their own advantages. Compared to physical machines, virtual machines have better tooling and are easier to automate. Virtual machines and containers can also co-exist: organizations with existing infrastructure built around virtual machines can take advantage of containers by deploying them on virtual machines.

Why microservices and DevOps are a match made in heaven

Erik Kappelman
12 Oct 2017
4 min read
What are microservices?

In terms of software, 'services' can be thought of as little chunks of functionality. Services are a part of service-oriented architecture (SOA). Services are stateless, adhere to a contract (shared standards), are autonomous, relatively granular, and should be a 'black box' to the user. Microservices are a logical extension of services: microservices are services that perform only one function. This matches the Unix philosophy, "Do one thing, and do it well." (A minimal sketch of such a service appears later in this post.) But who cares? And what about DevOps? Well, although the title is a cliche, DevOps and a microservices-based architecture are absolutely a match made in heaven. Following the philosophy of fully explaining terms, let's talk a bit about DevOps.

What is DevOps?

DevOps, which comes from the words "development operations," is a process that is used to create software. DevOps is not one specific philosophy or process - there are many variants - but there are some shared features across most of them. DevOps advocates a continuous development process in which as many elements of the process as possible are automated. Each iteration of a product is coded, built, tested, packaged, and released, and then monitored. This is referred to as the DevOps toolchain. When there is a need or desire to upgrade or change functionality or the way a product is designed, the process begins again. The idea is that DevOps should run in a circular fashion, always upgrading and always getting better. There are myriad tools in use right now within various flavors of DevOps. These tools are designed to meld with the DevOps toolchain and have revolutionized the development process for many developers and companies.

Why microservices and DevOps go together

You should already see why microservices and DevOps go together so well. DevOps calls for continuous monitoring, testing, and deployment of software. Microservices are inherently modular, because they are intended to perform a single function. Software that is modular easily fits into the DevOps structure. Incremental changes can be made to parts of a project, perhaps a single microservice. If the service contracts and control mechanisms are properly created, a single microservice should be able to be easily upgraded, built, tested, deployed and monitored without sending a cascading wave of bugs through adjacent services.

DevOps really doesn't make much sense outside of a structure like this. If your software is designed as one behemoth - an interconnected, interdependent ball of wax - changing part of the functionality will 'break' everything. This means that as changes or upgrades are made, almost every change, no matter how big or small, will trigger what amounts to an almost full rewrite or upgrade of the software in question. When applied to this kind of project, most DevOps processes would actually hinder the development process instead of helping. When projects are modularized at a relatively granular level, such as when a project employs a microservice-based structure, DevOps expedites delivery time and improves quality simultaneously. It should be noted that neither a microservice architecture nor DevOps processes are tied to any specific tools or languages. These are philosophies for development and can be applied in many different ways. That being said, there are many continuous integration and deployment tools, as well as many automation tools, that are designed for use within a DevOps framework.
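To ground the idea of a service that does exactly one thing, here is a minimal sketch of a single-purpose microservice written with Node's built-in http module. The endpoint and port are illustrative; a real service would add logging, health checks, and a deployment pipeline around it.

```javascript
// A minimal single-purpose microservice: it does one thing - convert text to uppercase.
// Because it is small and self-contained, it can be rebuilt, tested, and redeployed
// on its own without touching any other part of the system.
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/uppercase') {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ result: body.toUpperCase() }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => console.log('uppercase service listening on port 3000'));
```

Upgrading this service means rebuilding and redeploying only this tiny program, which is exactly the kind of incremental change a DevOps pipeline is built to handle.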
Criticisms of microservices

There are some criticisms of the microservice structure. One criticism is that using microservices does not get rid of the complexities of a traditional program; those complexities are just moved onto the network that the services use to communicate. Stress on the network is another criticism of the microservice architecture. This is because a service architecture distributes the elements of a program or process around a network in various places. In order for these services to perform cohesive functions, they must use the network to communicate. Depending on how many services make up a program or process, this can translate into significant network activity, which then creates problems of its own. Another criticism is that microservices can sometimes become so-called 'nanoservices': services that perform a function so small that the cost of the service actually outweighs its utility. These criticisms should be kept in mind but, in my opinion, they don't amount to enough to undermine the usefulness of microservices in a DevOps environment.

Using microservices in the DevOps process helps fully realize the potential of the continuous integration, testing, and deployment promised by DevOps. Combined, these approaches optimize the development process in a way that should be taken advantage of whenever possible.

How to move from server to serverless in 10 steps

Erik Kappelman
27 Sep 2017
7 min read
If serverless computing sounds a little contrived to you, you’re right, it is. Serverless computing isn't really serverless, well not yet anyway. It would be more accurate to call it serverless development. If you are a backend boffin, or you spend most of your time writing Dockerfiles, you are probably not going to be super into serverless computing. This is because serverless computing allows for applications to consist of chunks of code that do things in response to stimulus. What makes this different that other development is that the chunks of code don’t need to be woven into a traditional frontend-backend setup. Instead, serverless computing allows code to execute without the need for complicated backend configurations. Additionally, the services that provide serverless computing can easily scale an application as necessary, based on the activity the application is receiving. How AWS Lambda supports serverless computing We will discuss Amazon Web Services (AWS) Lambda, Amazon’s serverless computing offering. We are going to go over one of Amazon’s use cases to better understand the value of serverless computing, and how someone can get started. Have an application, build an application, or have an idea for an application. This could also be step zero, but you can’t really have a serverless application without an application. We are going to be looking at a simple abstraction of an app, but if you want to put this into practice, you’ll need a project. Create an AWS account, if you don’t already have one, and set up the AWS Command Line Interface on your machine. Quick Note: I am on OSX and I had a lot of trouble getting the AWS Command Line Interface installed and working. AWS recommends using pip to install, but the bash command never seemed to end up in the right place. Instead I used Homebrew and then it worked fine. Navigate to the S3 on AWS and create two buckets for testing purposes. One is going to be used for uploading, and the other is going to receive uploaded pictures that have been transformed from the other bucket. The bucket used to receive the transformed pictures should have a name of this form “Other buckets name”+“resized”. The code we are using requires this format in order to work. If you really don’t like that, you can modify the code to use a different format. Navigate to the AWS Lambda Management Console and choose the Create Function option, choose Author from scratch, and click the empty box next to the Lambda symbol in order to create a trigger. Choose S3. Now specify the bucket that the pictures are going to be initially uploaded into. Then under the event type choose Object Created (All). Leave the trigger disabled and press the Next button. Give your function a name, and for now, we are done with the console. On your local machine set up a workspace creating a root directory for our project with a node_modules folder. Then install the async and gm libraries. Create a JavaScript file named index.js and copy and paste the code from the end of the blog into the file. It needs to be name index.js for this example to work. There are settings that determine what the function entry point is that can be changed to look for a different filename. The code we are using comes from an example on AWS located here. I recommend you check out their documentation. If we look at the code that we are pasting into our editor we can learn a few things about using Lambda. We can see that there is an aws-sdk in use and that we use that dependency to create an S3 object. 
If we look at the code that we are pasting into our editor, we can learn a few things about using Lambda. We can see that the aws-sdk dependency is in use, and that we use it to create an S3 object. We get the information about the source bucket from the event object that is passed into the main function; this is why we named our buckets the way we did. We can get our uploaded picture using the getObject method of our S3 object, since the S3 file information we want is carried in the event object passed into the main function. The code grabs that file, puts it into a buffer, uses the gm library to resize the image, and then uses the same S3 object, this time specifying the destination bucket, to upload the file.

Now we are ready to ZIP up the root folder and deploy this function to the Lambda function we created earlier. Quick note: while using OSX I had to zip my JS file and node_modules folder directly into a ZIP archive instead of recursively zipping the root folder. For some reason the upload doesn't work unless the zipping is done this way; this is at least true when using OSX. We are going to upload using the Lambda Management Console; if you're fancy, you can use the AWS Command Line Interface. So, get to the management console and choose Upload a .ZIP File, click the upload button, specify your ZIP file, and then press the Save button.

Now we will test our work. Click the Actions drop-down and choose the Configure test event option, then choose the S3 PUT test event and specify the bucket that images will be uploaded to. This creates a test that simulates an upload, and if everything goes according to plan, your function should pass. Profit!
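For reference, the S3 PUT test event that the console generates is simply a JSON document shaped like a real S3 notification, which is why the handler shown below can read event.Records[0].s3.bucket.name and event.Records[0].s3.object.key. The following is an abridged, hypothetical sketch only: the bucket name and object key are placeholders, and the real console template contains additional fields that are omitted here.

// Abridged sketch of an S3 PUT test event (hypothetical values, extra fields omitted)
var sampleEvent = {
    Records: [
        {
            eventSource: "aws:s3",
            eventName: "ObjectCreated:Put",
            s3: {
                bucket: { name: "my-upload-bucket" },   // placeholder source bucket
                object: { key: "example-photo.jpg" }    // placeholder object key
            }
        }
    ]
};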
I hope this introduction to AWS Lambda serves as a primer on serverless development in general. The goal here is to get you started. Serverless computing has some real promise. As a primarily front-end developer, I revel in the idea of serverless anything; I find that the absolute worst part of any development project is the backend. That being said, I don't think that sysadmins will be lining up for unemployment checks tomorrow. Once serverless computing catches on, and maybe grows and matures a little bit, we're going to have a real juggernaut on our hands.

The code below is used in this example and comes from AWS:

// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "resized";
    var dstKey = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        callback("Source and destination buckets are the same.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        callback("Could not determine the image type.");
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        callback(`Unsupported image type: ${imageType}`);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function (err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        callback(null, "message");
    });
};

About the Author

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.

The biggest cloud adoption challenges

Rick Blaisdell
10 Sep 2017
3 min read
The cloud technology industry is growing rapidly as companies come to understand the profitability and efficiency benefits that cloud computing can provide. According to the IDG Enterprise Cloud Computing Survey, 70 percent of U.S. companies have at least one application in the cloud, using public, private, or a combination of various cloud models. In addition, according to the Building Trust in a Cloudy Sky survey, almost 93 percent of organizations across the world use cloud services. Even though cloud adoption is increasing, it's important that companies develop a strategy before moving their data and using cloud technology to increase efficiency. This strategy is especially important because transitioning to the cloud is often a challenging process. If you're thinking of making this transition, here is a list of cloud adoption challenges that you should be aware of.

Technology

It's important to take into consideration the complex issues that can arise with new technology. For example, some applications are not built for the cloud, or carry compliance requirements that cannot be met in a pure cloud environment. In this instance, a solution could be a hybrid environment with appropriately configured security requirements.

People

Moving to the cloud can be met with resistance, especially from people who have spent most of their time managing physical infrastructure. The largest organizations will have a long transition to full cloud adoption, while small companies that are tech savvy will have an easier time making the change. Most modern IT departments will choose an agile approach to cloud adoption, although some employers might not be that experienced in these types of operational changes. The implementation takes time, but you can transform existing operating models to make the cloud more approachable for the company.

Psychological barriers

Psychologically, there will be many questions. Will the cloud be more secure? Can I maintain my SLAs? Will I find the right technical support services? In 2017, cloud providers can meet all of those expectations and, at the same time, reduce overall expenses.

Costs

Many organizations that decide to move to the cloud do not estimate costs properly. Even though the pricing seems simple, the more moving parts there are, the greater the likelihood of incorrect cost estimates. When starting the migration to the cloud, look for tools that will help you estimate cloud costs and ROI, whilst taking into consideration all possible variables.

Security

One of the CIO's main concerns when it comes to moving to the cloud is security and privacy. The management team needs to know whether the cloud provider they plan to work with has a bulletproof environment. This is a big challenge because a data breach could not only put the company's reputation at risk, but could also result in a huge financial loss for the company.

The first step in adopting cloud services is to be able to identify all of the challenges that will come with the process. It is essential to work with the cloud provider to facilitate a successful cloud implementation. Are there any challenges that you consider crucial to a cloud transition? Let us know what you think in the comments section.

About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies, which reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies developing innovative technology strategies.

5 common misconceptions about DevOps

Hari Vignesh
08 Aug 2017
4 min read
DevOps is a transformative operational concept designed to help development and production teams coordinate operations more effectively. In theory, DevOps is focused on cultural changes that stimulate collaboration and efficiency, but the focus often ends up being placed on everyday tasks, distracting organizations from the core principles – and values – that DevOps is built around. This has led many technology professionals to develop misconceptions about DevOps, because they have been part of deployments, or know people who have been involved in DevOps plans, that have strayed from the core principles of the movement. Let's discuss a few of these misconceptions.

We need to employ 'DevOps'

DevOps is not a job title or a specific role. Your organization probably already has senior systems people and senior developers who have many of the traits needed to work in the way that DevOps promotes. With a bit of effort and help from outside consultants, mailing lists, or conferences, you might easily be able to restructure your business around the principles you propose without employing new people, or losing old ones. Again, there is no such thing as a "DevOp" person; it is not a job title. Feel free to advertise for people who work with a DevOps mentality, but there are no DevOps job titles. Oftentimes, good people to consider for the role of a bridge between teams are generalists, architects, and senior systems administrators and developers. Many companies in the past decade have employed a number of specialists; a DNS administrator is not unheard of. You can still have these roles, but you'll need some generalists who have a good background in multiple technologies. They should be able to champion the values of simple systems over complex ones, and begin establishing automation and cooperation between teams.

Adopting tools makes you DevOps

Some who have recently caught wind of the DevOps movement believe they can instantly achieve this nirvana of software delivery simply by following a checklist of tools to implement within their team. The assumption is that if they purchase and implement a configuration management tool like Chef, a monitoring service like Librato, or an incident management platform like VictorOps, then they've achieved DevOps. But that's not quite true. DevOps requires a cultural shift beyond simply implementing a new lineup of tools. Each department, technical or not, needs to understand the cultural shift behind DevOps. It's one that emphasizes empathy and better collaboration. It's more about people.

DevOps emphasizes continuous change

There's no way around it: you will need to deal with more change and release tasks when integrating DevOps principles into your operations, since the focus is placed heavily on accelerating deployment through development and operations integration, after all. This perception comes out of DevOps' initial popularity among web app developers. However, most businesses will not face change that is so frequent, and they do not need to worry about continuous change deployment just because they are adopting DevOps.

DevOps does not equal "developers managing production"

DevOps means development and operations teams working together collaboratively to put the operations requirements around stability, reliability, and performance into development practices, while at the same time bringing development into the management of the production environment (e.g.
by putting them on call, or by leveraging their development skills to help automate key processes). It doesn't mean a return to the laissez-faire "anything goes" model, where developers have unfettered access to the production environment 24/7 and can change things as and when they like.

DevOps eliminates traditional IT roles

If, in your DevOps environment, your developers suddenly need to be good system admins, change managers, and database analysts, something went wrong. Treating DevOps as a movement that eliminates traditional IT roles will put too much strain on workers. The goal is to break down collaboration barriers, not to ask your developers to do everything. Specialized skills play a key role in supporting effective operations, and traditional roles remain valuable in DevOps.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

The most important skills you need in DevOps

Rick Blaisdell
19 Jul 2017
4 min read
During the last couple of years, we've seen how DevOps has exploded and become one of the most competitive differentiators for every organization, regardless of size. When talking about DevOps, we refer to agility and collaboration, the keys that unlock a business's success. However, to make it work for your business, you first have to understand how DevOps works, and what skills are required for adopting this agile business culture. Let's look at this in more detail.

DevOps culture

Leaving the benefits aside, here are the three basic principles of a successful DevOps approach:

Well-defined processes
Enhanced collaboration across business functions
Efficient tools and automation

DevOps skills you need

Recently, I came across an infographic showing the top positions that technology companies are struggling to fill, and DevOps was number one on the list. Surprising? Not really. If we look at the skills required for a successful DevOps methodology, we will understand why finding a good DevOps engineer is akin to finding a needle in a haystack. Besides communication and collaboration, which are the most obvious skills that a DevOps engineer must have, here is what draws the line between success and failure:

Knowledge of infrastructure – whether we are talking about datacenter-based or cloud infrastructure, a DevOps engineer needs to have a deep understanding of different types of infrastructure and their components (virtualization, networking, load balancing, etc.).
Experience with infrastructure automation tools – given that DevOps is mainly about automation, a DevOps engineer must have the ability to implement automation tools at any level.
Coding – when talking about coding skills for DevOps engineers, I am not talking about just writing code, but rather delivering solutions. In a DevOps organization, you need well-experienced engineers who are capable of delivering solutions.
Experience with configuration management tools – tools such as Puppet, Chef, or Ansible are mandatory for optimizing software deployment, and you need engineers with the know-how.
Understanding continuous integration – being an essential part of a DevOps culture, continuous integration is the process that increases engagement across the entire team and allows source code updates to be run whenever required.
Understanding security incident response – security is the hot button for all organizations, and one of the most pressing challenges to overcome. Having engineers who have a strong understanding of how to address various security incidents and develop a recovery plan is mandatory for creating a solid DevOps culture.

Besides the above skills that DevOps engineers should have, there are also skills that companies need to adopt:

Agile development – an agile environment is the foundation on which the DevOps approach has been built. To get the most out of this innovative approach, your team needs strong collaboration capabilities to improve delivery and quality. You can create your dream team by teaching different agile approaches such as Scrum, Kaizen, and Kanban.
Process reengineering – forget everything you knew. This is one good piece of advice. The DevOps approach has been developed to polish and improve the traditional software development lifecycle, but also to highlight and encourage collaboration among teams, so an element of unlearning is required.
The DevOps approach has changed the way people collaborate with each other, improving not only processes, but products and services as well. Here are the benefits:

Ensure faster delivery times – every business owner wants to see their product or service on the market as soon as possible, and the DevOps approach manages to do that. Moreover, since you decrease the time-to-market, you will increase your ROI; what more could you ask for?
Continuous release and deployment – with strong continuous release and deployment practices, the DevOps approach is the perfect way to ensure the team is continuously delivering quality software within shorter timeframes.
Improve collaboration between teams – there has always been a gap between the development and operations teams, a gap that disappeared once DevOps was born. Today, in order to deliver high-quality software, the devs and ops are forced to collaborate, share, and revise strategies together, acting as a single unit.

Bottom line, DevOps is an essential approach that has changed not only results and processes, but also the way in which people interact with each other. Judging by the way it has progressed, it's safe to assume that it's here to stay.

About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies, which reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.

Will Oracle become a key cloud player, and what will it mean to development & architecture community?

Phil Wilkins
13 Jun 2017
10 min read
This sort of question can provoke some emotive reactions; despite the stereotype, many of us technologists can get pretty passionate about our views. So let me put my cards on the table. My first book as an author is about Oracle middleware (Implementing Oracle Integration Cloud). I am an Oracle Ace Associate (soon to be a full Ace), which is comparable to a Java Rockstar, Microsoft MVP, or SAP Mentor. I work for Capgemini as a Senior Consultant; as a large SI we work with many vendors, so I need to have a feel for all the options, even though I specialise in Oracle now. Before I got involved with Oracle I worked primarily with open source technologies, particularly JBoss and Fuse (before and after both were absorbed into RedHat), and I have technically reviewed a number of open source books for Packt. So I should be able to provide a balanced argument. So onto the …

A lot has been said about Oracle's co-founder Larry Ellison and his position on cloud technologies, most notably his rubbishing of the cloud in 2008. This is ironic, since those of us who remember the late 90s will recall Oracle heavily committing to a concept called the Network Computer, which could have led to a more cloud-like ecosystem had the conditions been right.

"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. ... The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"[1]

Since then we've seen a slow change. The first cloud offerings we saw came in the form of Mobile Cloud Service, which provided a Mobile Backend as a Service (MBaaS). By this time, Oracle's extensive programme to rationalize its portfolio and bring the best ideas and designs from PeopleSoft, E-Business Suite, and Siebel together into a single cohesive product portfolio had started to show progress – Fusion Applications. Fusion Applications, built on the WebLogic core and exploiting other investments, provided the company with a product that had the potential to become cloud enabled. If that initiative hadn't been started when it did, then Oracle's position might look very different today. From a solid, standardised, container-based product portfolio, the transition to cloud became a great deal easier, facilitated by the arrival of Oracle Database 12c, which provided the means to easily make the data storage layer multi-tenant. This combination gave Oracle the ability to sell ERP modules as SaaS, and meant that Oracle could start to think about competing with the SaaS darlings of Salesforce, NetSuite, and Workday. However, ERPs don't live in isolation. Any organisation has to deal with its oddities, special needs, and departmental solutions, as well as those systems that are unique and differentiate companies from their competition. This has driven the need to provide PaaS and IaaS as well. Not only that, Oracle themselves admitted that to make SaaS as cost effective as possible they needed to revise the infrastructure and software platform to maximise application density. This is a lesson that Amazon with AWS understood from the outset and has realized well. Oracle has also had the benefit of being a later starter: it has looked at what has and hasn't worked, and used its deep pockets to get the best skills to build the ideal answers, bypassing many of the mistakes and issues the pioneers had to go through.
This brought us to the state of a couple of years ago, where Oracle's core products had a cloud existence and Oracle were making headway winning new mid-market customers. After all, Oracle ERP is seen as something of a Rolls-Royce of ERPs, globally capable and well tested, and now cost-accessible to more of the mid-market. So as an ERP vendor Oracle will continue to be a player, and if a challenger emerges, Oracle's pockets are deep enough to buy the competition, which is what happened with NetSuite. This may be very interesting to enterprise architects who need to take off-the-shelf building blocks and provide a solid corporate foundation, but for those of us who prefer to build and do something different, it is not so exciting.

In the last few years we have seen a lot of talk about digital disruptors and the need for serious agility (as in the ability to change and react, rather than the development ethos). To have this capability you need to be able to build and radically change solutions quickly, and yet still work with those core backbone accounting tasks. To use a Gartner expression, we need to be bimodal[2] to innovate. Application packages change comparatively slowly (they need to be slow and steady if you want to show that your accounting isn't going to look like Enron[3] or Lehman Brothers[4]). With this growing need to drive innovation and change ever faster, we have seen some significant changes in the way things tend to be done. You could almost say that in the process of trying to disrupt existing businesses through IT, we have achieved the disruption of software development itself. With the facilitation of the cloud, particularly IaaS, and the low cost of starting up and trying new solutions (growing them if they succeed, or mothballing them with minimal capital loss or delay if they don't), we have seen:

The pace of service adoption accelerate exponentially, meaning the rate of scale-up and dynamic demand, particularly for end-user-facing services, has needed new techniques for scaling.
Standards move away from being formulated by committees of companies wanting to influence or dominate a market segment, which produced some great ideas (UDDI as a concept was fabulous) but often very unwieldy results (ebXML, SOAP, UDDI for example), towards simpler standards that have largely evolved through simplicity and quickly recognized value (JSON, REST) to become de facto standards.
New development paradigms that enable large solutions to be delivered whilst still keeping delivery on short cycles and supporting organic change (Agile, microservices).
Continuous integration and DevOps breaking down organisational structures and driving accountability: you build it, you make it run.
The open source business model becoming the predominant route into the industry for a new software technology without needing deep pockets for marketing, alongside acceptance that open source software can be as well supported as a closed source product.

For a long time, despite Oracle being the 'guardian' of Java and, a little more recently, MySQL, they haven't really managed to establish themselves as a 'cool' vendor. If you wanted a cool vendor, you'd historically probably look at RedHat, one of the first businesses to really get open source and community thinking. The perception, at least, has been that Oracle acquired these technologies either as a byproduct of a bigger game or as a way of creating an 'on ramp' to their bigger, more profitable products.
Oracle have started to recognise that to be seriously successful in the cloud, like AWS, you need to be pretty pervasive, and you need to connect not only with the top of the decision tree but also with those at the code face. To do that you need a bit of the 'cool' factor. That means doing things beyond just the database and the core middleware, areas which are more and more frequently subject to potential disruption from the likes of Hadoop and big data, NoSQL, and things like Kafka in the middleware space. This also fits with the narrative that to do well with SaaS you need at least a very good IaaS, and the way Oracle has approached SaaS, you definitely need a good PaaS; so they might as well make these commercial offerings too. This has resulted in Oracle moving from half a dozen cloud offerings to something in the order of nearly forty offerings classified as PaaS, plus a range of IaaS offerings that will appeal to developers and architects, from direct support for Docker through Container Cloud, which provides a simplified Docker model, to Kafka, Node.js, MySQL, NoSQL, and others. The web tier is pretty interesting with JET, which Oracle positions as an enterprise-hardened, certified take on frameworks such as Angular, React, and Express, with extra tooling, and which has been made available as open source. So the technology options are becoming a lot more interesting.

Oracle are also starting to target new startups, looking to get new organisations onto the Oracle platform from day one, in the same way that it is easy for a startup to leverage AWS. Oracle have made some commitment to the Java developer community through JavaOne, which runs alongside its big brother conference, OpenWorld. They are now seriously trying to reach out to the hardcore development community (and not just Java, as the new Oracle cloud offerings are definitely polyglot) through Oracle Code. I was fortunate enough to present at the London occurrence of the event (see my blog here). What Oracle has not yet quite reached is the point of being clearly as easy to start working with as AWS and Azure. Yes, Oracle provide new sign-ups with 300 dollars of credit, but when you have a reputation (deserved or otherwise) for being expensive, that isn't necessarily going to get people onboard in droves, compared with, say, AWS's free micro-instance for a year.

Conclusion

In all of this, I am of the view that Oracle are making headway; they are recognising what needs to be done to be a player. I have said in the past, and I believe it is still true, that Oracle is like an oil tanker or aircraft carrier: it takes time to decide to turn, and turning isn't quick, but once a course is set, a real head of steam and momentum is built, and I wouldn't want to be in the company's path. So let's look at some hard facts: Oracle's revenues remain pretty steady, and, surprisingly, Oracle showed up in the last week on LinkedIn's top companies list[5]. Oracle isn't going to just disappear; its database business alone will keep it alive for a very long time to come. Its SaaS business appears to be on a good trajectory, although more work on API enablement needs to take place. As an IaaS and PaaS technology provider, Oracle appear to be getting a handle on things.
Oracle is going to be attractive to end-user executives, as it is one of the very few vendors that covers all tiers of cloud from IaaS through PaaS to SaaS, providing the benefits of traditional hosting when needed as well as fully managed solutions and the benefits those bring. Oracle does still need to overcome some perception challenges; in many respects Oracle are seen in the same way Microsoft were in the 90s and 2000s, as something of a necessary evil that can be expensive.

[1] http://www.businessinsider.com/best-larry-ellison-quotes-2013-4?op=1&IR=T/#oud-computing-maybe-im-an-idiot-but-i-have-no-idea-what-anyone-is-talking-about-1
[2] http://www.gartner.com/it-glossary/bimodal/
[3] http://www.investopedia.com/updates/enron-scandal-summary/
[4] https://en.wikipedia.org/wiki/Bankruptcy_of_Lehman_Brothers
[5] https://www.linkedin.com/pulse/linkedin-top-companies-2017-where-us-wants-work-now-daniel-roth