
Tech Guides - Cloud & Networking

65 Articles

DevOps: An Evolution and a Revolution

Julian Ursell
24 Jul 2014
4 min read
Are we DevOps now? The system-wide software development methodology that breaks down the problematic divide between development and operations is still at the stage where enterprises implementing the idea are probably asking that question, working out whether they've reached the endgame of effective collaboration between the two spheres and a systematic automation of their IT service infrastructure. Considered to be the natural evolution of Agile development and practices, the idea and business implementation of DevOps is rapidly gaining traction and adoption in significant commercial enterprises, and we're very likely to see a saturation of DevOps implementations in serious businesses in the near future. The benefits of DevOps for scaling and automating the daily operations of businesses are wide-reaching and increasingly crucial, both from the perspective of enabling rapid software development and of delivering products to clients who demand and need ever more frequent releases of up-to-date applications. The movement towards DevOps systems runs in close synchronization with the growing demand for experiencing and accessing everything in real time, as it produces the level of infrastructural agility needed to roll out release after release with minimal delays. DevOps has been adopted prominently by big hitters such as Spotify, which embraced the DevOps culture throughout its formative years and still holds to that philosophy now.

The idea that DevOps is an evolution is not a new one. However, there's also an argument to be made that the actual evolution from a non-DevOps system to a DevOps one entails a revolution in thinking. From a software perspective, DevOps has inspired a minor technological revolution of its own, spawning multiple technologies geared towards enabling DevOps workflows. Docker, Chef, Puppet, Ansible, and Vagrant are all key tools in this space, and they vastly increase the productivity of developers and engineers working with software at scale.

However, it is one thing to mobilize DevOps tools and implement them physically in a system (not easy in itself), but it is another thing entirely to turn the thinking of an organization around to a collaborative culture where developers and administrators live and breathe in the same DevOps atmosphere. As a way of thinking, it requires a substantial cultural overhaul and a breaking down of entrenched programming habits and the silo-ization of the two spheres. It's not easy to transform the day-to-day mindset of a developer so that they incorporate thinking in ops (monitoring, configuration, availability), or vice versa of a systems engineer so that they think in terms of design and development. One can imagine it is difficult to cultivate this sort of culture and atmosphere within a large enterprise with many moving parts, as opposed to a startup which may have the “day zero” flexibility to employ a DevOps approach from the roots up. To reach the “state” of DevOps is a long journey, and one that involves a revolution in thinking. From a systematic as well as a cultural point of view, it takes a considerable amount of ground-breaking to shatter what is sometimes a monolithic wall between development and operations.
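To make the tooling side of that revolution concrete, here is a minimal sketch of scripted, repeatable environment management, the kind of automation tools like Docker enable. It is an illustration rather than anything from a specific DevOps stack, and it assumes the third-party Docker SDK for Python ("docker" on PyPI) is installed and a Docker daemon is running locally:

```python
# A minimal sketch of scripted environment management. Assumes the
# third-party Docker SDK for Python ("docker" on PyPI) is installed
# and a Docker daemon is running locally.
import docker

client = docker.from_env()

# Start a throwaway container, run one command, capture its output, and
# clean the container up afterwards: a repeatable, scriptable environment
# rather than a hand-configured one.
output = client.containers.run("alpine:3.19", "echo hello from a container", remove=True)
print(output.decode().strip())
```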
But for organizations that realize they need the responsiveness to adapt to clients on demand, and that have the foresight to put in place system mechanics that allow them to scale their services in the future, the long-term benefits of a DevOps revolution are invaluable. Continuous and automated deployment, shorter testing times, consistent application monitoring and performance visibility, flexibility when scaling, and a greater margin for error all stem from a successful DevOps implementation. On top of that, one survey showed that engineers working in a DevOps environment spent less time firefighting and more productive time focusing on self-improvement, infrastructure improvement, and product quality. The idea that there is a point where engineers can declare “we're DevOps now!”, however, is a bit of a misconception, because DevOps is more than a matter of sharing common tools, and there will be times when keeping the bridge between devs and ops stable and productive is challenging. There is always the potential for new engineers joining an organization to dilute the DevOps culture, and DevOps engineers don't grow overnight. It is an ongoing philosophy, and as much an evolution as it is a revolution worth having.


Things to Consider When Migrating to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
After the decision is made to make use of a cloud solution like Amazon Web Services or Microsoft Azure, there is one main question that needs to be answered: “What's next?” There are many factors to consider when migrating to the cloud, and this post will discuss the major steps for completing the transition.

Gather background information

Before getting started, it's important to have a clear picture of what needs to be accomplished in order to call the transition a success. Keeping the following questions at the forefront during the planning stages will help guide your process and ensure the success of the migration.

What are the reasons for moving to the cloud? There are many benefits to moving to the cloud, and it is important to know what the focus of the transition should be. If cost savings are the primary driver, vendor choice may be important: prices vary between vendors, as do the support services on offer, which might make a difference in future iterations. In other cases, the elasticity of hardware may be the main appeal, in which case it will be important to ensure that the customization options are available at the desired level.

Which applications are being moved? When beginning the migration process, it is important to make sure that the scope of the effort is clear. Consider moving data and applications to the cloud selectively in order to ease the transition. Once the organization has completed a successful small-scale migration into the cloud, a second iteration of the process can take care of additional applications.

What is the anticipated cost? A cloud solution will have variable costs associated with it, but it is important to have some estimation of what is expected. This will help when selecting vendors, and it will provide guidance when configuring the system.

What is the long-term plan? Is the new environment intended to eventually replace the legacy system, or to work alongside it? Begin to think about the plan beyond the initial migration, and ensure that the selected vendor provides service guarantees that may become requirements in the future, like disaster recovery options or automatic backup services.

Determine your actual cloud needs

To maximize the benefits of using the cloud, ensure that your resources are sufficient for your needs. Cloud computing services are billed based on actual usage, including processing power, storage, and network bandwidth. Configuring too few nodes will limit the ability to support the required applications, and too many nodes will inflate costs. Determine the list of applications and features that need to be present in the selected cloud vendor. Some vendors include backup services or disaster recovery options as add-on services that will impact the cost, so it is important to decide whether or not these services are necessary. A benefit with most vendors is that these services are extremely configurable, so subscriptions can be modified. However, it is important to choose a vendor with packages that suit your current and future needs as much as possible, since transitioning between vendors is not typically desirable.
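To make the sizing and cost question concrete, here is a back-of-the-envelope estimate in Python. Every rate and figure in it is an invented placeholder rather than a real vendor price, so substitute your own numbers:

```python
# A back-of-the-envelope cost model for usage-based billing. Every rate and
# figure here is an invented placeholder; substitute your vendor's prices.
HOURLY_NODE_RATE = 0.10   # assumed $ per node-hour of compute
STORAGE_RATE = 0.023      # assumed $ per GB-month of storage
EGRESS_RATE = 0.09        # assumed $ per GB of outbound transfer

def monthly_estimate(nodes: int, storage_gb: float, egress_gb: float) -> float:
    """Estimate a month's bill: compute + storage + network bandwidth."""
    compute = nodes * 24 * 30 * HOURLY_NODE_RATE  # 30-day month assumed
    return compute + storage_gb * STORAGE_RATE + egress_gb * EGRESS_RATE

# Too few nodes limits the applications you can support; too many inflates cost.
for nodes in (2, 4, 8):
    print(f"{nodes} nodes: ${monthly_estimate(nodes, storage_gb=500, egress_gb=100):,.2f}")
```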
Implement security policies

Since the data and applications in the cloud are accessed over the Internet, it is of the utmost importance to ensure that all available vendor security policies are implemented correctly. In addition to the main access policies, determine whether data security is a concern. Sensitive data such as PII or PCI may be subject to regulations that impact data encryption rules, especially when the data is accessed through the cloud. Ensure that the selected vendor is reliable in order to safeguard this information properly. In some cases, applications that are being migrated will need to be refactored so that they will work in the cloud. Sometimes this means adjusting connection information or networking protocols; in other cases, it means adjusting access policies or opening ports. In all cases, a detailed plan needs to be made at the networking, software, and data levels in order to make the transition smooth.

Let's get to work!

Once all of the decisions have been made and the security policies have been established and implemented, the data appropriate for the project can be uploaded to the cloud. After the data is transferred, it is important to confirm that everything was successful by performing data validation and testing the data access policies; a simple approach is sketched below. At this point, everything will be configured, and any application-specific refactoring or testing can begin. In order to ensure the success of the project, consider hiring a consulting firm with cloud experience that can help guide the process. In any case, the vendor, virtual machine specifications, configured applications and services, and privacy settings must be carefully considered in order to ensure that the cloud services provide the solution necessary for the project. Once the initial migration is complete, the plan can be revised in order to facilitate the migration of additional datasets or processes into the cloud environment.
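As an illustration of that validation step, here is a minimal checksum comparison sketch using only the Python standard library; the directory names are hypothetical placeholders:

```python
# A minimal post-migration validation sketch using only the standard library:
# hash each source file and compare it against a re-downloaded copy.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_mismatches(source_dir: str, fetched_dir: str) -> list:
    """Return relative paths whose migrated copy is missing or differs."""
    mismatches = []
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            dst = Path(fetched_dir) / src.relative_to(source_dir)
            if not dst.is_file() or sha256(src) != sha256(dst):
                mismatches.append(str(src.relative_to(source_dir)))
    return mismatches

# Example: compare the original data against copies pulled back from the cloud.
print(find_mismatches("data/original", "data/downloaded_from_cloud"))
```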
About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry where her focus is on designing and developing big data analytics for the Hadoop ecosystem.


Buying versus Renting: The Pros and Cons of Moving to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
Convenience

One major benefit of the IaaS model is the promise of elasticity to support unforeseen demand. This means that the cloud vendor will provide the ability to quickly and easily scale the provided resources up or down, based on actual usage requirements. This typically means that an organization can plan for the “average” case instead of the “worst case” of usage, simultaneously saving on costs and preventing outages. Additionally, since the systems provided through cloud vendors are usually virtual machines running on the vendor's underlying hardware, the process of adding new machines, increasing disk space, or subscribing to new services is usually just a change through a web UI, instead of a complicated hardware or software acquisition process. This flexibility is appealing because it significantly reduces the waiting time required to support a new capability. However, this automation is sometimes a hindrance to administrators and developers who need to access the low-level configuration settings of certain software. Additionally, since the services are offered through a virtualized system, continuity in the underlying environment can't be guaranteed, so some applications, such as benchmarking tools, may not be suitable for that type of environment.

Cost

One appealing factor in the transition to the cloud is cost, but in certain situations using the cloud may not actually be cheaper. Before making a decision, your organization should evaluate the following factors to make sure the transition will be beneficial. One major consideration is the impact on your organization's budget. If the costs are transitioned to the cloud, they will usually count as operational expenditures rather than capital expenditures, which in some situations can make a difference when trying to get the budget for the project approved. Additionally, some savings may come in the form of reduced maintenance and licensing fees; these expenditures are absorbed into the monthly cost rather than being an upfront requirement. When subscribing to the cloud, you can disable any unnecessary resources on demand, reducing costs. In the same situation with real hardware, the servers would be required to remain on 24/7 in order to provide the same access benefits. On the other hand, consider the size of the data. Vendors charge for moving data into or out of the cloud, in addition to the charge for storage, and in some cases the data transfer time alone would prohibit the transition. Also, the previously mentioned elasticity benefits that draw some people to the cloud, such as scaling up automatically to meet unexpected demand, can have an unexpected impact on the monthly bill. These costs are sometimes difficult to predict, and since the cloud computing pricing model is based on usage, it is important to weigh the possibility of an unanticipated hefty bill against an initial hardware investment; a rough break-even sketch follows.
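To illustrate that rent-versus-buy trade-off, here is a rough break-even calculation in Python. All of the figures are invented placeholders, not real vendor or hardware prices:

```python
# A rough rent-versus-buy break-even sketch. All figures are invented
# placeholders for illustration, not real vendor or hardware prices.
CLOUD_MONTHLY = 1200.0      # assumed all-in cloud bill per month
HARDWARE_UPFRONT = 30000.0  # assumed purchase price of equivalent servers
ONPREM_MONTHLY = 400.0      # assumed ongoing power, space, and admin cost

def breakeven_months() -> float:
    """Months until cumulative cloud spend overtakes buying and running your own."""
    # Each month the cloud costs (CLOUD_MONTHLY - ONPREM_MONTHLY) more than
    # owned hardware, so the upfront purchase pays for itself at this point.
    return HARDWARE_UPFRONT / (CLOUD_MONTHLY - ONPREM_MONTHLY)

print(f"Owning breaks even after roughly {breakeven_months():.0f} months")
# With these numbers, renting wins for short-lived projects and buying wins
# if the system will run for years at a steady load.
```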
Reliability

Most cloud vendors guarantee service availability or access to customer support, which places that burden on the vendor rather than on the project's IT department. Similarly, most cloud vendors provide backup and disaster recovery options, either as add-ons or built into the main offering. This can be a benefit for smaller projects that have the requirement but do not have the resources to support two full clusters internally. However, even with these guarantees, vendors still need to perform routine maintenance on their hardware. Some server-side issues will result in virtual machines being disabled or relocated, usually communicated with some advance notice. In certain cases this will cause interruptions and require manual intervention from the IT team.

Privacy

All data and services that are transitioned into the cloud will be accessible from anywhere via the web, for better or worse. Under this model, the technique of isolating the hardware on its own private network or behind a firewall is no longer possible. On the positive side, this means that everyone on the team will be able to work using any Internet-connected device. On the negative side, this means that every precaution needs to be taken so that the data stays safe from prying eyes. For some organizations, the privacy concerns alone are enough to keep projects out of the cloud. Even assuming that the cloud can be made completely secure, stories in the news about data loss and password leakage will continue to project a negative perception of inherent danger. It is important to document all precautions being taken to protect the data and to make sure that all affected parties in the organization are comfortable moving to the cloud.

Conclusion

The decision of whether or not to move into the cloud is an important one for any project or organization. The benefits of flexible hardware requirements, built-in support, and general automation must be weighed against the drawbacks of decreased control over the environment and privacy concerns.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry where her focus is on designing and developing big data analytics for the Hadoop ecosystem.


3 Reasons Why "the Cloud" Is a Terrible Metaphor (and One Why It Isn't)

Sarah
01 Jul 2014
4 min read
I have a lot of feelings about “the cloud” as a metaphor for networked computing. All my indignation comes too late, of course. I've been having this rant for a solid four years, and that ship has long since sailed: the cloud is here to stay. As a figurative expression for how we compute these days, it's proven to have far more sticking power than, say, the “information superhighway”. (Remember that one?) Still, we should always be careful about the ways we use figurative language. Sure, you and I know we're really talking about odd labyrinths of blinking lights in giant refrigerator buildings. But does your CEO? I could talk a lot about the dangers of abstracting away our understanding of where our data actually is and who has the keys. But I won't, because I have even better arguments than that. Here are my three reasons why “the cloud” is a terrible metaphor:

1. Clouds are too easy to draw.

Anyone can draw a cloud. If you're really stuck, you just draw a sheep and then erase the black bits. That means you don't have to have the first clue about things like SaaS/PaaS/IaaS or local persistent storage to include “the cloud” in your PowerPoint presentation. If you have to give a talk in half an hour about the future of your business, clouds are even easier to draw than Venn diagrams about morale and productivity. Had we called it “Calabi–Yau Manifold Computing”, the world would have saved hundreds of man-hours spent in nonsensical meetings. The only thing sparing us from a worse fate is the stalling confusion that comes from trying to combine slide one (“The Cloud”) and slide two (“Blue Sky Thinking!”).

2. Hundreds of Victorians died from this metaphor.

Well, okay, not exactly. But in the nineteenth century, the Victorians had their own cloud concept: the miasma. The basic tenet was that epidemic illnesses were caused by bad air in places too full of poor people wearing fingerless gloves (for crime). It wasn't until John Snow pointed to the infrastructure that people worked out where the disease was coming from. Snow mapped the pattern of pipes delivering water to infected areas and demonstrated that germs at one pump were causing the problem. I'm not saying our situation is exactly analogous. I'm just saying that if we're going to do the cloud metaphor again, we'd better be careful of metaphorical cholera.

3. Clouds might actually be alive.

Some scientists reckon that the mechanism that lets clouds store and release precipitation is biological in nature. If this understanding becomes widespread, the whole metaphor is going to change underneath us. Kids in school who've managed to convince the teacher to let them watch a DVD instead of doing maths will get edu-tained about it. Then we're all going to start imagining clouds as moving colonies of tiny little cartoon critters. Do you want to think about that every time you save pictures of your drunken shenanigans to your Dropbox?

And one reason why it isn't a bad metaphor at all:

1. Actually, clouds are complex and fascinating.

Quick pop quiz: what's the difference between cirrus fibratus and cumulonimbus? If you know the answer to that, you're most likely either a meteorologist, or you're overpaid to sit at your desk googling the answers to rhetorical questions. In the latter case, you'll have noticed that the Wikipedia article on clouds is about seventeen thousand words long. That's a lot of metaphor. Meteorological study helps us to track clouds as they move from one geographic area to another, affecting climate, communications, and social behaviour. Through careful analysis of their movements and composition, we can make all kinds of predictions about how our world will look tomorrow. The important point came when we stopped imagining chariots and thunder gods, and started really looking at what lay behind the pictures we'd painted for ourselves.


What is ZeroVM?

Lars Butler
30 Jun 2014
6 min read
ZeroVM is a lightweight virtualization technology based on Google Native Client (NaCl). While it shares some similarities with traditional hypervisors and container technologies, it is unique in a number of respects. Unlike KVM and LXC, which provide an entire virtualized operating system environment, it isolates single processes and provides no operating system or kernel. This allows instances to start up in a very short time: about five milliseconds. Combined with a high level of security and zero execution overhead, ZeroVM is well suited to ephemeral processes running untrusted code in multi-tenant environments. There are of course some limitations inherent in the design, and ZeroVM cannot be used as a drop-in replacement for something like KVM or LXC. These limitations, however, were deliberate design decisions, necessary in order to create a virtualization platform specifically for building cloud applications.

How ZeroVM is different to other virtualization tools

Blake Yeager and Camuel Gilyadov gave a talk at the 2014 OpenStack Summit in Atlanta which summed up nicely the main differences between hypervisor-based virtual machines (KVM, Xen, and so on), containers (LXC, Docker, and so on), and ZeroVM:

              Traditional VM   Container         ZeroVM
Hardware      Shared           Shared            Shared
Kernel/OS     Dedicated        Shared            None
Overhead      High             Low               Very low
Startup time  Slow             Fast              Fast
Security      Very secure      Somewhat secure   Very secure

Traditional VMs and containers provide a way to partition and schedule shared server resources for multiple tenants. ZeroVM accomplishes the same goal using a different approach and with finer granularity. Instead of running one or more application processes in a traditional virtual machine, applications written for ZeroVM must be decomposed into microprocesses, and each one gets its own instance. The advantage in this case is that you can avoid long-running VMs/processes which accumulate state (leading to memory leaks and cache problems). The disadvantage, however, is that it can be difficult to port existing applications. Each process running on ZeroVM is a single stateless unit of computation (much like a function in the “purely functional” sense; more on that to follow), and applications need to be structured specifically to fit this model. Some applications, such as long-running server applications, would arguably be impossible to re-implement entirely on ZeroVM, although some parts could be abstracted away to run inside ZeroVM instances. Applications that are predominantly parallel and involve many small units of computation are better suited to run on ZeroVM.

Determinism

ZeroVM provides a guarantee of functional determinism. What this means in practice is that with a given set of inputs (parameters, data, and so on), outputs are guaranteed to always be the same. This works because there are no sources of entropy. For example, the ZeroVM toolchain includes a port of glibc with a custom implementation of time functions, such that time advances in a deterministic way for CPU and I/O operations. No state is accumulated during execution, and no instance can be reused. The ZeroVM Run-Time environment (ZRT) does provide an in-memory virtual file system which can be used to read and write files during execution, but all writes are discarded when the instance terminates unless an output “channel” is used to pipe data to the host OS or elsewhere.
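As a loose illustration of what functional determinism means (plain Python, not the actual ZeroVM toolchain), consider a stateless unit of computation with explicit inputs and outputs and no sources of entropy:

```python
# Illustrative only: plain Python, not the ZeroVM toolchain. A ZeroVM-style
# unit of computation behaves like a pure function: explicit input bytes in,
# output bytes out, with no clocks, randomness, or ambient state to draw on.
def word_count(data: bytes) -> bytes:
    counts = {}
    for word in data.split():
        counts[word] = counts.get(word, 0) + 1
    # Sorting keeps the output byte-for-byte identical across runs.
    return b"\n".join(b"%s %d" % (w, n) for w, n in sorted(counts.items()))

# Same input, same output: every time, on any host.
assert word_count(b"to be or not to be") == word_count(b"to be or not to be")
print(word_count(b"to be or not to be").decode())
```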
Channels and I/O

“Channels” are the basic I/O abstraction for ZeroVM instances. All I/O between the host OS and ZeroVM must occur over channels, and channels must be declared explicitly in advance. On the host, a channel can map to a file, character device, pipe, or socket. Inside an instance, all channels are presented as files that can be written to and read from, including devices like stdin, stdout, and stderr. Channels can also be used to connect multiple instances together to create arbitrary multi-stage job pipelines. For example, a MapReduce-style search application with multiple filters could be implemented on ZeroVM by writing each filter as a separate application or script and piping data from one to the next. (A loose sketch of this pipeline idea appears at the end of this article.)

Security

ZeroVM has two key security components: static binary validation and a limited system call API. Static validation occurs before “untrusted” user code is executed, to ensure that there are no accidental or malicious instructions that could break out of the sandbox and compromise the host system. Binary validation in this instance is largely based on the NaCl validator. (For more information about NaCl and its validation, you can read the following whitepaper: http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/34913.pdf.) To further lock down the execution environment, ZeroVM only supports six system calls via a “trap” interface: pread, pwrite, jail, unjail, fork, and exit. By comparison, containers (LXC) expose the entire Linux system call API, which presents a larger attack surface and more potential for exploitation.

ZeroVM is lightweight

ZeroVM is very lightweight: it can start in about five milliseconds. After the initial validation, program code is executed directly on the hardware without interpretation overhead or hardware virtualization.

It's easy to embed in existing systems

The security and lightweight nature of ZeroVM make it ideal to embed in existing systems. For example, it can be used for arbitrary data-local computation in any kind of data store, akin to stored procedures. In this scenario, untrusted code provided by any user with access to the system can be executed safely. Because inputs and outputs must be declared explicitly upfront, the only remaining concerns are data access rules and quotas for storage and computation. Contrasted with a traditional model, where storage and compute nodes are separate, data-local computing can be a more efficient model when the cost of transferring data over the network to and from compute nodes outweighs the actual computation time itself. The tool has already been integrated with OpenStack Swift using ZeroCloud (middleware for Swift). This turns Swift into a “smart” data store, which can be used to scale parallel computations (such as multi-stage MapReduce jobs) across large collections of objects.

Language support

C and C++ applications can run on ZeroVM, provided that they are cross-compiled to NaCl using the provided toolchain. At present there is also support for Python 2.7 and Lua.

Licensing

All projects under the ZeroVM umbrella are licensed under Apache 2.0, which makes ZeroVM suitable for both commercial and non-commercial applications (the same as OpenStack).
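Returning to channels for a moment, here is the promised sketch of the multi-stage pipeline idea in plain Python (again, not the ZeroVM API), with each stage written as a small stateless filter whose only input and output are the streams handed to it:

```python
# A sketch of the channel/pipeline idea in plain Python (not the ZeroVM API):
# each stage is a small stateless filter whose only input and output are the
# streams handed to it, chained the way channels chain ZeroVM instances.
def read_lines(blob: bytes):           # source stage: one declared input
    for line in blob.splitlines():
        yield line

def grep(lines, needle: bytes):        # filter stage: keep matching lines
    return (line for line in lines if needle in line)

def count(lines) -> int:               # reduce stage: aggregate the stream
    return sum(1 for _ in lines)

log = b"ERROR disk full\nINFO ok\nERROR timeout\n"
# Wire the stages together, stdin-to-stdout style.
print(count(grep(read_lines(log), b"ERROR")))  # -> 2
```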