
Tech Guides - Cloud Computing

27 Articles

Polycloud: a better alternative to cloud agnosticism

Richard Gall
16 May 2018
3 min read
What is polycloud? Polycloud is an emerging cloud strategy that is starting to take hold across a range of organizations. The concept is actually pretty simple: instead of using a single cloud vendor, you use multiple vendors. By doing this, you can develop a customized cloud solution suited to your needs. For example, you might use AWS for the bulk of your services and infrastructure, but use Google's cloud for its machine learning capabilities.

Polycloud has emerged because of the intensely competitive nature of the cloud space today. The three major vendors - AWS, Azure, and Google Cloud - don't particularly differentiate their products; the core features are much the same across the market. Of course, there are subtle differences between the solutions, as the example above demonstrates. Taking a polycloud approach means you can leverage those differences rather than compromising with your vendor of choice.

What's the difference between a polycloud approach and a cloud agnostic approach? You might be thinking that polycloud sounds like cloud agnosticism. While there are clearly many similarities, the differences between the two are important. Cloud agnosticism aims for a certain degree of portability across different cloud solutions. That can be extremely expensive, and it adds a lot of complexity, especially in how you orchestrate deployments across different cloud providers. True, there are times when cloud agnosticism might work for you; if you're not using the vendor-specific services being provided to you, then yes, it might be the way to go. In many (possibly most) cases, though, cloud agnosticism makes life harder. Polycloud makes it a hell of a lot easier. In fact, it ultimately does what many organizations have been trying to do with a cloud agnostic strategy: it takes the parts you want from each solution and builds them around what you need.
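The idea above - routing each workload to the vendor that does it best - can be sketched as a simple service-to-provider map. The service categories and provider choices below are hypothetical illustrations, not recommendations:

```python
# Hypothetical polycloud routing table. Each workload category is assigned
# to the vendor chosen for that job, rather than forcing everything onto
# one cloud. Names here are illustrative only.
SERVICE_MAP = {
    "object-storage": "aws",      # bulk services and infrastructure stay on AWS
    "compute": "aws",
    "machine-learning": "gcp",    # Google's cloud picked for its ML capabilities
    "analytics": "gcp",
}

def provider_for(service: str) -> str:
    """Return the cloud provider assigned to a given service category."""
    try:
        return SERVICE_MAP[service]
    except KeyError:
        raise ValueError(f"no provider assigned for service: {service}")
```

In practice the map would live in configuration rather than code, but the principle is the same: the choice of vendor is made per workload, driven by your needs rather than by a single provider's catalogue.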
Perhaps one of the key benefits of a polycloud approach is that it gives more power back to users. Your strategic thinking is no longer limited to what AWS, Azure, or Google offers - you can instead start with your needs and build the solution around them.

How quickly is polycloud being adopted? Polycloud first featured in ThoughtWorks' Radar in November 2017. At that point it was in the 'assess' stage of the cycle, meaning it was simply worth exploring and investigating in more detail. In the May 2018 Radar, however, polycloud had moved into the 'trial' phase, meaning it is seen as an approach worth adopting.

It will be worth watching the polycloud trend closely over the next few months to see how it evolves. There's a good chance we'll see it come to replace cloud agnosticism. Equally, it's likely to affect the way AWS, Azure, and Google respond. In many ways the trend is a reaction to the way the market has evolved; it may force the big players to evolve what they offer to customers and clients.

Read next:
Serverless computing wars: AWS Lambdas vs Azure Functions
How to run Lambda functions on AWS Greengrass


How machine learning as a service is transforming cloud

Vijin Boricha
18 Apr 2018
4 min read
Machine learning as a service (MLaaS) is an innovation growing out of two of the most important tech trends - cloud and machine learning. It's significant because it enhances both. It makes cloud an even more compelling proposition for businesses: cloud typically covers three major operations - computing, networking, and storage - and when you bring machine learning into the picture, the data that cloud stores and processes can be used in radically different ways, solving a range of business problems.

What is machine learning as a service? Cloud platforms have always competed to be the first or the best to provide new services, including platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS) solutions. In essence, cloud providers like AWS and Azure provide sets of software to do different things so their customers don't have to. Machine learning as a service is simply another instance of this. It can include a wide range of features, from data visualization to predictive analytics and natural language processing, and it makes running machine learning models easy, effectively automating some of the work that might typically have been done manually by a data engineering team.

The biggest cloud providers offering machine learning as a service are:

Google Cloud Platform
Amazon Web Services
Microsoft Azure
IBM Cloud

Every platform provides a different suite of services and features; which one you choose will ultimately depend on what's most important to you. Let's take a look at the key differences between these providers' machine learning as a service offerings.

Comparing the leading MLaaS products

Google Cloud AI
Google Cloud Platform has always provided its own services to help businesses grow. It provides modern machine learning services, with pre-trained models and a service to generate your own tailored models.
The majority of Google applications, like Photos (image search), the Google app (voice search), and Inbox (Smart Reply), have been built using the same services that Google provides to its users.

Pros:
Cheaper in comparison to other cloud providers
Provides IaaS and PaaS solutions

Cons:
The Google Prediction API is being discontinued (May 1st, 2018)
Lacks a visual interface
You'll need to know TensorFlow

Amazon Machine Learning
Amazon Machine Learning provides services for building ML models and generating predictions, helping users develop robust, scalable, and cost-effective smart applications. With Amazon Machine Learning you can use powerful machine learning technology without any prior experience of machine learning algorithms and techniques.

Pros:
Provides versatile automated solutions
It's accessible - users don't need to be machine learning experts

Cons:
The more you use it, the more expensive it gets

Azure Machine Learning Studio
Microsoft Azure provides Machine Learning Studio - a simple browser-based, drag-and-drop environment that functions without any kind of coding. You get fully-managed cloud services that enable you to easily build, deploy, and share predictive analytics solutions, along with a platform (the Gallery) for sharing and contributing to the community.

Pros:
The most versatile toolset for MLaaS
You can contribute to and reuse machine learning solutions from the community

Cons:
Comparatively expensive
A lot of manual work is required

Watson Machine Learning
Similar to the above platforms, IBM Watson Machine Learning is a service that helps users create, train, and deploy self-learning models to integrate predictive capabilities into their applications. The platform provides automated and collaborative workflows for growing intelligent business applications.
Pros:
Automated workflows
Data science skills are not necessary

Cons:
Comparatively limited APIs and services
Lacks streaming analytics

Selecting the machine learning as a service solution that's right for you
There are so many machine learning as a service solutions out there that it's easy to get confused. The crucial step before you purchase anything is to plan your business requirements. Think carefully not only about what you want to achieve, but about what you already do too. You want your MLaaS solution to integrate easily into the way you currently work, and you don't want it to replicate any work you're currently doing that you're happy with. It gets repeated so much, but it remains as true as ever: make sure your software decisions are fully aligned with your business needs. It's easy to be seduced by the promise of innovative new tools, but without the right alignment they're not going to help you at all.
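One way to make that selection step concrete is to score each provider against your own weighted requirements. The weights and 1-5 ratings below are made-up illustrations, not real benchmarks - the point is the method, not the numbers:

```python
# Hypothetical weighted scoring of MLaaS providers. Requirements and ratings
# are illustrative only; plug in your own business requirements.
WEIGHTS = {"cost": 0.4, "ease_of_use": 0.3, "toolset": 0.3}

# Illustrative 1-5 ratings per provider per requirement.
RATINGS = {
    "google_cloud_ai": {"cost": 5, "ease_of_use": 2, "toolset": 4},
    "amazon_ml":       {"cost": 3, "ease_of_use": 5, "toolset": 3},
    "azure_ml_studio": {"cost": 2, "ease_of_use": 4, "toolset": 5},
    "watson_ml":       {"cost": 3, "ease_of_use": 4, "toolset": 2},
}

def rank_providers(weights, ratings):
    """Return (provider, weighted score) pairs sorted best-first."""
    scored = {
        name: sum(weights[req] * score for req, score in reqs.items())
        for name, reqs in ratings.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```

With cost weighted heaviest, as in this sketch, a cheaper offering rises to the top; shift the weights towards ease of use or toolset breadth and the ranking changes, which is exactly why planning requirements before purchasing matters.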


The biggest cloud adoption challenges

Rick Blaisdell
10 Sep 2017
3 min read
The cloud technology industry is growing rapidly as companies come to understand the profitability and efficiency benefits that cloud computing can provide. According to the IDG Enterprise Cloud Computing Survey, 70 percent of U.S. companies have at least one application in the cloud, using public, private, or a combination of cloud models. In addition, the Building Trust in a Cloudy Sky survey found that almost 93 percent of organizations across the world use cloud services.

Even though cloud adoption is increasing, it's important that companies develop a strategy before moving their data and using cloud technology to increase efficiency. This strategy is especially important because transitioning to the cloud is often a challenging process. If you're thinking of making this transition, here is a list of cloud adoption challenges you should be aware of.

Technology
It's important to take into consideration the complex issues that can arise with new technology. For example, some applications are not built for the cloud, or have compliance requirements that cannot be met in a pure cloud environment. In such cases, a solution could be a hybrid environment with configured security requirements.

People
Moving to the cloud can be met with resistance, especially from people who have spent most of their time managing physical infrastructure. The largest organizations will have a long transition to full cloud adoption, while small, tech-savvy companies will have an easier time making the change. Most modern IT departments will choose an agile approach to cloud adoption, although some employers might not be that experienced in these types of operational change. The implementation takes time, but you can transform existing operating models to make the cloud more approachable for the company.

Psychological barriers
Psychologically, there will be many questions. Will the cloud be more secure? Can I maintain my SLAs?
Will I find the right technical support services? In 2017, cloud providers can meet all of those expectations and, at the same time, reduce overall expenses.

Costs
Many organizations that decide to move to the cloud do not estimate costs properly. Even though the pricing seems simple, the more moving parts there are, the greater the likelihood of incorrect cost estimates. When starting the migration to the cloud, look for tools that will help you estimate cloud costs and ROI, whilst taking into consideration all possible variables.

Security
One of the CIO's main concerns when moving to the cloud is security and privacy. The management team needs to know whether the cloud provider they plan to work with has a bulletproof environment. This is a big challenge because a data breach could not only put the company's reputation at risk, but could also result in a huge financial loss.

The first step in adopting cloud services is to identify all of the challenges that will come with the process. It is essential to work with the cloud provider to facilitate a successful cloud implementation. Are there any challenges that you consider crucial to a cloud transition? Let us know what you think in the comments section.

About the Author
Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies which reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.

Will Oracle become a key cloud player, and what will it mean to development & architecture community?

Phil Wilkins
13 Jun 2017
10 min read
This sort of question can provoke some emotive reactions; despite the stereotype, many of us technologists can get pretty passionate about our views. So let me put my cards on the table. My first book as an author is about Oracle middleware (Implementing Oracle Integration Cloud). I am an Oracle Ace Associate (soon to be a full Ace), which is comparable to a Java Rockstar, Microsoft MVP, or SAP Mentor. I work for Capgemini as a Senior Consultant; as a large SI we work with many vendors, so I need to have a feel for all the options, even though I specialise in Oracle now. Before I got involved with Oracle I worked primarily with open source technologies, particularly JBoss and Fuse (before and after both were absorbed into RedHat), and I have technically reviewed a number of open source books for Packt. So I should be able to provide a balanced argument.

So onto the … A lot has been said about Oracle's CEO Larry Ellison and his position on cloud technologies, most notably his rubbishing of it in 2008. This is ironic, since those of us who remember the late 90s will recall Oracle heavily committed to a concept called the Network Computer, which could have led to a more cloud-like ecosystem had the conditions been right.

"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. ... The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"[1]

Since then we've seen a slow change. The first cloud offerings we saw came in the form of Mobile Cloud Service, which provided a Mobile Backend as a Service (MBaaS).
At this time, Oracle's extensive programme to rationalize its portfolio - bringing the best ideas and designs from PeopleSoft, E-Business Suite, and Siebel together into a single cohesive product portfolio - started to show progress: Fusion applications. Built on the WebLogic core and exploiting other investments, Fusion applications gave the company a product with the potential to become cloud enabled. If that initiative hadn't been started when it did, Oracle's position might look very different today. From a solid, standardised, container-based product portfolio, the transition to cloud became a great deal easier, facilitated by the arrival of Oracle Database 12c, which provided the means to easily make at least the data storage layer multi-tenant. This combination gave Oracle the ability to sell ERP modules as SaaS, and meant Oracle could start to think about competing with the SaaS darlings of Salesforce, NetSuite, and Workday.

However, ERPs don't live in isolation. Any organisation has to deal with its oddities, special needs, and departmental solutions, as well as those systems that are unique and differentiate companies from their competition. This has driven the need to provide PaaS and IaaS. Not only that: Oracle themselves admitted that to make SaaS as cost effective as possible, they needed to revise the infrastructure and software platform to maximise application density - a lesson that Amazon, with AWS, understood from the outset and realized well. Oracle has also had the benefit of being a later starter: it looked at what has and hasn't worked, and used its deep pockets to get the best skills to build the ideal answers, bypassing many of the mistakes and issues the pioneers had to go through.
This brought us to the state of a couple of years ago, where Oracle's core products had a cloud existence and Oracle were making headway winning new mid-market customers - after all, Oracle ERP is seen as something of a Rolls-Royce of ERPs: globally capable, well tested, and now cost accessible to more of the mid-market. So as an ERP vendor Oracle will continue to be a player; if a challenger emerges, Oracle's pockets are deep enough to buy the competition, which is what happened with NetSuite. This may be very interesting to enterprise architects who need to take off-the-shelf building blocks and provide a solid corporate foundation, but for those of us who prefer to build and do something different, it's not so exciting.

In the last few years we have seen a lot of talk about digital disruptors and the need for serious agility (as in the ability to change and react, rather than the development ethos). To have this capability you need to be able to build and radically change solutions quickly, and yet still work with those core backbone accounting tasks. To use a Gartner expression, we need to be bimodal[2] to innovate, because application packages change comparatively slowly (they need to be slow and steady if you want to show that your accounting isn't going to look like Enron[3] or Lehman Brothers[4]). With this growing need to drive innovation and change ever faster, we have seen some significant changes in the way things tend to be done. You could almost say that, in the process of trying to disrupt existing businesses through IT, we have achieved the disruption of software development itself.
With the facilitation of the cloud, particularly IaaS, and the low cost to start up and try new solutions - growing them if they succeed, or mothballing them with minimal capital loss or delay if they don't - we have seen:

The pace of service adoption accelerate exponentially, meaning the rate of scale-up and dynamic demand, particularly for end-user-facing services, has needed new techniques for scaling.

Standards move away from being formulated by committees of companies wanting to influence or dominate a market segment - which, while it resulted in some great ideas (UDDI as a concept was fabulous), was often very unwieldy (ebXML, SOAP, and UDDI, for example) - towards simpler standards that evolved through simplicity and quickly recognized value (JSON, REST) to become de facto standards.

New development paradigms that enable large solutions to be delivered whilst still keeping delivery on short cycles and supporting organic change (Agile, microservices).

Continuous Integration and DevOps breaking down organisational structures and driving accountability - you build it, you make it run.

The open source business model become the predominant route into the industry for a new software technology without needing deep pockets for marketing, along with acceptance that open source software can be as well supported as a closed source product.

For a long time, despite Oracle being the 'guardian' of Java and, a little more recently, MySQL, they haven't really managed to establish themselves as a 'cool' vendor. If you wanted a cool vendor, you'd historically probably look at RedHat, one of the first businesses to really get open source and community thinking. The perception, at least, has been that Oracle acquired these technologies either as a byproduct of a bigger game or as an 'on ramp' to their bigger, more profitable products.
Oracle have started to recognise that to be seriously successful in the cloud, like AWS, you need to be pretty pervasive, connecting not only with the top of the decision tree but also with those at the code face. To do that you need a bit of the 'cool' factor. That means doing things beyond just the database and your core middleware - areas that are more and more frequently subject to potential disruption, such as Hadoop and big data, NoSQL, and things like Kafka in the middleware space. This also fits with the narrative that to do well with SaaS you need at least a very good IaaS, and the way Oracle has approached SaaS you definitely need a good PaaS - so they might as well make these commercial offerings too.

This has resulted in Oracle moving from half a dozen cloud offerings to something in the order of nearly forty offerings classified as PaaS, plus a range of IaaS offerings that will appeal to developers and architects: direct support for Docker, Container Cloud (which provides a simplified Docker model), and on to Kafka, Node.js, MySQL, NoSQL, and others. The web tier is pretty interesting, with JET - an enterprise-hardened, certified version of Angular, React, and Express with extra tooling - which has been made available as open source. So the technology options are becoming a lot more interesting.

Oracle are also starting to target new startups, looking to get new organisations onto the Oracle platform from day one, in the same way it is easy for a startup to leverage AWS. Oracle have made some commitment to the Java developer community through JavaOne, which runs alongside its big brother conference, OpenWorld. They are now seriously trying to reach out to the hardcore development community (not just Java, as the new Oracle cloud offerings are definitely polyglot) through Oracle Code. I was fortunate enough to present at the London occurrence of the event (see my blog here).
What Oracle has not yet quite reached is the point of being clearly easy to start working with compared to AWS and Azure. Yes, Oracle provide new sign-ups with 300 dollars of credit, but when you have a reputation (deserved or otherwise) for being expensive, that isn't necessarily going to get people onboard in droves - compare it with AWS's free micro-instance for a year.

Conclusion
In all of this, I am of the view that Oracle are making headway; they are recognising what needs to be done to be a player. I have said in the past, and I believe it is still true, that Oracle is like an oil tanker or aircraft carrier: it takes time to decide to turn, and turning isn't quick, but once a course is set, a real head of steam and momentum will be built, and I wouldn't want to be in the company's path. So let's look at some hard facts. Oracle's revenues remain pretty steady, and, surprisingly, Oracle showed up in the last week on LinkedIn's top employers list[5]. Oracle isn't going to just disappear; its database business alone will keep it alive for a very long time to come. Its SaaS business appears to be on a good trajectory, although more work on API enablement needs to take place. As an IaaS and PaaS technology provider, Oracle appear to be getting a handle on things. Oracle is going to be attractive to end-user executives, as it is one of the very few vendors that covers all tiers of cloud from IaaS to PaaS, providing the benefits of traditional hosting when needed, as well as fully managed solutions and the benefits they offer. Oracle does still need to overcome some perception challenges; in many respects Oracle are seen the way Microsoft were in the 90s and 2000s - something of a necessary evil, and potentially expensive.
[1] http://www.businessinsider.com/best-larry-ellison-quotes-2013-4?op=1&IR=T/#oud-computing-maybe-im-an-idiot-but-i-have-no-idea-what-anyone-is-talking-about-1
[2] http://www.gartner.com/it-glossary/bimodal/
[3] http://www.investopedia.com/updates/enron-scandal-summary/
[4] https://en.wikipedia.org/wiki/Bankruptcy_of_Lehman_Brothers
[5] https://www.linkedin.com/pulse/linkedin-top-companies-2017-where-us-wants-work-now-daniel-roth


Dispelling the myths of hybrid cloud

Ben Neil
24 May 2017
6 min read
The words "vendor lock-in" worry me more than I'd like to admit. Whether it's having too many virtual machines in EC2, an expensive lambda in Google Functions, or any random offering that I have been using to augment my on-premise Raspberry Pi cluster, it's really something I've feared. Over time, I've realized it has impacted the way I speak about off-premises services. Why? Because I got burned a few times. A few months back I got a classic 3 AM call asking to check in on a service that was failing to report back to an on-premise Sensu server, and my superstitious mind immediately went to how that third-party service had let my coworkers down. After a quick check, nothing was badly broken; an unruly agent had merely hung on an on-premise virtual machine. I've had other issues since, and wanted to help dispel some of the myths around adopting hybrid cloud solutions. So, to those ends, what are some of these myths, and are they actually true?

It's harder and more expensive to use public cloud offerings
Given some of the places I've worked, one of my memories is using VMware to spin up new VMs - a process that could take up to ten minutes to get baseline provisioning. This was eventually corrected by using Packer to create an almost perfect VM; getting that into VMware images was time consuming, but after boot the only thing left was informing the Salt master that a new node had come online. In this example, I was using those VMs to start up a Scala http4s application that would begin crunching through a mounted drive containing chunks of data. While the on-site solution was fine, there was still a lot of work to be done to orchestrate it. It worked, but I was bothered by the resources being taken up by my task. No one likes to talk with their coworkers about their 75-machine VM cluster that bursts into existence in the middle of the workday and sets off resource alarms.
Thus, I began reshaping the application using containers and Hyper.sh, which has led to some incredible successes (and alarms that aren't as stressful): taking the data (slightly modified) that needed to be crunched and adding it to S3, pushing my single image to Hyper.sh, creating 100 containers, crunching the data, removing those containers, and finally sending the finalized results to an on-premise service. Not only was time saved, but the workflow brought redundancy in data, better auditing, and less strain on the on-premise solution. So, while you can usually do all the work you need on-site, sometimes leveraging the options available from different vendors can create a nice web of redundancy and auditing. Buzzword bingo aside, the solution ended up more cost effective than using spot instances in EC2.

Managing public and private servers is too taxing
I'll keep this response brief: monitoring is hard, whether the service, VM, database, or container is on-site or off. The same can be said for alerting, resource allocation, and cost analysis. That said, these are all aspects of modern infrastructure that are just par for the course, and letting superstition get the better of you when experimenting with a hybrid solution would be a mistake. The way I like to think of it is that as long as you have a way into your on-site servers that is locked down to those external nodes, you're all set. If you need to set up more monitoring, go ahead; the slight modification to Nagios or Zabbix rules won't take much coding, and the benefit of notifying on-call will always be at hand. An added benefit, depending on the service, is that an off-site service may have a level of resiliency that wasn't accounted for on-site, being more highly available through a provider. For example, sometimes I use Monit to restart a service, or depend on systemd/upstart to restart a temperamental service.
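As a minimal sketch of the systemd approach just mentioned (the service name and binary path are hypothetical), a unit file with a restart policy looks like this:

```ini
# /etc/systemd/system/cruncher.service -- hypothetical service name
[Unit]
Description=Temperamental data-crunching service

[Service]
ExecStart=/usr/local/bin/cruncher
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With `Restart=on-failure`, systemd restarts the process automatically after a crash, which covers the same ground as a Monit restart rule without a separate supervisor.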
Using AWS, I can set up alarms that trigger different events to run predefined runbooks, which can handle a failure and save me from that aforementioned 3 AM wakeup. Note that both of these edge cases have their own solutions, which aren't "taxing" - just par for the course.

Too many tools, not enough adoption
You're not wrong, but if your developers and operators are not embracing at least a rudimentary adoption of these new technologies, you may want to look at your culture. People should want to try to reduce cost through these new choices. Even if that change is cautious - a second look at that S3 bucket or Pivotal Cloud Foundry app - nothing should be immediately discounted, because taking the time to apply a solution to an off-site resource can often result in an immediate saving in manpower. Think about it for a moment: given whatever internal infrastructure you're dealing with, there are only so many people around to support that application. Sometimes it's nice to give them a break - to take that learning curve onto yourself and empower your team, and wiki of choice, to create a different solution to what is currently available in your local infrastructure. Whether it's a Friday code jam or just taking a pain point in a difficult deployment, crafting better ways of dealing with those common difficulties through a hybrid cloud solution can create more options. Which, after all, is what a hybrid cloud is attempting to provide: options that can be used to reduce costs, increase general knowledge, and bolster an environment that invites more people to innovate.

About the author
Ben Neil is a polyglot engineer who has had the privilege of filling a lot of critical roles, whether it's dealing with front/backend application development, system administration, integrating DevOps methodology, or writing. He has spent 10+ years creating solutions, services, and full lifecycle automation for medium to large companies.
He is currently focused on Scala, container, and unikernel technology, following a bleeding-edge open source community that brings the benefits to companies striving to be at the forefront of technology. He can usually be found either playing Dwarf Fortress or contributing on GitHub.


Is OpenStack a ‘Jack of All Trades, Master of None’ Cloud Solution?

Richard Gall
06 Oct 2015
4 min read
OpenStack looks like all things to all people. Casually perusing its website, you can see that it emphasises the cloud solution's high-level features - 'compute', 'storage', and 'networking' - each aimed at different types of users, from development teams to sysadmins. This multi-faceted nature makes OpenStack difficult to define - is it, we might ask, Infrastructure or Software as a Service? And while OpenStack's scope might look impressive on paper, ticking the box marked 'innovative', when it comes to actually choosing a cloud platform and developing a relevant strategy, it begins to look like more of a risk. Surely we'd do well to remember the axiom 'jack of all trades, master of none'?

Maybe if you're living in 2005 - even 2010. But in 2015, if you're frightened of the innovation that OpenStack offers (and, for those of you really lagging behind, cloud in general), you're missing the bigger picture. You're ignoring the fact that true opportunities for innovation and growth don't simply lie in faster and more powerful tools, but in more efficient and integrated ways of working. Yes, this might require us to rethink how we understand job roles and how they fit together - but why shouldn't technology challenge us to be better, rather than just make us incrementally lazier? OpenStack's multifaceted offering isn't simply an experiment answering the somewhat masturbatory question 'how much can we fit into a single cloud solution?'; it is in fact informed (unconsciously, perhaps) by an agile philosophy. What's interesting about OpenStack - and why it might be said to lead the way when it comes to cloud - is what it makes possible. Small businesses and startups have already realized this (likely through simple necessity as much as strategic decision making, considering how much cloud solutions can cost), and it's likely that larger enterprises will soon come to the same conclusion. And why should we be surprised?
Is the tiny startup really that different from the Fortune 500 corporation? Yes, larger organizations have legacy issues – with both technology and culture – but these are gradually being grown out as we enter a new era, one in which we expect something more dynamic from the tools and platforms we use. Which is where OpenStack fits in – no longer a curiosity or a ‘science project’, it is now the standard to which other cloud platforms are held. That it offers such impressive functionality for free (at a basic level) means that those enterprise solutions that once had a hold over the marketplace will now have to play catch up. It’s likely they’ll be using the innovations of OpenStack as a starting point for their own developments – which they hope will keep them ‘cutting-edge’ in the eyes of customers. If OpenStack is going to be the way forward, and the wider technical and business communities (whoever they are exactly) are to embrace it with open arms, there will need to be a cultural change in how we use it. OpenStack might well be the jack of all trades and master of all when it comes to cloud, but it nevertheless places an emphasis on users to use it in the ‘correct’ way. That’s not to say that there is one correct way – it’s more about using it strategically and thinking about what you want from OpenStack. CloudPro articulates this well, arguing that OpenStack needs a ‘benevolent dictator’. ‘Are too many cooks spoiling the open-source broth?’ it asks, getting to the central problem with all open-source technologies – the existentially troubling fact that the possibilities are seemingly endless. This doesn’t mean we need to step away from the collaborative potential of OpenStack; it emphasises that effective collaboration requires an effective and clear strategy. Orchestration is the aim of the game – whether you’re talking software or people. 
At this year’s OpenStack conference in Vancouver, COO Mark Collier described OpenStack as ‘an agnostic integration engine… one that puts users in the best position for success’. This is a great way to define OpenStack, and positions it as more than just a typical cloud platform. It is its agnosticism that is particularly crucial – it doesn’t take a position on what you do; rather, it makes it possible for you to do what you think you need to do. Maybe, then, OpenStack is a jack of all trades that lets you become the master. For more OpenStack tutorials and extra content, visit our dedicated page. Find it here.
Robi Sen
15 Jul 2015
8 min read

Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)

With the growth of cloud services such as Google Cloud Platform, Microsoft Azure, Amazon Web Services, and many others, developers and organizations have unprecedented access to low-cost, high-performance infrastructure that can scale as needed. Everyone from individuals to major companies has embraced the cloud as the platform of choice to host their IT services and applications; especially small companies and start-ups. Yet for many reasons, those who have embraced the cloud have often been slow to recognize the unique security considerations that face cloud users. Unlike when you host your own servers, the cloud operates on a shared-risk model, where the cloud provider focuses on providing physical security, failover, and high-level network perimeter protection. The cloud user is understood to be securing their operating systems, data, applications, and the like. This means that while your cloud provider provides incredible services for your business, you are responsible for much of the security, including implementing access controls, intrusion prevention, intrusion detection, encryption, and the like. Often, because cloud services are made so accessible and easy to set up, users don’t bother to secure them, or often don’t even know they need to. If you’re new to the cloud and new to security, this post is for you. While we will focus on using Amazon Web Services, the basic concepts apply to most cloud services regardless of vendor. Access control Since you’re using virtual resources that are already set up in the AWS cloud, one of the most important things you need to do right away is secure access to your account and images. First, you want to lock down your AWS account. This is the login and password that you are assigned when you set up your AWS account, and anyone who has access to it can purchase new services, change your services, and generally cause complete havoc for you. 
Indeed, AWS accounts sell on hackers’ sites and Darknet sites for good money, usually to buyers who want to set up bitcoin miners on your hacked or stolen account. Don’t give yours out or make it easily accessible. For example, many developers embed logins, passwords, and AWS keys into their code, which is a very bad practice, and then have their accounts compromised by criminals. The first thing you need to do after getting your Amazon login and password is to store them using a tool such as mSecure or LastPass that allows you to save them in an encrypted file or database. They should then never go into a file, document, or public place. It is also strongly advised to use Multi-Factor Authentication (MFA). Amazon supports MFA via physical devices or straightforward smartphone applications. You can read more about Amazon’s MFA here and here. Once your AWS account information is secure, you should then use AWS’s Identity and Access Management (IAM) system to give each user under your master AWS account access with specific privileges according to best practices. Even if you are the only person who uses your AWS account, you should consider using IAM to create a couple of users that have access based on their role, such as a content developer who only has the ability to move files in and out of specific directories, or a developer who can start and stop instances, and the like. Then always use the role with the fewest privileges needed to get your work done. While this might seem cumbersome, you will quickly get used to it, you will be much safer, and finally, if your project grows, you will already have the groundwork to ramp up safely. Creating an IAM group and user In this section, we will create an administrator group and add ourselves as the user. If you do not currently have an AWS account, you can get a free account from AWS here. 
Be advised that you will need a valid credit card and a phone number to validate your account, but Amazon will give you the account to play with free for a year (see terms here). For this example, what you need to do is: Create an administrator group that we will give group permissions to for our AWS account’s resources Make a user for ourselves and then add the user to the administrator group Finally create a password for the user so we can access the AWS Management Console To do this, first sign into the IAM console. Now click on the Groups link and then select Create New Group: Now name the new group Administrator and select Next Step: Next, we need to assign a group policy. You can build your own (see more information here), but this should generally be avoided until you really understand AWS security policies and AWS in general. Amazon has a number of predefined policy templates that work well until your applications and architecture grow larger and more complex. So for now, simply select the Administrator Access policy as shown here: You should now see a screen that shows your new policy. You can then click Next and then Create Group: You should now see the new Administrator group policy under Group Name: In reality, you would probably want to create all your different group accounts and then associate your users, but for now we are just going to create the Administrator group, then create a single user and add it to that group. Creating a new IAM user account Now that you have created an Administrator group, let's add a user to it. To do this, go to the navigation menu, select Users, and then click Create New Users. You should then see a new screen. You have the option to create access keys for this user. 
Depending on the user, you may or may not need to do this, but for now go ahead and select that option box and then select Create: IAM will now create the user and give you the option to view the new key or download and save it. Go ahead and download the credentials. Usually it’s good practice to then save those credentials into your password manager, such as mSecure or LastPass, and not share them with anyone except for the specific user. Once you have downloaded the user’s credentials, go ahead and select Close, which will return you to the Users screen. Now click on the user you created. You should now see something like the following (the username has been removed from the figure): Now select Add User to Groups. You should now see the group listing, which only shows one group if you’re following along. Now select the Administrator group and then select Add to Groups. You should be taken back to the Users content page and should now see that your user is assigned to the Administrator group. Now, staying in the same screen, scroll down until you see the Security Credentials part of the page. Now click Manage Password. You will now be asked to either select an auto-generated password or click Assign a custom password. Go ahead and create your own password and select Apply. You should be taken back to your user content screen and, under the Security Credentials section, you should now see that the password field has been updated from No to Yes. You should also strongly consider using your MFA tool, in my case the AWS virtual MFA Android application, to make the account even more secure. Summary In this article, we talked about how the first step in securing your cloud services is controlling access to them. We looked at how AWS allows this via IAM, letting you create groups with security policies tied to them, and then how to add users to a group, enabling you to secure your cloud resources based on best practices. 
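As an aside, the “least privilege” idea behind those console steps boils down to a policy document. The following is a minimal, hypothetical sketch of such a document – the bucket name and prefix are invented for illustration, and the shape mirrors the kind of scoped policy you might attach to the “content developer” role mentioned earlier – built with Python’s standard json module rather than the AWS console:

```python
import json

# Hypothetical least-privilege policy: a "content developer" who may only
# move files in and out of one S3 prefix. Adjust bucket/prefix to your setup.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::example-content-bucket/uploads/*",
        }
    ],
}

# IAM tools generally expect the policy as a JSON string.
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Compare this with the broad Administrator Access template used above: the narrower the Action and Resource fields, the less damage a compromised credential can do.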
Now that you have done that, you can go ahead and add more groups and/or users to your AWS account as you need to. However, before you do that, make sure you thoroughly read the AWS IAM documentation; links are supplied at the end of the post. Resources for AWS IAM IAM User Guide Information on IAM Permissions and Policies IAM Best Practices About the author Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, Cisco, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.
Robi Sen
31 Mar 2015
9 min read

Encryption in the Cloud: An Overview

In this post we will look at how to secure your AWS solution’s data using encryption (if you need a primer on encryption, here is a good one). We will also look at some of the various services from AWS and other third-party vendors that will help you not only encrypt your data, but take care of more problematic issues such as managing keys. Why Encryption Whether it’s Intellectual Property (IP) or simply user names and passwords, your data is important to you and your organization. So, keeping it safe is important. Although hardening your network, operating systems, access management, and other measures can greatly reduce the chance of being compromised, the cold hard reality is that, at some point in your company’s existence, that data will be compromised. So, assuming that you will be compromised is one major reason we need to encrypt data. Another major reason is the likelihood of accidental or purposeful inappropriate data access and leakage by employees, which, depending on what studies you look at, is perhaps the largest cause of data exposure. Regardless of the reason or vector, you never want to expose important data unintentionally, and for this reason encrypting your sensitive information is fundamental to basic security. Three states of data Generally we classify data as having three distinct states: Data at rest, such as when your data is in files on a drive or data in a database Data in motion, such as web requests going over the Internet via port 80 Data in use, which is generally data in RAM or data being used by the CPU In general, the most at-risk data is data at rest and data in motion, both of which are reasonably straightforward to secure in the cloud, although their implementation needs to be carefully managed to maintain strong security. What to encrypt and what not to Most security people would love to encrypt anything and everything all the time, but encryption creates numerous real or potential problems. 
The first of these is that encryption is often computationally expensive and can consume CPU resources, especially when you’re constantly encrypting and decrypting data. Indeed, this has been one of the main reasons why vendors like Google did not encrypt all search traffic until recently. Another reason people often do not widely apply encryption is that it creates potential system administration and support issues since, depending on the encryption approach you take, you can create complex issues for managing your keys. Indeed, even the simplest encryption systems, such as encrypting a whole drive with a single key, require strong key management in order to be effective. This can create added expense and resource costs, since organizations have to implement human and automated systems to manage and control keys. While there are many more reasons people do not widely implement encryption, the reality is that you usually have to make determinations on what to encrypt. Most organizations decide what to encrypt in the following manner: 1 – What data must be private? This might be Personally Identifiable Information, credit card numbers, or the like that is required to be private for compliance reasons such as PCI or FISMA. 2 – What level of sensitivity is this data? Some data, such as PII, often has federal data security requirements that are dictated by what industry you are in. For example, in health care, HIPAA requirements dictate the minimum level of encryption you must use (see here for an example). Other data might require further encryption levels and controls. 3 – What is the data’s value to my business? This is a tricky one. Many companies decide they need little to no encryption for data they assume is not important, such as their users’ email addresses. Then they get compromised, their users are spammed and have their identities stolen, and the company potentially suffers real legal damages or a destroyed reputation. 
Depending on your business and your business model, even if you are not required to encrypt your data, you may want to in order to protect your company, its reputation, or the brand. 4 – What is the performance cost of using a specific encryption approach and how will it affect my business? These high-level steps will give you a sense of what you should encrypt or need to encrypt and how to encrypt it. Item 4 is particularly important, in that while it might be nice to encrypt all your data with 4,096-bit encryption keys, this would most likely create too high a computational load and bottleneck on any high-transaction application, such as an e-commerce store, to be practical to implement. This takes us to our next topic, which is choosing encryption approaches. Encryption choices in the cloud for Data at Rest Generally there are two major choices to make when encrypting data, especially data at rest. These are: 1 – Encrypt only key sensitive data, such as logins, passwords, social security numbers, and similar data. 2 – Encrypt everything. As we have pointed out, while encrypting everything would be nice, there are a lot of potential issues with this. In some cases, however, such as backing up data to S3 or Glacier for long-term storage, it might be a total no-brainer. More typically, though, numerous factors weigh in. Another choice you have to make with cloud solutions is where you will do your encryption. This needs to be influenced by your specific application requirements, business requirements, and the like. When deploying cloud solutions you also need to think about how you interact with your cloud system. While you might be using a secure VPN from your office or home, you need to think about encrypting your data on the client systems that interact with your AWS-based system. For example, if you upload data to your system, don’t just trust in SSL. 
You should make sure you use the same level of encryption you use on AWS on your home or office systems. AWS allows you to use server-side encryption, client-side encryption, or server-side encryption with the ability to use your own keys that you manage on the client. This last, relatively recent feature – the ability to use your own keys – is important, since various federal and business security standards require you to maintain possession of your own cryptographic keys. That being said, managing your own keys can be difficult to do well. AWS offers some help with Hardware Security Modules through CloudHSM. Another route is the multiple vendors that offer services to help you with enterprise key management, such as CloudCipher. Data in Motion Depending on your application users, you may need to send sensitive data to your AWS instances without being able to encrypt the data on their side first. An example is when creating a membership to your site, where you want to protect the user’s password, or during an e-commerce transaction, where you want to protect credit card and other information. In these cases, instead of using regular HTTP, you want to use the HTTP Secure protocol, or HTTPS. HTTPS makes use of SSL/TLS, an encryption protocol for data in motion, to encrypt data as it travels over the network. While HTTPS can affect the performance of web servers or network applications, its benefits often far outweigh the negligible overhead it creates. Indeed, AWS makes extensive use of SSL/TLS to protect network traffic between you and AWS and between various AWS services. As such, you should make sure to protect any data in motion with a reputable SSL certificate. Also, if you are new to using SSL for your application, you should strongly consider reviewing OWASP’s excellent cheat sheet on SSL. Finally, as stated earlier, don’t just trust in SSL when sharing sensitive data. 
The best practice is to hash or encrypt any and all sensitive data when possible, since attackers can sometimes, and have, compromised SSL security. Data in Use Data-in-use encryption, the encryption of data when it’s being used in RAM or by the CPU, is generally a special case in encryption that is mostly ignored in modern hosted applications. This is because it is very difficult and often not considered worth the effort for systems hosted on premises. Cloud vendors like AWS, though, create special considerations for customers, since the cloud vendor controls physical access to your computer. This can potentially allow a malicious actor with access to that hardware to circumvent data encryption by accessing a system’s physical memory to steal encryption keys or data that sits in plain text in memory. As of 2012, the Cloud Security Alliance has recommended the use of encryption for data in use as a best practice; see here. For this reason, a number of vendors have started offering data-in-use encryption specifically for cloud systems like AWS. This should be considered only for systems or applications that have the most extreme security requirements, such as national security. Companies like PrivateCore and Vaultive currently offer services that allow you to encrypt your data even from your service provider. Summary Encryption and its proper use is a huge subject, and we have only been able to lightly touch on the topic. Implementing encryption is rarely easy, yet AWS takes much of the difficulty out of encryption by providing a number of services for you. That being said, being aware of what your risks are, how encryption can help mitigate those risks, what specific types of encryption to use, and how it will affect your solution requires continued study. To help you with this, some useful reference material has been provided. 
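To make the “hash your sensitive data” advice above concrete, here is a minimal password-storage sketch using only Python’s standard library. The iteration count is illustrative rather than a recommendation – tune it to your hardware and to current guidance:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a PBKDF2-HMAC-SHA256 digest; store the salt, iteration count, and digest."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for every password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    """Re-derive the digest and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, expected)

salt, iters, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, iters, digest))  # True
print(verify_password("wrong guess", salt, iters, digest))                   # False
```

The same pattern – random salt, slow key-derivation function, constant-time comparison – applies whatever language your application is written in; storing a plain SHA-256 of a password offers none of these protections.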
Encryption References OWASP: Guide to Cryptography OWASP: Password Storage Cheat Sheet OWASP: Cryptographic Storage Cheat Sheet Best Practices: Encryption Technology Cloud Security Alliance: Implementation Guidance, Category 8: Encryption AWS Security Best Practices About the author Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, Cisco, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.
Kristen Hardwick
01 Jul 2014
5 min read

Things to Consider When Migrating to the Cloud

After the decision is made to make use of a cloud solution like Amazon Web Services or Microsoft Azure, there is one main question that needs to be answered – “What’s next?” There are many factors to consider when migrating to the cloud, and this post will discuss the major steps for completing the transition. Gather background information Before getting started, it’s important to have a clear picture of what is meant to be accomplished in order to call the transition a success. Keeping the following questions at the forefront during the planning stages will help guide your process and ensure the success of the migration. What are the reasons for moving to the cloud? There are many benefits of moving to the cloud, and it is important to know what the focus of the transition should be. If cost savings are the primary driver, vendor choice may be important. Prices between vendors vary, as do the support services that are offered – that might make a difference in future iterations. In other cases, the elasticity of hardware may be the main appeal. It will be important to ensure that the customization options are available at the desired level. Which applications are being moved? When beginning the migration process, it is important to make sure that the scope of the effort is clear. Consider the option of moving data and applications to the cloud selectively in order to ease the transition. Once the organization has completed a successful small-scale migration into the cloud, a second iteration of the process can take care of additional applications. What is the anticipated cost? A cloud solution will have variable costs associated with it, but it is important to have some estimation of what is expected. This will help when selecting vendors, and it will allow for guidance in configuring the system. What is the long-term plan? Is the new environment intended to eventually replace the legacy system? To work alongside it? 
Begin to think about the plan beyond the initial migration. Ensure that the selected vendor provides service guarantees that may become requirements in the future, like disaster recovery options or automatic backup services. Determine your actual cloud needs One important way to maximize the benefits of the cloud is to ensure that your resources are sufficient for your needs. Cloud computing services are billed based on actual usage, including processing power, storage, and network bandwidth. Configuring too few nodes will limit the ability to support the required applications, and too many nodes will inflate costs. Determine the list of applications and features that need to be present in the selected cloud vendor. Some vendors include backup services or disaster recovery options as add-on services that will impact the cost, so it is important to decide whether or not these services are necessary. A benefit with most vendors is that these services are extremely configurable, so subscriptions can be modified. However, it is important to choose a vendor with packages that make sense for your current and future needs as much as possible, since transitioning between vendors is not typically desirable. Implement security policies Since the data and applications in the cloud are accessed over the Internet, it is of the utmost importance to ensure that all available vendor security policies are implemented correctly. In addition to the main access policies, determine if data security is a concern. Sensitive data such as PII or payment card data may be subject to regulations that impact data encryption rules, especially when being accessed through the cloud. Ensure that the selected vendor is reliable in order to safeguard this information properly. In some cases, applications that are being migrated will need to be refactored so that they will work in the cloud. Sometimes this means making adjustments to connection information or networking protocols. 
In other cases, this means adjusting access policies or opening ports. In all cases, a detailed plan needs to be made at the networking, software, and data levels in order to make the transition smooth. Let’s get to work! Once all of the decisions have been made and the security policies have been established and implemented, the data appropriate for the project can be uploaded to the cloud. After the data is transferred, it is important to ensure that everything was successful by performing data validation and testing of data access policies. At this point, everything will be configured and any application-specific refactoring or testing can begin. In order to ensure the success of the project, consider hiring a consulting firm with cloud experience that can help guide the process. In any case, the vendor, virtual machine specifications, configured applications and services, and privacy settings must be carefully considered in order to ensure that the cloud services provide the solution necessary for the project. Once the initial migration is complete, the plan can be revised in order to facilitate the migration of additional datasets or processes into the cloud environment. About the author Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry where her focus is on designing and developing big data analytics for the Hadoop ecosystem.
Sarah
01 Jul 2014
4 min read

3 Reasons Why "the Cloud" Is a Terrible Metaphor (and One Why It Isn't)

I have a lot of feelings about “the cloud” as a metaphor for networked computing. All my indignation comes too late, of course. I’ve been having this rant for a solid four years, and that ship has long since sailed – the cloud is here to stay. As a figurative expression for how we compute these days, it’s proven to have way more sticking power than, say, the “information superhighway”. (Remember that one?) Still, we should always be careful about the ways we use figurative language. Sure, you and I know we’re really talking about odd labyrinths of blinking lights in giant refrigerator buildings. But does your CEO? I could talk a lot about the dangers of abstracting away our understanding of where our data actually is and who has the keys. But I won’t, because I have even better arguments than that. Here are my three reasons why “the cloud” is a terrible metaphor: 1. Clouds are too easy to draw. Anyone can draw a cloud. If you’re really stuck you just draw a sheep and then erase the black bits. That means that you don’t have to have the first clue about things like SaaS/PaaS/IaaS or local persistent storage to include “the cloud” in your PowerPoint presentation. If you have to give a talk in half an hour about the future of your business, clouds are even easier to draw than Venn diagrams about morale and productivity. Had we called it “Calabi–Yau Manifold Computing”, the world would have saved hundreds of man-hours spent in nonsensical meetings. The only thing sparing us from a worse fate is the stalling confusion that comes from trying to combine slide one – “The Cloud” – and slide two – “Blue-Sky Thinking!”. 2. Hundreds of Victorians died from this metaphor. Well, okay, not exactly. But in the nineteenth century, the Victorians had their own cloud concept – the miasma. The basic tenet was that epidemic illnesses were caused by bad air in places too full of poor people wearing fingerless gloves (for crime). 
It wasn’t until John Snow pointed to the infrastructure that people worked out where the disease was coming from. Snow mapped the pattern of pipes delivering water to infected areas and demonstrated that germs at one pump were causing the problem. I’m not saying our situation is exactly analogous. I’m just saying if we’re going to do the cloud metaphor again, we’d better be careful of metaphorical cholera. 3. Clouds might actually be alive. Some scientists reckon that the mechanism that lets clouds store and release precipitation is biological in nature. If this understanding becomes widespread, the whole metaphor’s going to change underneath us. Kids in school who’ve managed to convince the teacher to let them watch a DVD instead of doing maths will get edu-tained about it. Then we’re all going to start imagining clouds as moving colonies of tiny little cartoon critters. Do you want to think about that every time you save pictures of your drunken shenanigans to your Dropbox? And one reason why it isn’t a bad metaphor at all: 1. Actually, clouds are complex and fascinating. Quick pop quiz – what’s the difference between cirrus fibratus and cumulonimbus? If you know the answer to that, you’re most likely either a meteorologist, or you’re overpaid to sit at your desk googling the answers to rhetorical questions. In the latter case, you’ll have noticed that the Wikipedia article on clouds is about seventeen thousand words long. That’s a lot of metaphor. Meteorological study helps us to track clouds as they move from one geographic area to another, affecting climate, communications, and social behaviour. Through careful analysis of their movements and composition, we can make all kinds of predictions about how our world will look tomorrow. The important point came when we stopped imagining chariots and thunder gods, and started really looking at what lay behind the pictures we’d painted for ourselves.

Kristen Hardwick
01 Jul 2014
5 min read

Buying versus Renting: The Pros and Cons of Moving to the Cloud

Convenience

One major benefit of the IaaS model is the promise of elasticity to support unforeseen demand. This means that the cloud vendor will provide the ability to quickly and easily scale the provided resources up or down, based on actual usage requirements. An organization can therefore plan for the “average” case instead of the “worst case” of usage, simultaneously saving on costs and preventing outages. Additionally, since the systems provided through cloud vendors are usually virtual machines running on the vendor’s underlying hardware, the process of adding new machines, increasing disk space, or subscribing to new services is usually just a change through a web UI, instead of a complicated hardware or software acquisition process. This flexibility is appealing because it significantly reduces the waiting time required to support a new capability.

However, this automation can be a hindrance to administrators and developers who need access to the low-level configuration settings of certain software. Additionally, since the services are offered through a virtualized system, continuity in the underlying environment can’t be guaranteed. Some applications – benchmarking tools, for example – may not be suitable for that type of environment.

Cost

One appealing factor in the transition to the cloud is cost – but in certain situations, using the cloud may not actually be cheaper. Before making a decision, your organization should evaluate the following factors to make sure the transition will be beneficial. One major benefit is the impact on your organization’s budget. If the costs are transitioned to the cloud, they will usually count as operational expenditures, as opposed to capital expenditures. In some situations, this might make a difference when trying to get the budget for the project approved. Additionally, some savings may come in the form of reduced maintenance and licensing fees.
These expenditures will be absorbed into the monthly cost, rather than being an upfront requirement. When subscribing to the cloud, you can disable any unnecessary resources on demand, reducing costs. With physical hardware in the same situation, the servers would have to remain on 24/7 to provide the same access. On the other hand, consider the size of the data. Vendors charge for moving data into or out of the cloud, in addition to the charge for storage. In some cases, the data transfer time alone would prohibit the transition. Also, the elasticity benefits that draw some people to the cloud – scaling up automatically to meet unexpected demand – can have an unexpected impact on the monthly bill. These costs are sometimes difficult to predict, and since cloud computing pricing is based on usage, it is important to weigh the possibility of an unanticipated hefty bill against an initial hardware investment.

Reliability

Most cloud vendors guarantee service availability or access to customer support. This places that burden on the vendor, rather than on the project’s IT department. Similarly, most cloud vendors provide backup and disaster recovery options, either as add-ons or built into the main offering. This can be a benefit for smaller projects that have the requirement but lack the resources to support two full clusters internally. However, even with these guarantees, vendors still need to perform routine maintenance on their hardware. Some server-side issues will result in virtual machines being disabled or relocated – usually communicated with some advance notice. In certain cases this will cause interruptions and require manual intervention from the IT team.

Privacy

All data and services that are transitioned into the cloud become accessible from anywhere via the web – for better or worse.
Using this system, the technique of isolating the hardware on its own private network or behind a firewall is no longer possible. On the positive side, this means that everyone on the team can work from any Internet-connected device. On the negative side, it means that every precaution must be taken to keep the data safe from prying eyes. For some organizations, the privacy concerns alone are enough to keep projects out of the cloud. Even assuming that the cloud can be made completely secure, stories in the news about data loss and password leaks will continue to project a perception of inherent danger. It is important to document all precautions being taken to protect the data and to make sure that all affected parties in the organization are comfortable moving to the cloud.

Conclusion

The decision of whether or not to move into the cloud is an important one for any project or organization. The benefits of flexible hardware requirements, built-in support, and general automation must be weighed against the drawbacks of decreased control over the environment and reduced privacy.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has worked with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry, where her focus is on designing and developing big data analytics for the Hadoop ecosystem.
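The buy-versus-rent arithmetic discussed under “Cost” above comes down to comparing a usage-based monthly bill against an amortized upfront investment. A toy break-even sketch in Python makes the comparison concrete – every rate and figure here is a hypothetical placeholder, not real vendor pricing:

```python
# Toy break-even model: renting (usage-based cloud billing) versus
# buying (upfront hardware amortized over its service life).
# All numbers are hypothetical placeholders, not vendor pricing.

def monthly_cloud_cost(instance_hours, rate_per_hour, gb_stored, rate_per_gb,
                       gb_transferred, rate_per_gb_transfer):
    """Usage-based bill: compute + storage + data transfer."""
    return (instance_hours * rate_per_hour
            + gb_stored * rate_per_gb
            + gb_transferred * rate_per_gb_transfer)

def monthly_owned_cost(hardware_price, lifetime_months, upkeep_per_month):
    """Upfront hardware amortized over its lifetime, plus power/admin upkeep."""
    return hardware_price / lifetime_months + upkeep_per_month

cloud = monthly_cloud_cost(instance_hours=720, rate_per_hour=0.10,
                           gb_stored=500, rate_per_gb=0.03,
                           gb_transferred=200, rate_per_gb_transfer=0.09)
owned = monthly_owned_cost(hardware_price=20_000, lifetime_months=36,
                           upkeep_per_month=400)

print(f"cloud: ${cloud:.2f}/month, owned: ${owned:.2f}/month")
# Elasticity cuts the cloud figure when idle resources are disabled, but a
# demand spike scales the same formula up -- the "unanticipated hefty bill".
```

The point of the sketch is that the cloud term is a function of usage while the owned term is roughly constant, so the comparison flips depending on how spiky your demand is.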

Lars Butler
30 Jun 2014
6 min read

What is ZeroVM?

ZeroVM is a lightweight virtualization technology based on Google Native Client (NaCl). While it shares some similarities with traditional hypervisors and container technologies, it is unique in a number of respects. Unlike KVM and LXC, which provide an entire virtualized operating system environment, ZeroVM isolates single processes and provides no operating system or kernel. This allows instances to start up in a very short time: about five milliseconds. Combined with a high level of security and minimal execution overhead, ZeroVM is well suited to ephemeral processes running untrusted code in multi-tenant environments. There are of course some limitations inherent in the design; ZeroVM cannot be used as a drop-in replacement for something like KVM or LXC. These limitations, however, were deliberate design decisions, necessary in order to create a virtualization platform specifically for building cloud applications.

How ZeroVM is different to other virtualization tools

Blake Yeager and Camuel Gilyadov gave a talk at the 2014 OpenStack Summit in Atlanta which summed up nicely the main differences between hypervisor-based virtual machines (KVM, Xen, and so on), containers (LXC, Docker, and so on), and ZeroVM. Here are the key differences they outlined:

                Traditional VM   Container         ZeroVM
  Hardware      Shared           Shared            Shared
  Kernel/OS     Dedicated        Shared            None
  Overhead      High             Low               Very low
  Startup time  Slow             Fast              Fast
  Security      Very secure      Somewhat secure   Very secure

Traditional VMs and containers provide a way to partition and schedule shared server resources for multiple tenants. ZeroVM accomplishes the same goal using a different approach and with finer granularity. Instead of running one or more application processes in a traditional virtual machine, applications written for ZeroVM must be decomposed into microprocesses, and each one gets its own instance.
The advantage in this case is that you avoid long-running VMs/processes which accumulate state (leading to memory leaks and cache problems). The disadvantage, however, is that it can be difficult to port existing applications. Each process running on ZeroVM is a single stateless unit of computation (much like a function in the “purely functional” sense; more on that to follow), and applications need to be structured specifically to fit this model. Some applications, such as long-running server applications, would arguably be impossible to re-implement entirely on ZeroVM, although some parts could be abstracted away to run inside ZeroVM instances. Applications that are predominantly parallel and involve many small units of computation are better suited to run on ZeroVM.

Determinism

ZeroVM provides a guarantee of functional determinism. What this means in practice is that for a given set of inputs (parameters, data, and so on), the outputs are guaranteed to always be the same. This works because there are no sources of entropy. For example, the ZeroVM toolchain includes a port of glibc with a custom implementation of the time functions, such that time advances in a deterministic way for CPU and I/O operations. No state is accumulated during execution and no instance can be reused. The ZeroVM Run-Time environment (ZRT) does provide an in-memory virtual file system which can be used to read/write files during execution, but all writes are discarded when the instance terminates, unless an output “channel” is used to pipe data to the host OS or elsewhere.

Channels and I/O

“Channels” are the basic I/O abstraction for ZeroVM instances. All I/O between the host OS and ZeroVM must occur over channels, and channels must be declared explicitly in advance. On the host, a channel can map to a file, character device, pipe, or socket. Inside an instance, all channels are presented as files that can be written to and read from, including devices like stdin, stdout, and stderr.
Channels can also be used to connect multiple instances together to create arbitrary multi-stage job pipelines. For example, a MapReduce-style search application with multiple filters could be implemented on ZeroVM by writing each filter as a separate application/script and piping data from one to the next.

Security

ZeroVM has two key security components: static binary validation and a limited system call API. Static validation occurs before “untrusted” user code is executed, to ensure that there are no accidental or malicious instructions that could break out of the sandbox and compromise the host system. Binary validation in ZeroVM is largely based on the NaCl validator. (For more information about NaCl and its validation, you can read the whitepaper at http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/34913.pdf.) To further lock down the execution environment, ZeroVM only supports six system calls via a "trap" interface: pread, pwrite, jail, unjail, fork, and exit. By comparison, containers (LXC) expose the entire Linux system call API, which presents a larger attack surface and more potential for exploitation.

ZeroVM is lightweight

ZeroVM is very lightweight: it can start in about five milliseconds. After the initial validation, program code is executed directly on the hardware, without interpretation overhead or hardware virtualization.

It's easy to embed in existing systems

The security and lightweight nature of ZeroVM make it ideal to embed in existing systems. For example, it can be used for arbitrary data-local computation in any kind of data store, akin to stored procedures. In this scenario, untrusted code provided by any user with access to the system can be executed safely. Because inputs and outputs must be declared explicitly upfront, the only remaining concerns are data access rules and quotas for storage and computation.
Contrasted with a traditional model, where storage and compute nodes are separate, data-local computing can be a more efficient model when the cost of transferring data over the network to and from compute nodes outweighs the computation time itself. ZeroVM has already been integrated with OpenStack Swift using ZeroCloud (middleware for Swift). This turns Swift into a “smart” data store which can scale parallel computations (such as multi-stage MapReduce jobs) across large collections of objects.

Language support

C and C++ applications can run on ZeroVM, provided that they are cross-compiled to NaCl using the provided toolchain. At present there is also support for Python 2.7 and Lua.

Licensing

All projects under the ZeroVM umbrella are licensed under Apache 2.0, which makes ZeroVM suitable for both commercial and non-commercial use (the same as OpenStack).
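The pipeline model described above – small stateless filters connected by explicitly declared channels – can be imitated in ordinary Python to make the idea concrete. This is a conceptual sketch only, using plain generators as stand-ins for sandboxed instances; it is not ZeroVM's actual channel API:

```python
# Conceptual sketch of a two-stage, MapReduce-style pipeline in the spirit
# of ZeroVM's channel model. Each stage is a stateless unit of computation;
# the "channel" between them is just the iterator piped from one to the next.
# This is NOT ZeroVM's API -- only an illustration of the decomposition.

def map_stage(lines):
    """First filter: emit (word, 1) pairs, one small unit of work per line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_stage(pairs):
    """Second filter: fold the incoming stream into word counts."""
    counts = {}
    for word, n in pairs:
        counts[word] = counts.get(word, 0) + n
    return counts

documents = ["the cloud is here", "the cloud scales"]
result = reduce_stage(map_stage(documents))
print(result)  # {'the': 2, 'cloud': 2, 'is': 1, 'here': 1, 'scales': 1}
```

Note that the pipeline is functionally deterministic in the sense the article describes: the same input documents always produce the same counts, because neither stage keeps state between runs or reads from any source of entropy.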