Tech Guides

852 Articles

Bridging the gap between data science and DevOps with DataOps

Richard Gall
23 Mar 2016
5 min read
What’s the real value of data science? Hailed as the sexiest job of the 21st century just a few years ago, there are rumors that it’s not quite proving its worth. Gianmario Spacagna, a data scientist for Barclays bank in London, told Computing magazine at Spark Summit Europe in October 2015 that, in many instances, there’s not enough impact from data science teams. “It’s not a playground. It’s not academic,” he said.

His solution sounds simple. We need to build a bridge between data science and DevOps, and DataOps is perhaps the answer. He says: "If you're a start-up, the smartest person you want to hire is your DevOps guy, not a data scientist. And you need engineers, machine learning specialists, mathematicians, statisticians, agile experts. You need to cover everything otherwise you have a very hard time to actually create proper applications that bring value."

This idea makes a lot of sense. It’s become clear over the past few years that ‘data’ itself isn’t enough; it might even be distracting for some organizations. Sometimes too much time is spent in spreadsheets and not enough time is spent actually doing stuff. Making decisions, building relationships, building things – that’s where real value comes from.

What Spacagna has identified is ultimately a strategic flaw in how data science is used in many organizations. There’s often too much focus on what data we have and what we can get, rather than who can access it and what they can do with it. If data science isn’t joining the dots, DevOps can help.

True, a large part of the problem is strategic, but DevOps engineers can also provide practical solutions by building dashboards and creating APIs. These sorts of things immediately give data additional value by making it more accessible and, put simply, more usable (a toy sketch of the idea follows below).

Even for a modest, medium-sized business, data scientists and analysts will have minimal impact if they are not successfully integrated into the wider culture. While it’s true that many organizations still struggle with this, Airbnb demonstrates how to do it incredibly effectively. Take a look at their Airbnb Engineering and Data Science publication on Medium. In this post, they talk about the importance of scaling knowledge effectively. Although they don’t specifically refer to DevOps, it’s clear that DevOps thinking has informed their approach. In the products they’ve built to scale knowledge, for example, the team demonstrates a very real concern for accessibility and efficiency. What they build is created so people can do exactly what they want and get what they need from data. It’s a form of strict discipline that is underpinned by a desire for greater freedom.

If you keep reading Airbnb’s publication, another aspect of ‘DevOps thinking’ emerges: a relentless focus on customer experience. By this, I don’t simply mean that the work done by the Airbnb engineers is informed by a desire to improve customer experiences; that’s obvious. Instead, it’s the sense that the tools through which internal collaboration and decision making take place should themselves feel like a customer experience. They need to be elegant, engaging, and intuitive. This doesn’t mean seeing every relationship as purely transactional, based on some perverse logic of self-interest, but rather having a deeper respect for how people interact and share ideas. If DevOps is an agile methodology that bridges the gap between development and operations, it can also help to bridge the gap between data and operations.
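A minimal sketch of that idea, assuming a Python shop: a tiny Flask service that exposes a team's numbers over HTTP so colleagues stop hunting through spreadsheets. Flask, the route name, and the in-memory "dataset" are all illustrative choices, not a prescription.

```python
# Toy data API: the endpoint name and hard-coded numbers are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for data that would otherwise live in a spreadsheet.
SIGNUPS = {"2016-03-21": 42, "2016-03-22": 57, "2016-03-23": 61}

@app.route("/signups")
def signups():
    # Anyone on the team can now pull the numbers programmatically.
    return jsonify(SIGNUPS)

if __name__ == "__main__":
    app.run(port=5000)
```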
DataOps - bringing DevOps and data science together

This isn’t a new idea. As much as I’d like to, I can’t claim credit for inventing ‘DataOps’. But there’s not really much point in asserting that distinction. DataOps is simply another buzzword for the managerial class. And while some buzzwords have value, I’m not so sure that we need another one. More importantly, why create another gap between data and development? That gap doesn’t make sense in the world we’re building with software today. Even for web developers and designers, the products they are creating are so driven by data that separating the data from the dev is absurd.

Perhaps, then, it’s not enough to just ask more from our data science, as Gianmario Spacagna does. DevOps offers a solution, but we’re going to miss out on the bigger picture if we start asking for more DevOps engineers and some space for them to sit next to the data team. We also need to ask how data science can inform DevOps. It’s about opening up a dialogue between these different elements. While DevOps evangelists might argue that DevOps has already started that, the way forward is to push for more dialogue, more integration, and more collaboration.

As we look towards the future, with the API economy becoming more and more important to the success of both startups and huge corporations, the relationships between all these different areas are going to become more complex. If we want to build better and build smarter, we’re going to have to talk more. DevOps and DataOps both offer us a good place to start the conversation, but it’s important to remember it’s just the start.

Getting Started with DevOps

Michael Herndon
10 Feb 2016
7 min read
DevOps requires you to know many facets of people and technology. If you're interested in starting your journey into the world of DevOps, take the time to know what you are getting yourself into, be ready to put in some work, and be ready to push out of your comfort zone.

Know What You're Getting Yourself Into

Working in a DevOps job where you're responsible for both coding and operational tasks means that you need to be able to shift mental gears. Mental context switching comes at a cost. You need to be able to pull yourself out of one mindset and switch to another, and you need to be able to prioritize. Accept your limitations and know when it's prudent to request more resources to handle the load.

The amount of context switching will vary depending on the business. Let's say that you join a startup, and you're the only DevOps person on the team. In this scenario, you're most likely the operations team and still responsible for some coding tasks as well. This means that you need to tackle operations tasks as they come in. In this instance, Scrum and Agile will only carry you so far; you'll have to take more of a GTD approach. If you come from a development background, you will be tempted to put coding first as you have deadlines. However, if you are the operations team, then operations must come first.

When you become a part of the operations team, employees at your business are now your customers too. Some days you can churn out code; other days are going to be an onslaught of important, time-sensitive requests. At the business that I currently work for, I took on the DevOps role so that other developers could focus on coding. One of the developers that I work with has exceptional code output. However, operational tasks were impeding their productivity. It was an obvious choice for me to jump in and take over the operational tasks so that the other developer could focus his efforts on bringing new features to customers. It's simply good business. Ego can get in the way of good business and DevOps. Leave your ego at home.

In a bigger business, you may have a DevOps team where there is more breathing room to focus on the things that interest you, whether that's more coding or working with systems.

Emergencies happen. When an emergency arises, you need to be able to calmly assess the situation, avoid the blame game, and provide solutions. Don't react. Respond. If you're too excitable or easily get caught up in the emotions of a given situation, DevOps may be your trial by fire. Work on pulling yourself outside of a situation so that you can see the whole picture and work towards solving the problem. Never play the blame game. Be the person who gets things done.

Dive Into DevOps

Start small. Taking on too much will overwhelm you and stifle progress. After you’ve done a few iterations of taking small steps, you'll be further along the journey than you realize.

"It's a dangerous business, Frodo, going out your door. You step onto the road, and if you don't keep your feet, there's no knowing where you might be swept off to." - Bilbo Baggins

If you're a developer, take one of your side projects and set up continuous delivery for the project. I would keep it simple and use something like Travis CI or AppVeyor and have your final output published somewhere. If you're using something like Node, you could set up nightly builds for npm. If it's .NET, you could use a service like MyGet.
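To make that concrete, here is a minimal sketch of the kind of check a CI service would run on every push - both Travis CI and AppVeyor simply execute the commands you list in their config. The function and test are hypothetical stand-ins for your own side project.

```python
# smoke_test.py - a tiny dependency-free test a CI service could run on each push.

def slugify(text: str) -> str:
    """Toy function standing in for your side project's real code."""
    return "-".join(text.lower().split())

def test_slugify_basic():
    assert slugify("Getting Started with DevOps") == "getting-started-with-devops"

if __name__ == "__main__":
    # Run directly so the sketch needs no test runner; pytest would also pick it up.
    test_slugify_basic()
    print("all smoke tests passed")
```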
The second thing I would do as a developer is focus on learning SSH, security access, and scheduled tasks. One of the things I've seen developers struggle with is locking down systems, so it's worth taking the time to dive into user access permissions. If you're on Windows, learn the Windows Task Scheduler. If you're on Linux, learn to set up cron jobs.

If you're from the operations and systems side of things, pick a scripting language that suits your needs. If you're working for a company that uses Microsoft technology, I'd suggest learning the PowerShell scripting language and a language that compiles to .NET like C# or F#. If you're using open source technologies, I'd suggest learning Bash and a language like Ruby or Python. Puppet and Chef use Ruby; SaltStack uses Python. Build a simple web application with the language of your choice. That should give you enough familiarity with a language for you to start creating scripts that automate tasks.

Read DevOps books like Continuous Delivery or Continuous Delivery and DevOps: A Quickstart Guide. Expand your knowledge. Explore tools. Work on your intercommunication skills. Create a list of tasks that you wish to automate, then make a habit out of reducing that list.

Build a Habit out of Automating Infrastructure

Make it a habit to find time to automate your infrastructure while continuing to support your business. It's rare to get into a position where automating infrastructure is your sole job, so it's important to be able to carve out time to remove mundane work, freeing you to focus your time and value on tasks that can't be automated.

A habit loop is made up of three things: a cue, a routine, and a reward. For example, at 2pm your alarm goes off (cue). You go for a short run (routine). You feel awake and refreshed (reward). Design a cue that works for you. For example, every Friday at 2pm you could switch gears to work on automation. Spend some time automating a task or infrastructure need (routine), then find a reward that suits your lifestyle. A reward could be having a treat on Friday to celebrate all the hard work for the week, or going home early (if your business permits this). Maybe learning something new is the reward, in which case you spend a little time each week with a new DevOps-related technology. Once you've removed some of the repetitive tasks that waste time, you'll find yourself with enough time to take on bigger automation projects that seemed impossible to get to before. Repeat this process ad infinitum (to infinity and beyond).

Lastly, Always Write and Communicate

Whether you plan on going into DevOps or not, the ability to communicate will set you apart from others in your field. In DevOps, communication becomes a necessity because the value you provide may not always be apparent to everyone around you. Furthermore, you need to be able to resolve group conflicts, persuasively elicit buy-in, and provide a vision that people can follow. Always strive to improve your communication skills. Read books. Write. Work on your non-verbal communication skills; non-verbal communication is often claimed to account for as much as 93% of communication, and it's worth knowing that the messages your body language sends could be preventing you from getting your ideas across. Communicating in plain language that your whole intended audience can follow is your goal. People who are technical and nontechnical need to understand problems, solutions, and the value that you are giving them.
Learn to use the right adjectives to paint bright illustrations in the minds of your readers and help them conceptualize hard-to-understand topics. The ability to persuade with writing is almost a lost art. It is a skill that transcends careers, disciplines, and fields of study. Used correctly, it can provide the vision to guide your business into becoming a lean competitor that provides exceptional value to customers. At the end of the day, DevOps exists so that you can provide exceptional value to customers. Let your words guide and inspire the people around you.

Off You Go

All this is easier said than done. It takes time, practice, and years of experience. Don't be discouraged and don't give up. Instead, find things that light up your passion and focus on taking small, incremental steps that allow you to win. You'll be there before you know it.

About the author

Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all-around mischievous nerdy guy.

5 examples of Artificial Intelligence in Web apps

Sugandha Lahoti
20 Aug 2018
7 min read
Modern-day web app development is increasingly focused on building a customer-facing front-end presence with the use of Artificial Intelligence. Web apps use Artificial Intelligence not just for intelligent automation, but also for building recommendation engines, website implementation, and image recognition, among other application areas. In this post, we look at five key areas, illustrated by real-world examples, where web apps are employing Artificial Intelligence to automate some part of their system.

Recommendation Engines of Amazon and Netflix

Curating content based on the user’s context is one of the most widely used AI features in web apps. Amazon, for instance, uses item-based collaborative filtering for product classification. Amazon’s recommendation system uses a combination of goods-based recommendation (users are recommended items similar to those they liked in the past) and buddy-based recommendation (users are recommended things that their Facebook friends like). A toy sketch of item-based filtering appears at the end of this article. Not just for its recommendation system: Amazon has been using AI for multiple tasks. Its AI management strategy is called The Flywheel, where one part of Amazon acts as a catalyst for AI and machine learning growth in other areas.

Read more: Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

Another popular example is Netflix, which revamped its recommendation algorithm based on visual impressions. One of their research projects indicated that the artwork was not only the biggest influencer on a viewer's decision to watch content, but that it also drew over 82% of their focus while browsing Netflix. This led them to develop a new image recommendation algorithm which works in real time to project the image it thinks the user will respond to. They use implicit data (user behavior) and explicit data (user activity) and then feed this data to machine learning algorithms to figure out the relevant content for each user. For each title, users get the image with the highest rank based on their profile. Side by side, it continues collecting data from its 100 million other subscribers to improve its engine’s performance.

Read more: What software stack does Netflix use?

Google and Microsoft using Image recognition

Image recognition can serve multiple uses for web apps, including object and pattern recognition, locating duplicates (exact or partial), image search by fragments, and more. Two such unique applications of image recognition are Google’s Quick Draw and Microsoft’s Captionbot.ai.

Quick Draw is Google’s AI-powered web app game, where users have to draw an everyday object that a neural network tries to recognize. Players are given 20 seconds to draw a random item, and Google’s neural network tries to match it with 50 million other hand-drawn sketches by other players to identify the correct one. Quick Draw aims to generate the world’s largest doodling data set, which is shared publicly to help further machine learning research. The data preserves user privacy by collecting only anonymous metadata, including timestamp, country code, whether or not the drawing was recognized, and which word the drawing corresponded to. This dataset was used in SketchRNN, a neural network that can draw words and interpolate between drawings.

Another image recognition web app is Microsoft’s Captionbot.ai. The system can automatically generate a caption for an uploaded photograph. Users can rate how accurately it has detected what was on display.
The algorithm learns from the rating to make the captions more accurate. It uses three separate services to process the images. The Computer Vision API identifies the components of the photo, then mixes them with data from the Bing Image API, and runs any faces it spots through the Emotion API. The Emotion API analyses facial expressions to detect anger, contempt, disgust, fear, and other traits. Based on the results from these APIs, the caption is generated.

Google Docs powered by Natural Language Processing

Modern web apps can also be fueled with cognitive capabilities to make them stand apart from other apps. Instances of this include transforming human speech to text or conversing with people in natural language. One such example of a web app which includes natural language processing is Google Docs. Google Docs and Slides have an Explore feature to show text, images, and other features relevant to the document that a user is working on at any given point. Docs can also use natural language to search through data and reports, and automatically generate formulas in Sheets. Google Docs recently incorporated an AI grammar checker, announced at Google Cloud Next. It uses a machine translation algorithm to recognize errors and suggest corrections as users type. Google Docs can also be integrated with the Natural Language API to recognize the sentiment of selected text in a Google Doc and highlight it based on that sentiment.

Web-based artificial intelligence Chatbots

Web-based chatbots are just like app-based chatbots, except that they interact with users in the website browser. They use AI techniques such as natural language understanding and pattern recognition to store and distinguish between the context of the information provided and elicit a suitable response for future replies. An example of web-based chatbots are live chat bots, where the conversation with a visitor on a website is automated using a chatbot. Many live chat software companies are already experimenting with chatbots. Examples include the Operator bot used by Intercom, a company building a customer messaging platform, and Driftbot by Drift, which gives your website a personal assistant.

Read More: Top 4 chatbot development frameworks for developers

Another example is AI-based chatbots which help in creating full websites. Right Click is a startup that introduced an A.I.-powered chatbot which uses Artificial Intelligence in a conversational interface to create websites. It asks general questions during the conversation like “Which industry do you belong to?” and “Why do you want to make a website?” and creates customized templates as per the given answers. Similarly, Wix’s Artificial Intelligence Design bot can tailor websites by learning about each person’s or business’ own needs.

Web-based code helpers using AI

Intelligent coding assistants are gaining popularity with their ability to understand code and provide the right suggestions at the right time. They can analyze code on the web and give fast and smart completions. Codota for Chrome is a smart web-based IDE which can build predictive models of code and suggest code completions and related content based on the current context present in the code. It combines program analysis, natural language processing, and machine learning to learn from the code. Users can look for Codota’s icon on every code snippet in their browsers - on GitHub, Stack Overflow, and others. Another example is Deep Cognition’s Deep Learning Studio – Cloud.
It is not exactly an IDE, but it features an AI-powered drag & drop interface to help design deep learning models with ease. It features assisted modeling, for automated tensor size calculations and real-time validation. It also has an AutoML feature to automatically build a neural network.

Even though AI is a great choice to enhance your web apps, an important facet to keep in mind is ensuring the fairness, accuracy, and transparency of your web apps. For instance, web apps powered by natural language should not discriminate against people based on caste, color, or creed, or hurt user sentiments. Similarly, those using neural networks for recognizing images should ensure the filtering of obscene images. Creating such artificial intelligence systems requires a hybrid of designers, programmers, ML engineers, and researchers. This collective group will have a good grasp of user experience, will be comfortable thinking in abstracts and algorithms, and will be equally well versed in the social impacts of artificial intelligence.

Read More: 20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

Read Next
Uber introduces Fusion.js, a plugin-based web development framework for high-performance apps
Electron Fiddle: A ‘code playground’ for experimenting with cross-platform native apps
Warp: Rust’s new web framework for implementing WAI (Web Application Interface)
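As promised in the recommendation-engines section, here is a toy sketch of item-based collaborative filtering ("users who liked X also liked Y") in Python. The ratings matrix and item names are made up for illustration; production systems work at vastly larger scale with far more sophisticated models.

```python
import numpy as np

items = ["book", "camera", "lens", "tripod"]
# Rows are users, columns are items, values are ratings (0 = not rated).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 4, 1],
    [1, 5, 5, 0],
    [0, 4, 4, 5],
], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

# Item-item similarity: compare rating *columns*, not users.
n = len(items)
sim = np.array([[cosine(R[:, i], R[:, j]) for j in range(n)] for i in range(n)])

# Recommend for user 0: score each unrated item by its similarity
# to the items the user has already rated, weighted by those ratings.
user = R[0]
scores = {items[j]: sum(sim[i, j] * user[i] for i in range(n) if user[i])
          for j in range(n) if user[j] == 0}
print(max(scores, key=scores.get))  # prints "lens" for this toy matrix
```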

DevOps is not continuous delivery

Xavier Bruhiere
11 Apr 2017
5 min read
What is the difference between DevOps and continuous delivery? The tech world is full of buzzwords; DevOps is undoubtedly one of the best-known of the last few years. Essentially, DevOps is a concept that attempts to solve two key problems for modern IT departments and development teams - the complexity of a given infrastructure or service topology and market agility. Or, to put it simply, there are lots of moving parts in modern software infrastructures, which make changing and fixing things hard.

Project managers who know their agile inside out - not to mention customers too - need developers to:

Quickly release new features based on client feedback
Keep the service available, even during large deployments
Have no lasting crashes, regressions, or interruptions

How do you do that? You cultivate a DevOps philosophy and build a continuous integration pipeline. The key thing to notice there are the italicised verbs - DevOps is cultural, continuous delivery is a process you construct as a team. But there's also more to it than that.

Why do people confuse DevOps and continuous delivery?

So, we've established that there's a lot of confusion around DevOps and continuous delivery (CD). Let's take a look at what the experts say.

DevOps is defined on AWS as: "The combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity."

Continuous delivery, as stated by Carl Caum from Puppet, "…is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring that business applications and services function as expected through rigorous automated testing."

So yes, both are about delivering code. Both try to enforce practices and tools to improve the velocity and reliability of software in production. Both want the IT release pipeline to be as cost-effective and agile as possible. But if we're getting into the details, DevOps is focused on managing challenging time-to-market expectations, while continuous delivery is a process to manage greater service complexity - making sure the code you ship is solid, basically.

Human problems and coding problems

In its definition of DevOps, Atlassian puts forward a neat formulation: "DevOps doesn’t solve tooling problems. It solves human problems." DevOps, according to this understanding, promotes the idea that development and operational teams should work seamlessly together, and argues that they should design tools and processes to ensure rapid and efficient development-to-production cycles.

Continuous delivery, on the other hand, narrows this scope to a single mantra: your code should always be able to be safely released. It means that any change goes through an automated pipeline of tests (unit, integration, end-to-end) before being promoted to production. Martin Fowler nicely sums up the immediate benefits of this sophisticated deployment routine: "reduced deployment risk, believable progress, and user feedback."

You can't have continuous delivery without DevOps

Applying CD is difficult and requires advanced operational knowledge and enough resources to set up a pipeline that works for the team. Without a DevOps culture, your team won't communicate properly, and technical resources won't be properly designed. It will certainly hurt the most critical IT pipeline: longer release cycles, more unexpected behaviors in production, and a slow feedback loop.
Developers and management might fear the deployment step and become less agile.

You can have DevOps without continuous delivery... but it's a waste of time

The reverse situation - DevOps without CD - is slightly less dangerous. But it is, unfortunately, pretty inefficient. While DevOps is a culture, or a philosophy, it is by no means supposed to remain theoretical. It's supposed to be put into practice. After all, the main aim isn't chin-stroking intellectualism; it's to help teams build better tools and develop processes to deliver code. The time (i.e. money) spent to bootstrap such a culture shouldn't be zeroed out by a lack of concrete actions. CD is a powerful asset for projects trying to conquer a market in a lean fashion. It pays back the investment, with teams of developers focused on business problems and delivering tested solutions to clients as fast as they are ready.

Take DevOps and continuous delivery seriously

What we have, then, are two different, but related, concepts in how modern development teams understand operations. Ignoring either of them wastes resources and hurts engineering efficiency. However, it is important to remember that the scope of DevOps - involving an entire organizational culture, potentially - and the complexity of continuous delivery mean that adoption shouldn't be rushed or taken lightly. You need to make an effort to do it properly. It might need a mid- or long-term roadmap, and it will undoubtedly require buy-in from a range of stakeholders. So, keep communication channels open, consider using managed cloud services if required, understand the value of automated tests and feedback loops, and, most importantly, hire awesome people to take responsibility.

A final word of caution. As sophisticated as they are, DevOps and continuous delivery aren't magical methodologies. A service as critical as AWS S3 claims 99.999999999% durability, thanks to rigorous engineering methods, and yet, on February 28, 2017, it suffered a large service disruption. Code delivery is hard, so keep your processes sharp!

About the author

Xavier Bruhiere is a Senior Data Engineer at Kpler. He is a curious, sharp entrepreneur and engineer who has built many projects, broken most of them, and launched and scaled what was left, learning from them all.

Top 10 Tools for Computer Vision

Aaron Lazar
05 Apr 2018
7 min read
The adoption of Computer Vision has been steadily picking up pace over the past decade, but there’s been a spike in the adoption of various computer vision tools in recent times, thanks to its implementation in fields like IoT, manufacturing, healthcare, security, etc. Computer vision tools have evolved over the years, so much so that computer vision is now also being offered as a service. Moreover, advancements in hardware like GPUs, as well as machine learning tools and frameworks, make computer vision much more powerful in the present day. Major cloud service providers like Google, Microsoft, and AWS have all joined the race towards being the developers’ choice. But which tool should you choose? Today I’ll take you through a list of the top tools and help you understand which one to pick, based on your need.

Computer Vision Tools/Libraries

OpenCV: Any post on computer vision is incomplete without the mention of OpenCV. OpenCV is a great performing computer vision tool and it works well with C++ as well as Python. OpenCV comes prebuilt with all the necessary techniques and algorithms to perform several image and video processing tasks. It’s quite easy to use, and this makes it clearly the most popular computer vision library on the planet! It is multi-platform, allowing you to build applications for Linux, Windows, and Android. At the same time, it does have some drawbacks. It gets a bit slow when working through massive data sets or very large images. Moreover, on its own, it doesn’t have GPU support and relies on CUDA for GPU processing. (A minimal OpenCV sketch appears at the end of this list.)

Matlab: Matlab is a great tool for creating image processing applications and is widely used in research, because it allows quick prototyping. Another interesting aspect is that Matlab code is quite concise compared to C++, making it easier to read and debug. It tackles errors before execution by proposing ways to make the code faster. On the downside, Matlab is a paid tool, and it can get quite slow at execution time, if that concerns you. Matlab is not your go-to tool in an actual production environment, as it was basically built for prototyping and research.

AForge.NET/Accord.NET: You’ll be excited to know that image processing is possible even if you’re a C# and .NET developer, thanks to AForge/Accord. It’s a great tool that has a lot of filters and is great for image manipulation and different transforms. The Image Processing Lab allows for filtering capabilities like edge detection and more. AForge is extremely simple to use, as all you need to do is adjust parameters from a user interface, and its processing speeds are quite good. However, AForge doesn’t possess the power and capabilities of other tools like OpenCV, such as advanced motion picture analysis or advanced processing of images.

TensorFlow: TensorFlow has been gaining popularity over the past couple of years, owing to its power and ease of use. It lets you bring the power of deep learning to computer vision and has some great tools to perform image processing and classification, such as its graph-based tensor API. Moreover, you can make use of the Python API to perform face and expression detection. You can also perform classification using techniques like regression. TensorFlow also allows you to perform computer vision at tremendous magnitudes. One of the main drawbacks of TensorFlow is that it’s extremely resource hungry and can devour a GPU’s capabilities in no time. Moreover, if you wanted to learn how to perform image processing with TensorFlow, you’d have to understand what machine and deep learning is, write your own algorithms, and then go forward from there.

CUDA: CUDA is a platform for parallel computing, invented by NVIDIA. It enables great boosts in computing performance by leveraging the power of GPUs. The CUDA Toolkit includes the NVIDIA Performance Primitives library, which is a collection of signal, image, and video processing functions. If you have large, GPU-intensive images to process, you can choose to use CUDA. CUDA is easy to program and is quite efficient and fast. On the downside, it is extremely high on power consumption, and you will find yourself reformulating for memory distribution in parallel tasks.

SimpleCV: SimpleCV is a framework for building computer vision applications. It gives you access to a multitude of computer vision tools on the likes of OpenCV, pygame, etc. If you don’t want to get into the depths of image processing and just want to get your work done, this is the tool to get your hands on. If you want to do some quick prototyping, SimpleCV will serve you best. However, if your intention is to use it in heavy production environments, you cannot expect it to perform on the level of OpenCV. Moreover, the community forum is not very active, and you might find yourself running into walls, especially with the installation.

GPUImage: GPUImage is a framework, or rather an iOS library, that allows you to apply GPU-accelerated effects and filters to images, live motion video, and movies. It is built on OpenGL ES 2.0. Running custom filters on a GPU calls for a lot of code to set up and maintain. GPUImage cuts down on all of that boilerplate and gets the job done for you.

Computer Vision as a Service

Google Cloud and Mobile Vision APIs: The Google Cloud Vision API enables developers to perform image processing by encapsulating powerful machine learning models in a simple REST API that can be called in an application. Its Optical Character Recognition (OCR) functionality also enables you to detect text in your images. The Mobile Vision API lets you detect objects in photos and video, using real-time on-device vision technology. It also lets you scan and recognize barcodes and text.

Amazon Rekognition: Amazon Rekognition is a deep learning-based image and video analysis service that makes adding image and video analysis to your applications a piece of cake. The service can identify objects, text, people, scenes, and activities, and it can also detect inappropriate content, apart from providing highly accurate facial analysis and facial recognition for sentiment analysis.

Microsoft Azure Computer Vision API: Microsoft’s API is quite similar to its peers and allows you to analyze images, read text in them, and analyze video in near-real time. You can also flag adult content, generate thumbnails of images, and recognize handwriting.

Bonus: SciPy and NumPy

I thought I’d add these in as well, since I’ve seen quite a few developers use Python to build computer vision applications (without OpenCV, that is). SciPy and NumPy are quite powerful enough to perform image processing. scikit-image is a Python package dedicated to image processing, which uses native NumPy and SciPy arrays as image objects. Moreover, you get to use the cool IPython interactive computing environment, and you can also choose to include OpenCV if you want to do some more hardcore image processing.
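Before wrapping up, here is the minimal OpenCV sketch referenced above: load an image and run Canny edge detection. It assumes the opencv-python package is installed; the file names and thresholds are placeholders.

```python
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # load as grayscale
if img is None:
    raise SystemExit("could not read input.jpg")

edges = cv2.Canny(img, threshold1=100, threshold2=200)  # binary edge map
cv2.imwrite("edges.jpg", edges)                          # save the result
```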
Well, there you have it: these were the top tools for computer vision and image processing. Head on over and check out these resources to get working with some of the top tools used in the industry.

The most important skills you need in DevOps

Rick Blaisdell
19 Jul 2017
4 min read
During the last couple of years, we’ve seen how DevOps has exploded to become one of the most competitive differentiators for every organization, regardless of size. When talking about DevOps, we refer to agility and collaboration, the keys that unlock a business’s success. However, to make it work for your business, you have to first understand how DevOps works, and what skills are required for adopting this agile business culture. Let’s look at this in more detail.

DevOps culture

Leaving the benefits aside, here are the three basic principles of a successful DevOps approach:

Well-defined processes
Enhanced collaboration across business functions
Efficient tools and automation

DevOps skills you need

Recently, I came across an infographic showing the top positions that technology companies are struggling to fill, and DevOps was number one on the list. Surprising? Not really. If we look at the skills required for a successful DevOps methodology, we will understand why finding a good DevOps engineer is akin to finding a needle in a haystack. Besides communication and collaboration, which are the most obvious skills that a DevOps engineer must have, here is what draws the line between success and failure:

Knowledge of infrastructure – whether we are talking about datacenter-based or cloud infrastructure, a DevOps engineer needs to have a deep understanding of different types of infrastructure and their components (virtualization, networking, load balancing, etc.).

Experience with infrastructure automation tools – given that DevOps is mainly about automation, a DevOps engineer must have the ability to implement automation tools at any level.

Coding – when talking about coding skills for DevOps engineers, I am not talking about just writing code, but rather delivering solutions. In a DevOps organization, you need well-experienced engineers who are capable of delivering solutions.

Experience with configuration management tools – tools such as Puppet, Chef, or Ansible are mandatory for optimizing software deployment, and you need engineers with the know-how.

Understanding continuous integration – an essential part of a DevOps culture, continuous integration is the process that increases engagement across the entire team and allows source code updates to be run whenever required.

Understanding security incident response – security is the hot button for all organizations, and one of the most pressing challenges to overcome. Having engineers with a strong understanding of how to address security incidents and develop a recovery plan is mandatory for creating a solid DevOps culture.

Besides the above skills that DevOps engineers should have, there are also skills that companies need to adopt:

Agile development – an agile environment is the foundation on which the DevOps approach has been built. To get the most out of this innovative approach, your team needs strong collaboration capabilities to improve delivery and quality. You can create your dream team by teaching different agile approaches such as Scrum, Kaizen, and Kanban.

Process re-engineering – forget everything you knew. This is one good piece of advice. The DevOps approach was developed to polish and improve the traditional software development lifecycle, but also to highlight and encourage collaboration among teams, so an element of unlearning is required.
The DevOps approach has changed the way people collaborate with each other, improving not only processes but products and services as well. Here are the benefits:

Ensure faster delivery times – every business owner wants to see their product or service on the market as soon as possible, and the DevOps approach manages to do that. Moreover, since you decrease the time-to-market, you will increase your ROI; what more could you ask for?

Continuous release and deployment – with strong continuous release and deployment practices, the DevOps approach is the perfect way to ensure the team is continuously delivering quality software within shorter timeframes.

Improve collaboration between teams – there has always been a gap between the development and operations teams, a gap that disappeared once DevOps was born. Today, in order to deliver high-quality software, the devs and ops are forced to collaborate, share, and revise strategies together, acting as a single unit.

Bottom line, DevOps is an essential approach that has changed not only results and processes, but also the way in which people interact with each other. Judging by the way it has progressed, it’s safe to assume that it's here to stay.

About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.

6 common use cases of Reverse Proxy scenarios

Guest Contributor
05 Oct 2018
6 min read
Proxy servers are used as intermediaries between a client and a website or online service. By routing traffic through a proxy server, users can disguise their geographic location and their IP address. Reverse proxies, in particular, can be configured to provide a greater level of control and abstraction, thereby ensuring the flow of traffic between clients and servers remains smooth. This makes them a popular tool for individuals who want to stay hidden online, but they are also widely used in enterprise settings, where they can improve security, allow tasks to be carried out anonymously, and control the way employees are able to use the internet.

What is a Reverse Proxy?

A reverse proxy server is a type of proxy server that usually exists behind the firewall of a private network. It directs any client requests to the appropriate server on the backend. Reverse proxies are also used as a means of caching common content and compressing inbound and outbound data, resulting in a faster and smoother flow of traffic between clients and servers. Furthermore, the reverse proxy can handle other tasks, such as SSL encryption, further reducing the load on web servers.

There is a multitude of scenarios and use cases in which having a reverse proxy can make all the difference to the speed and security of your corporate network. By providing you with a point at which you can inspect traffic and route it to the appropriate server, or even transform the request, a reverse proxy can be used to achieve a variety of different goals.

Load Balancing to route incoming HTTP requests

This is probably the most familiar use of reverse proxies for many users. Load balancing involves the proxy server being configured to route incoming HTTP requests to a set of identical servers. By spreading incoming requests across these servers, the reverse proxy is able to balance out the load, sharing it amongst them equally. (A toy round-robin sketch appears at the end of this article.) The most common scenario in which load balancing is employed is when you have a website that requires multiple servers, because the volume of requests is too much for one server to handle efficiently. By balancing the load across multiple servers, you also move away from an architecture with a single point of failure. Usually, the servers will all be hosting the same content, but there are also situations in which the reverse proxy will be retrieving specific information from one of a number of different servers.

Provide security by monitoring and logging traffic

By acting as the mediator between clients and your system’s backend, a reverse proxy server can hide the overall structure of your backend servers. This is because the reverse proxy will capture any requests that would otherwise go to those servers and handle them securely. A reverse proxy can also improve security by providing businesses with a point at which they can monitor and log traffic flowing through their network. A common use case in which a reverse proxy is used to bolster the security of a network is the use of a reverse proxy as an SSL gateway. This allows you to communicate using HTTP behind the firewall without compromising your security. It also saves you the trouble of having to configure security for each server behind the firewall individually.

A rotating residential proxy, also known as a backconnect proxy, is a type of proxy that frequently changes the IP addresses and connections that the user uses. This allows users to hide their identity and generate a large number of requests without setting off alarms. A reverse rotating residential proxy can be used to improve the security of a corporate network or website, because the servers in question will display the information for the proxy server while keeping their own information hidden from potential attackers.

No need to install certificates on your backend servers with SSL Termination

SSL termination occurs when an SSL connection ends, or when the traffic shifts between encrypted and unencrypted requests. By using a reverse proxy to handle any incoming HTTPS connections, you can have the proxy server decrypt the request, and then pass on the unencrypted request to the appropriate server. Taking this approach offers practical benefits. For example, it eliminates the need to install certificates on your backend servers. It also provides you with a single configuration point for managing SSL/TLS. Removing the need for your web servers to undertake this decryption means that you are also reducing the processing load on the server.

Serve static content on behalf of backend servers

Some reverse proxy servers can be configured to also act as web servers. Websites contain a mixture of dynamic content, which changes over time, and static content, which always remains the same. If you can configure your reverse proxy server to serve static content on behalf of backend servers, you can greatly reduce the load, freeing up more power for dynamic content rendering. Alternatively, a reverse proxy can be configured to behave like a cache. This allows it to store and serve content that is frequently requested, thereby further reducing the load on backend servers.

URL rewriting before requests go on to the backend servers

Anything that a business can do to easily improve their SEO score is worth considering. Without an investment in your SEO, your business or website will remain invisible to search engine users. With URL rewriting, you can compensate for any legacy systems you use which produce URLs that are less than ideal for SEO. With a reverse proxy server, the URLs can be automatically reformatted before they are passed on to the backend servers.

Combine different websites into a single URL space

It is often desirable for a business to adopt a distributed architecture whereby different functions are handled by different components. With a reverse proxy, it is easy to route a single URL to a multitude of components. To anyone who uses your URL, it will simply appear as if they are moving to another page on the website. In fact, each page within that URL might actually be connecting to a completely different backend service. This is an approach that is widely used for web service APIs.

To sum up, the primary function of a reverse proxy is load balancing, ensuring that no individual backend server becomes inundated with more traffic or requests than it can handle. However, there are a number of other scenarios in which a reverse proxy can potentially offer enormous benefits.

About the author

Harold Kilpatrick is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Read Next
HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
How to Configure Squid Proxy Server
Acting as a proxy (HttpProxyModule)
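As referenced in the load-balancing section, here is a toy round-robin reverse proxy in Python. It forwards each incoming GET to the next backend in a fixed list and relays the response. The ports are placeholders, there is no error handling, and a real deployment would use nginx, HAProxy, or similar instead; this sketch only illustrates the routing idea.

```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical backends; in practice these would be separate machines.
BACKENDS = itertools.cycle(["http://127.0.0.1:8001", "http://127.0.0.1:8002"])

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                       # round-robin choice
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
            status = resp.status
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # relay the backend's reply

if __name__ == "__main__":
    HTTPServer(("", 8080), Proxy).serve_forever()      # clients see only :8080
```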

Is Dart programming dead already?

Amarabha Banerjee
26 Jul 2018
3 min read
Dart is an open-source, object-oriented, general-purpose programming language developed by Google in 2011. Dart uses a 'C' style syntax and optionally transcompiles into JavaScript. It is used for both client-side and server-side web development. Dart is also being used for native and cross-platform mobile development. In spite of all the capabilities that Dart possesses, there are rumors that Dart is heading nowhere. Is there any truth to them? Through this article, we will see how Dart can turn the tide around with some key new projects and application areas that have been explored recently. But first, let's talk about its recent TIOBE ranking - the de facto standard for gauging the popularity of programming languages.

The latest rankings show Dart at the 24th spot out of the 50 languages that TIOBE tracks, behind languages like R, Delphi, and even Swift, which was released in 2014, three years after Dart was introduced. Even languages like Delphi and R have seen a recent spike in implementation across various application domains. Codementor ranks Dart at #1 in the list of programming languages one should not learn in 2018. They looked at community engagement, growth, and the job market to arrive at this conclusion.

Source: Codementor.io

The job trends for Dart have also been quite stagnant for some time. After a minor dip between 2015 and 2016, the job trends are now back at the 2014 mark: neither growth nor decline. The major cause for concern has been that very few companies use Dart in their development stack. Other than Google-backed products such as AdWords, Google Fiber, and Flutter, only a few other major companies like Workiva, Adobe, and Blossom have actively implemented Dart. Job listings are similar in number to 2014, and while developer salaries are pretty high, the lack of sustained growth balances that advantage out. Google had also shifted to TypeScript as the official language for its most popular front-end development framework, Angular, which was seen as a minor setback for Dart. When compared to later entrants like Swift, is this a sign of worry for Dart?

Source: ITJobswatch.uk

The answer is a definite 'No'. The reason is Dart’s ease of use, lack of boilerplate, and extremely lightweight nature. Developers have termed it a language for the long run. These predictions got a major boost when Google recently announced its latest cross-platform mobile development framework, Flutter, which is written in Dart. Reviews of Flutter have also been encouraging, since it paves the way for simpler, native-like mobile apps. The popularity of Flutter could mean a revival of Dart in the mobile development scenario.

But Google might have even bigger aspirations for Dart. Google might be thinking of a possible replacement for its flagship Android operating system, which is plagued by irregular update cycles due to the multiple instances of it running on different devices. Google is developing an operating system called Fuchsia, which is also being written in Dart. All of this points in one direction: Dart is here to stay. If everything goes as per Google's vision, Fuchsia will bring Dart to the forefront of mobile development.

**Edited for better clarity on Dart’s comparison to other languages like Swift, Delphi, and R, and elaborated on Dart’s real-world use cases.

Read Next
Why Google Dart Will Never Win The Battle For The Browser
Building Games with HTML5 and Dart
Google Fuchsia: What’s all the fuss about?

What is PyTorch and how does it work?

Sunith Shetty
18 Sep 2018
7 min read
PyTorch is a Python-based scientific computing package that uses the power of graphics processing units. It is also one of the preferred deep learning research platforms, built to provide maximum flexibility and speed. It is known for providing two of its most high-level features: tensor computations with strong GPU acceleration support, and building deep neural networks on a tape-based autograd system. There are many existing Python libraries which have the potential to change how deep learning and artificial intelligence are performed, and this is one such library. One of the key reasons behind PyTorch’s success is that it is completely Pythonic and one can build neural network models effortlessly. It is still a young player compared to its competitors; however, it is gaining momentum fast.

A brief history of PyTorch

Since its release in January 2016, many researchers have continued to increasingly adopt PyTorch. It has quickly become a go-to library because of its ease in building extremely complex neural networks. It is giving tough competition to TensorFlow, especially when used for research work. However, there is still some time before it is adopted by the masses due to its still “new” and “under construction” tags.

PyTorch's creators envisioned this library to be highly imperative, which allows users to run all their numerical computations quickly. This is an ideal methodology which fits perfectly with the Python programming style. It has allowed deep learning scientists, machine learning developers, and neural network debuggers to run and test part of the code in real time, so they don’t have to wait for the entire code to be executed to check whether it works or not. You can always use your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch functionalities and services when required.

Now you might ask, why PyTorch? What's so special about using it to build deep learning models? The answer is quite simple: PyTorch is a dynamic library (very flexible; you can use it as per your requirements and changes) which is currently adopted by many researchers, students, and artificial intelligence developers. In a recent Kaggle competition, the PyTorch library was used by nearly all of the top 10 finishers.

Some of the key highlights of PyTorch include:

Simple interface: It offers an easy-to-use API, thus it is very simple to operate and run like Python.

Pythonic in nature: This library, being Pythonic, smoothly integrates with the Python data science stack. Thus it can leverage all the services and functionalities offered by the Python environment.

Computational graphs: In addition to this, PyTorch provides an excellent platform which offers dynamic computational graphs; thus you can change them during runtime. This is highly useful when you have no idea how much memory will be required for creating a neural network model.

PyTorch Community

The PyTorch community is growing in numbers on a daily basis. In just a short year and a half, it has shown great development, leading to its citation in many research papers and groups. More and more people are bringing PyTorch into their artificial intelligence research labs to provide quality-driven deep learning models.
Even though it is in the beta release, there are 741 contributors on the official GitHub repository working on enhancing and providing improvements to the existing PyTorch functionalities. PyTorch doesn’t limit itself to specific applications because of its flexibility and modular design. It has seen heavy use by leading tech giants such as Facebook, Twitter, NVIDIA, and Uber in multiple research domains such as NLP, machine translation, image recognition, and neural networks.

Why use PyTorch in research?

Anyone who is working in the field of deep learning and artificial intelligence has likely worked with TensorFlow, Google’s most popular open source library, before. However, the latest deep learning framework, PyTorch, solves major problems in terms of research work. Arguably PyTorch is TensorFlow’s biggest competitor to date, and it is currently a much-favored deep learning and artificial intelligence library in the research community.

Dynamic computational graphs

It avoids the static graphs used in frameworks such as TensorFlow, thus allowing developers and researchers to change how the network behaves on the fly. Early adopters are preferring PyTorch because it is more intuitive to learn when compared to TensorFlow.

Different back-end support

PyTorch uses different backends for CPU, GPU, and various functional features rather than using a single back-end. It uses the tensor backend TH for CPU and THC for GPU, while the neural network backends THNN and THCUNN serve CPU and GPU respectively. Using separate backends makes it very easy to deploy PyTorch on constrained systems.

Imperative style

The PyTorch library is specially designed to be intuitive and easy to use. When you execute a line of code, it gets executed, allowing you to perform real-time tracking of how your neural network models are built. Because of its excellent imperative architecture and fast, lean approach, overall PyTorch adoption in the community has increased.

Highly extensible

PyTorch is deeply integrated with C++ code, and it shares some C++ backend with the deep learning framework Torch. Users can thus program in C/C++ using an extension API based on cFFI for Python, compiled for CPU or GPU operation. This feature has extended PyTorch usage to new and experimental use cases, making it a preferred choice for research use.

Python approach

PyTorch is a native Python package by design. Its functionalities are built as Python classes, hence all its code can seamlessly integrate with Python packages and modules. Similar to NumPy, this Python-based library enables GPU-accelerated tensor computations plus rich options of APIs for neural network applications. PyTorch provides a complete end-to-end research framework which comes with the most common building blocks for carrying out everyday deep learning research. It allows chaining of high-level neural network modules, because it supports a Keras-like API in its torch.nn package.

PyTorch 1.0: The path from research to production

We have been discussing all the strengths PyTorch offers, and how these make it a go-to library for research work. However, one of its biggest downsides has been its poor production support, and this is expected to change soon. PyTorch 1.0 is expected to be a major release which will overcome the challenges developers face in production.
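To ground the define-by-run style described above, here is a minimal sketch. It assumes only that the torch package is installed; the shapes and the threshold are arbitrary.

```python
import torch

x = torch.randn(3, requires_grad=True)  # leaf tensor tracked by autograd

# Ordinary Python control flow builds the graph at run time:
y = x * 2
while y.norm() < 10:   # how many iterations run depends on the data,
    y = y * 2          # so the graph itself is decided on the fly

loss = y.sum()
loss.backward()        # tape-based autograd walks the graph just built
print(x.grad)          # d(loss)/dx, available immediately
```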
PyTorch 1.0: The path from research to production

We have been discussing the strengths PyTorch offers and how they make it a go-to library for research work. One of its biggest downsides, however, has been poor production support, and that is expected to change soon. PyTorch 1.0 is expected to be a major release that overcomes the challenges developers face in production.

This new iteration of the framework will merge Python-based PyTorch with Caffe2, allowing machine learning developers and deep learning researchers to move from research to production without dealing with migration challenges. Version 1.0 will unify research and production capabilities in one framework, providing the flexibility and performance optimization both require, and promising to handle the tasks involved in running deep learning models efficiently at massive scale. Along with production support, PyTorch 1.0 will bring further usability and optimization improvements. Existing code will continue to work as-is; there won't be any changes to the existing API. If you want to stay updated on the library's progress, you can visit its Pull Requests page. The beta release of this long-awaited version is expected later this year, and major vendors like Microsoft and Amazon are expected to provide complete support for the framework across their cloud products.

Summing up, PyTorch is a compelling player among deep learning and artificial intelligence libraries, exploiting its unique niche as a research-first library. It overcomes the challenges described above and provides the performance needed to get the job done. If you are a mathematician, researcher, or student inclined to learn how deep learning is performed, PyTorch is an excellent choice as your first deep learning framework.

Read more: Can a production-ready PyTorch 1.0 give TensorFlow a tough time? A new geometric deep learning extension library for PyTorch releases! Top 5 tools for reinforcement learning
What is the difference between declarative and imperative programming?

Antonio Cucciniello
10 Mar 2018
4 min read
Declarative programming and imperative programming are two different approaches that offer different ways of working on a given project or application. But what is the difference between declarative and imperative programming? And when should you use one over the other?

What is declarative programming?

Let us first start with declarative programming. This is the form or style of programming in which we are most concerned with what we want as the answer, or what would be returned. Here, we as developers are not concerned with how we get there, simply with the answer that is received.

What is imperative programming?

Next, let's take a look at imperative programming. This is the form and style of programming in which we care about how we get to an answer, step by step. We want the same result ultimately, but we are telling the compiler to do things a certain way in order to achieve the correct answer we are looking for.

An analogy

If you are still confused, hopefully this analogy will clear things up. Compare wanting to learn a skill yourself with outsourcing the work to someone else. If you are outsourcing work, you might not care about how the work is completed; rather, you care only about what the final product or result looks like. This is like declarative programming: you know exactly what you want, and you program a function to give you just that. Say instead that you do not want to outsource the work for your business or project, and you tell yourself that you want to learn this skill for the long term. Now, when attempting to complete the task, you care about how it is actually done; you need to know the individual steps along the way in order to get it working properly. This is similar to imperative programming.
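To make the contrast concrete, here is the same small task, summing the squares of the even numbers in a list, written both ways (a Python sketch; the task itself is an illustration, not from the original article):

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative: spell out each step of *how* to reach the answer
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Declarative: state *what* we want; the steps are inferred
total_declarative = sum(n * n for n in numbers if n % 2 == 0)

assert total == total_declarative == 56
```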
Why you should use declarative programming

Reusability

Since the way the result is achieved does not necessarily matter here, the functions you build can be more general and can potentially serve multiple purposes, not just one. Not rewriting code can speed up the program you are currently writing and any others that use the same functionality in the future.

Reducing errors

Given that in declarative programming you tend to write functions that do not change state, as you would in functional programming, the chance of errors arising is smaller, and your application becomes more stable. Removing side effects from your functions lets you know exactly what comes in and what goes out, which makes for a more predictable program.

Potential drawbacks of declarative programming

Lack of control

In declarative programming, you may use functions that someone else created in order to achieve the desired results. But you may need specific things to happen behind the scenes to make your result come out properly. You do not have this control in declarative programming as you would in imperative programming.

Inefficiency

When the implementation is controlled by something else, you may have problems making your code efficient. In applications with time constraints, you will need to program the individual steps yourself to make sure your program runs as efficiently as possible.

There are benefits and disadvantages to both forms. Overall, it is entirely up to you, the programmer, to decide which implementation to follow in your code. If you are solely focused on the data, consider using the declarative programming style. If you care more about the implementation and how something works, stick to an imperative approach. More importantly, you can mix both styles; it is extremely flexible. You are in charge here.
Cloud pricing comparison: AWS vs Azure

Guest Contributor
02 Feb 2019
11 min read
On average, businesses waste about 35% of their cloud spend by using their cloud resources inefficiently. This amounts to more than $10 billion in wasted cloud spend across just the top three public cloud providers. Although the unmatched compute power, data storage options, and efficient content delivery systems of the leading public cloud providers can support incredible business growth, they can also breed complacency: it is easy to lose control of costs when your cloud provider appears to be keeping things running smoothly. To stop this from happening, it is essential to adopt a new approach to how we manage, and optimize, cloud spend. That is not easy, as pricing structures can be complicated. In this post, we will look at how both AWS and Azure structure their pricing, and how you can best determine what is right for you.

Different types of cloud pricing schemes

Broadly, cloud pricing models range across three types. The first is a pure subscription-based model, where services are charged based on a cloud catalog and users are billed per month, per mailbox, or per app license ordered. Here, subscribers are billed for all the resources to which they are subscribed, whether they use them or not. The second is pay-as-you-go, where subscribers begin with a billing amount of zero that grows with the services and resources they use. Amazon uses the pay-as-you-go model, charging a predetermined price for every hour of virtual machine resources used; the same model is used by other leading providers, including Microsoft Azure and the Google Cloud Platform. The third variant is an enterprise billing service, based on the number of active users assigned to a particular cloud subscription; Microsoft Azure is a leading provider of this kind of subscription. Most cloud providers offer varying combinations of these three models with attractive discount options built in; these are discussed below.

What free tier services do AWS and Azure offer?

Both AWS and Azure offer a "free tier" service for new subscribers, so that potential long-term customers can test the service before committing for the long run. Amazon allows subscribers to try out most AWS services free for a year, including RDS, S3, EC2, Elastic Block Store (EBS), and Elastic Load Balancing. For example, you can use EC2 and EBS on the free tier to host a website for a whole year: EBS pricing is zero unless your usage exceeds the limit of 30GB of storage, and the EC2 free tier includes 730 hours of a t2.micro instance. Azure offers similar deals for new users. Azure services like App Service, Virtual Machines, Azure SQL Database, Blob Storage, and Azure Kubernetes Service (AKS) are free for the first 12 months. Additionally, Azure provides the Functions compute service (for serverless) with 1 million free requests every month throughout the subscription, which is useful if you want to give serverless a try.

AWS and Azure's pay-as-you-go, on-demand pricing models

Under the pay-as-you-go model, AWS and Azure let subscribers simply settle their bills at the end of every month, with no upfront investment. This is a good option if you want to avoid a long-term, binding contract. Most resources are available on demand and charged per hour, with costs calculated from the number of hours the resource was used.
For data storage and data transfer, rates are generally calculated per gigabyte. Subscribers are notified 30 days in advance of any changes to the pay-as-you-go rates, as well as when new services are added to the platform.

Reserve-and-pay-less pricing model

In addition to the on-demand pricing model, Amazon AWS has an alternative scheme called Reserved Instances (RI) that allows the subscriber to reserve capacity for specific products. RI offers discounted hourly rates and capacity reservation for the EC2 and RDS services. A subscriber can reserve a resource and save up to 75% of total billing costs in the long run; these discounted rates are automatically applied to the subscriber's AWS bills. Subscribers can reserve instances for either a one-year or a three-year term. Microsoft Azure offers to help subscribers save up to 72% compared to its pay-as-you-go model when they sign up for one- to three-year terms for Windows and Linux virtual machines (VMs). Microsoft also allows for added flexibility: if your business needs change, you can cancel your Azure RI subscription at any time and return the remaining unused RI to Microsoft for an early termination fee.

Use-more-and-pay-less pricing model

AWS offers subscribers one additional payment option. For data transfer and data storage services, AWS gives discounts based on the subscriber's usage. These volume-based discounts help subscribers realize critical savings as their usage increases, so businesses can benefit from economies of scale and grow while keeping costs relatively under control. AWS also lets subscribers sign up for services that suit a growing business: its storage services, for example, offer lower pricing based on how frequently data is accessed and the performance needed for retrieval, and S3 storage pricing is tiered, stepping down as monthly usage grows. For EC2, you can get a discount of up to 10% if you reserve more.

Comparing cloud pricing on Azure and AWS

The major cloud service providers (Amazon Web Services, Azure, Google Cloud Platform, and IBM) continually decrease the prices of cloud instances, provide new and innovative discount options, add instance types, and drop billing increments. In some cases, notably Microsoft Azure, per-second billing has also been introduced. However, as costs decrease, complexity increases, and it is paramount for subscribers to understand and navigate this complexity efficiently. We take a crack at it here.

Reserved instance pricing

Following Azure's Reserved Instances, AWS and GCP have also introduced publicly available discounts, some reaching up to 75%, in exchange for committing to a one- to three-year term with the provider. We covered this briefly in the section above. Before signing up, however, subscribers need to understand how much usage they are committing to and how much to leave on demand. To do this, they need to consider several factors:

- Historical usage, by region, instance type, and so on
- Steady-state versus part-time usage
- An estimate of usage growth or decline
- The probability of switching cloud service providers
- The possibility of choosing alternative computing models like serverless or containers

A quick back-of-the-envelope calculation, sketched below, shows how these factors interact.
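The following sketch uses the Standard 2 vCPU (no local disk) Azure figures from the tables in the next section ($0.100 per hour on demand, $508 per year reserved) to estimate the utilization level at which a one-year reservation starts to pay off. It is an illustration only, not a quote of current prices:

```python
# Break-even sketch: when does a 1-year reserved instance beat on-demand?
on_demand_hourly = 0.100   # $/hour, pay-as-you-go rate (illustrative)
ri_annual = 508.0          # $ for a 1-year reservation (illustrative)

hours_per_year = 365 * 24  # 8760
break_even = ri_annual / (on_demand_hourly * hours_per_year)
print(f"The RI pays off above {break_even:.0%} utilization")   # ~58%

# Example: an instance that only runs 40 hours per week
annual_on_demand = on_demand_hourly * 40 * 52
print(f"On-demand: ${annual_on_demand:.0f} vs reserved: ${ri_annual:.0f}")
```

In this example a part-time workload costs roughly $208 a year on demand, far cheaper than the $508 reservation, while a steady-state workload running around the clock flips the comparison. This is exactly why the historical-usage and steady-state factors above matter.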
On-demand instance pricing

On-demand instances work best for applications with short-term, irregular workloads that are nevertheless too critical to be interrupted. For instance, if you run periodic cron jobs that last a few hours each, you can move them to on-demand instances. Each on-demand instance is billed per instance hour from the time it is launched until it is stopped or terminated; partial instance hours are rounded up to the full hour during billing. They are most useful during the testing or development phase of applications. On-demand instances are available in many levels of computing power, designed for different tasks within the cloud environment, and carry no binding contractual commitments. They are, however, generally among the most expensive purchasing options for instances. The table below shows the on-demand (OD) price per hour for AWS and Azure cloud services, and the hourly price for each GB of RAM.

| VM Type | AWS OD Hourly | Azure OD Hourly | AWS OD / GB RAM | Azure OD / GB RAM |
|---|---|---|---|---|
| Standard 2 vCPU w Local SSD | $0.133 | $0.100 | $0.018 | $0.013 |
| Standard 2 vCPU no local disk | $0.100 | $0.100 | $0.013 | $0.013 |
| Highmem 2 vCPU w Local SSD | $0.166 | $0.133 | $0.011 | $0.008 |
| Highmem 2 vCPU no local disk | $0.133 | $0.133 | $0.009 | $0.008 |
| Highcpu 2 vCPU w Local SSD | $0.105 | $0.085 | $0.028 | $0.021 |
| Highcpu 2 vCPU no local disk | $0.085 | $0.085 | $0.021 | $0.021 |

The on-demand price of Azure instances is cheaper than AWS for certain VM types; the difference is most evident for instances with local SSD.

Discounted cloud instance pricing

When it comes to discounted cloud pricing, remember that it comes with a lock-in period of one to three years. It therefore works best for organizations that are stable, know their historical cloud usage, and can fairly accurately predict what cloud services they will require over the next 12 months. In the table below, we compare the annual costs of one-year reserved instances (RI) on AWS and Azure.

| VM Type | AWS 1Y RI Annual | Azure 1Y RI Annual | AWS 1Y RI Annual / GB RAM | Azure 1Y RI Annual / GB RAM |
|---|---|---|---|---|
| Standard 2 vCPU w Local SSD | $867 | $508 | $116 | $64 |
| Standard 2 vCPU no local disk | $622 | $508 | $78 | $64 |
| Highmem 2 vCPU w Local SSD | $946 | $683 | $63 | $43 |
| Highmem 2 vCPU no local disk | $850 | $683 | $56 | $43 |
| Highcpu 2 vCPU w Local SSD | $666 | $543 | $178 | $136 |
| Highcpu 2 vCPU no local disk | $543 | $543 | $136 | $136 |

Azure's rates are clearly better than Amazon's, and by a good margin: Azure offers better discounted rates for Standard, Highmem, and Highcpu compute instances.

Optimizing cloud pricing

Subscribers need to move beyond short-term, one-time fixes and use automation to continuously monitor their spend, raise alerts for over- or under-use of services, and take automated action based on predetermined conditions. Here are some of the ways you can optimize your cloud spending.

Cloud pricing calculators

Cloud pricing tools let you list the different parameters of your AWS or Azure subscriptions and calculate the approximate monthly cost they would likely incur.

[Image: AWS Simple Monthly Calculator]

You can try the official cloud pricing calculators from AWS and Azure or a third-party pricing calculator. Calculators help you optimize your pricing based on your requirements.
For example, if you have a long-term requirement for running instances and you are currently running them under on-demand pricing, calculators can offer better insight into reserved-instance schemes and other ways to improve your cloud expenditure. The Azure calculator by NetApp, for instance, offers more price-optimization options, including tiering less frequently used data to storage objects like Azure Blob and customizing snapshot creation and storage efficiency. Zerto is another popular calculator for Azure and AWS, with a simpler interface. Note, however, that any estimated cost is based on current pricing and is liable to change.

Price List API

Historically, narrowing down the final usage cost involved a considerable amount of manual rate checking: collecting price points, then checking and cross-referencing them by hand. For AWS, the Price List API offers programmatic access, which is especially beneficial to designers, who can now query the AWS price list instead of searching manually through the web. To make matters easier, the queries can be constructed in simple code in any language. Azure offers a similar billing API for gaining insight into your Azure usage programmatically.
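As a rough sketch of what such a query can look like using the AWS Price List API through boto3 (the filter values are illustrative; the pricing endpoint is served from selected regions such as us-east-1):

```python
import json
import boto3

# The Price List API is only served from selected regions, such as us-east-1
pricing = boto3.client("pricing", region_name="us-east-1")

# Ask for EC2 products matching an illustrative instance type and location
response = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "m5.large"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
    ],
    MaxResults=5,
)

# Each entry in PriceList is a JSON document describing one priced product
for item in response["PriceList"]:
    product = json.loads(item)
    print(product["product"]["attributes"].get("instanceType"))
```

Programmatic access like this makes it practical to re-run rate checks automatically whenever prices change, rather than auditing them by hand.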
Summary

Understanding and optimizing cloud pricing on AWS and Azure is challenging, partly because they offer hundreds of features with different pricing options, and new features enter the pipeline every week. To untangle some of this complexity, we have covered some of the popular ways to tackle pricing in AWS and Azure:

- How cloud pricing works and the different pricing schemes in AWS and Azure
- A comparison of instance pricing options in AWS and Azure, including reserved, on-demand, and discounted instances
- Third-party tools, such as calculators, for optimizing price
- The Price List APIs for AWS and Azure

If you have any thoughts to share, feel free to post them in the comments.

About the author

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint and Ixia. Gilad is a three-time winner of international technical communication awards, including the STC Trans-European Merit Award and the STC Silicon Valley Award of Excellence. Over the past 7 years, Gilad has headed Agile SEO, which performs strategic search marketing for leading technology brands. Together with his team, Gilad has done market research, developer relations, and content strategy in 39 technology markets, lending him a broad perspective on trends, approaches, and ecosystems across the tech industry.

Read more: Cloud computing trends in 2019. The 10 best cloud and infrastructure conferences happening in 2019. Bo Weaver on Cloud security, skills gap, and software development in 2019.

10 great tools to stay completely anonymous online

Guest Contributor
12 Jul 2018
8 min read
Everybody is facing a battle these days. Though it may not be immediately apparent, it is already affecting a majority of the global population. This battle is not fought with bombs, planes, or tanks, or with any physical weapons for that matter. This battle is for our online privacy. A survey last year found that 69% of data breaches were related to identity theft, and another shows that the number of identity-theft-related breaches has risen steadily over the last four years worldwide. It is likely to keep rising as hackers gain easy access to ever more advanced tools. The EU's GDPR may curb this trend by imposing stricter data protection standards on data controllers and processors. These entities have been collecting and storing our data for years through ads that track our online habits, which is another reason to protect our online anonymity. However, the regulation has only been in force for just over a month, and only within the EU, so it will take some time before we feel its long-term effects. The question is: what should we do when hackers try to steal and maliciously use our personal information? Simple: we defend ourselves with the tools at our disposal to stay completely anonymous online. Here is a list you may find useful.

1. VPNs

A VPN helps you maintain anonymity by hiding your real IP and internet activity from prying eyes. Normally, your browser sends a query tagged with your IP every time you make an online search. Your ISP takes this query and sends it to a DNS server, which then points you to the correct website. Of course, your ISP (and all the servers your query passes through) can, and likely will, view and monitor all the data coursing through them, including your personal information and IP address. This lets them keep a tab on all your internet activity. A VPN protects your identity by assigning you an anonymous IP and encrypting your data: any query you send to your ISP is encrypted and no longer displays your real IP. This is why using a VPN is one of the best ways to stay anonymous online. However, not all VPNs are created equal; you have to choose the best one if you want airtight security. Also, beware of free VPNs, most of which make money by selling your data to advertisers. You will want to compare and contrast several VPNs to find the best one for you, but that is easier said than done with so many on the market, so look for reviews on trustworthy sites to find the best VPN for your needs.

2. TOR Browser

The Onion Router (TOR) is a browser that strengthens your online anonymity even further by using multiple layers of encryption, protecting internet activity that includes "visits to Web sites, online posts, instant messages, and other communication forms". It works by first encasing your data in three layers of encryption. Your data is then bounced three times, each bounce taking off one layer of encryption. Once your data gets to the right server, it "puts back on" each layer it shed as it successively bounces back to your device. You can improve TOR further by combining it with a compatible VPN. It is important to note, though, that using TOR won't hide the fact that you are using it, and some sites may restrict access made through TOR.

3. Virtual machine

A virtual machine is basically a second computer within your computer: it lets you emulate another device through an application, and the emulated computer can then be set up according to your preferences.
The best use for this tool, however, is for tasks that don't involve an internet connection. It is best used when you want to open a file and make sure no one is watching over your shoulder; after opening the file, you then simply delete the virtual machine. You can try VirtualBox, which is available on Windows, Linux, and Mac.

4. Proxy servers

A proxy server is an intermediary between your device and the internet; it is basically another computer that processes your internet requests. It is similar to a virtual machine in concept, but it is an entirely separate physical machine. It protects your anonymity in a similar way to a VPN (by hiding your IP), but it can also send a different user agent to keep your browser unidentifiable, and it can block or accept cookies while keeping them from passing to your device. Most VPN companies also offer proxy servers, so they are a good place to look for a reliable one.

5. Fake emails

A fake email is exactly what the name suggests: an email that isn't linked to your real identity. Fake emails aid your online anonymity not only by hiding your real identity but by keeping you safe from phishing emails and malware, which can easily be sent to you via email. Making a fake email can be as easy as signing up for an email account without using your real information, or by using a fake email service.

6. Incognito mode

"Going incognito" is the easiest anonymity tool to come by. While in this mode, your device will not store any data at all, including your browsing history, cookies, site data, and information entered in forms. Most browsers have a privacy mode that you can easily use to hide your online activity from other users of the same device.

7. Ad blockers

Ads are everywhere these days; advertising has been, and always will be, a lucrative business. That said, there is a difference between good ads and bad ads. Good ads target a population as a whole. Bad ads ("interest-based advertising", as their companies like to call it) target each of us individually by tracking our online activity and location, which compromises our online privacy. Tracking algorithms aren't illegal, and have even been called clever, but the worst ads are those containing malware that can infect your device and prevent you from using it. You can use ad blockers to combat these threats to your anonymity and security. Ad blockers usually come as browser extensions that work instantly with no additional configuration. For Google Chrome, you can choose Adblock Plus, uBlock Origin, or AdBlock; for Opera, you can choose Opera Ad Blocker, Adblock Plus, or uBlock Origin.

8. Secure messaging apps

If you need to use an online messaging app, you should know that the popular ones aren't as secure as you would like them to be. True, Facebook Messenger does have a "secret conversation" feature, but Facebook hasn't exactly been the most secure social network to begin with. Instead, use tools like Signal or Telegram. These apps use end-to-end encryption and can even be used to make voice calls.

9. File shredder

The right to be forgotten has surfaced in mainstream media with the onset of the EU's General Data Protection Regulation. This right basically requires data-collecting or data-processing entities to completely remove a data subject's personally identifiable information from their records. You can practice the same principle on your own device by using a file-shredding tool. But the thing is: completely removing sensitive files from your device is hard.
Simply deleting a file and emptying your device's recycle bin doesn't actually remove it; your device just treats the space it occupied as empty and available. These "dead" files can still haunt you when they are found by someone who knows where to look. You can use software like Dr. Cleaner (for Mac) or Eraser (for Windows) to "shred" your sensitive files by overwriting them several times with random patterns of data.
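The core overwrite idea is simple enough to sketch in a few lines of Python (a simplified illustration with a hypothetical file name; real shredders must also contend with filesystem journaling, SSD wear-levelling, and backups, which this sketch ignores):

```python
import os
import secrets

def shred(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # a fresh random pattern each pass
            f.flush()
            os.fsync(f.fileno())  # push each pass to disk, not just the OS cache
    os.remove(path)

shred("secret-notes.txt")  # hypothetical file name
```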
10. DuckDuckGo

DuckDuckGo is a search engine that doesn't track your behaviour (unlike Google and Bing, which use behavioural trackers to target you with ads). It emphasizes your privacy and avoids the filter bubble of personalized search results. It offers useful features like region-specific searching, Safe Search (to protect against explicit content), and an instant-answer feature that shows an answer across the top of the screen, apart from the search results.

To sum it up: our online privacy is being attacked from all sides. Ads legally track our online activities, and hackers steal our personal information. The GDPR may help in the long run, but that remains to be seen. What's important is what we do now, and these tools will set you on the path to a more secure and private internet experience today.

About the author

Dana Jackson, a U.S. expat living in Germany, is the founder of PrivacyHub. She loves all things related to security and privacy. She holds a degree in Political Science, and loves to call herself a scientist. Dana also loves morning coffee and her dog Paw.

Read more: Top 5 cybersecurity trends you should be aware of in 2018. Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news. Top 5 cybersecurity myths debunked.

9 reasons to choose Agile Methodology for Mobile App Development

Guest Contributor
01 Oct 2018
6 min read
As mobile application development becomes the new trend in the business world, competition to enter the mobile market has intensified. Business leaders are hunting for every possible way to reach the market early and outshine the competition, without compromising on the quality of the product or the quantity of opportunities in the market. One method prevalent in the market is the adoption of the Agile methodology.

Agile mobile app development methodology

The Agile methodology is an incremental and iterative approach to mobile application development in which the complete development cycle is divided into multiple sub-modules, each treated as a mini-project. Every sub-module is assigned to an individual team and subjected to the complete development cycle, from design through development, testing, and delivery.

[Image source: AppInventiv]

Benefits of the Agile mobile app development approach

Because of this iterative nature, the methodology is highly recommended in the market. Here are nine reasons to choose Agile for your mobile app development project.

Faster development

In the Agile model, the complete project is divided into smaller modules that are treated as independent sub-projects. These sub-projects are handled by different teams independently, with little to no dependency on each other. Everyone has a clear idea of what their contribution will be, along with the associated resources and deadline, which accelerates the development process. Every developer puts their best effort into completing their part of the project; the outcome is a more streamlined development process with faster delivery.

Reduced risks

With changing market needs and trends, launching your own application is risky. It often happens that the market data you considered while developing your app is outdated by the time you launch. The outcome is poor ROI and a shaky future in the market. Agile, in this situation, is a tool that allows you to take calculated risks and improve your project's market scope. In other words, the methodology enables a mobile app development company to make changes to any particular sprint without disturbing the code of previous sprints, making the application more suitable for the market.

Better quality

Unlike traditional app development models, Agile does not postpone testing until the end of the development phase. Rather, it fosters testing of every single module at the primitive level. This reduces the risk of encountering a bug during quality testing of the complete project. It also helps mobile app developers inspect the app's elements at every stage of the development process and make adjustments as required, which ultimately helps deliver a higher quality of service.

Seamless project management

By transforming the complete app development project into multiple individual modules, the Agile methodology lets you manage your project easily. You can assign tasks to different teams and reduce inter-team dependencies and discussions. You can also keep a record of the activities performed on each mini-project and determine whether something is missing or not working as planned. Besides this, you can check the productivity of every individual and put your effort into finding or hiring more efficient experts.
Enhanced customer experience

The Agile approach puts a strong emphasis on people and collaboration, which gives the development team an opportunity to work closely with clients and understand their vision. Projects are also delivered to clients as a series of sprints, which brings transparency to the process and lets the team confirm that both parties are on the same page, making any required changes before proceeding further. This lessens the chances of launching an app that does not fulfill the idea behind it, and therefore ensures an enhanced customer experience.

Lower development cost

Since every step is well planned, executed, and delivered, you can easily calculate the cost of making the app and justify your app budget. And if at any stage of development you feel the need to raise the app budget, you can do so under the Agile methodology. In this way, you can avoid leaving the project incomplete due to a lack of required resources and funds.

Customization

The Agile approach also gives developers an opportunity to customize their development process. There are no rules requiring an app to be created in one particular way; experts can look for different ways to develop and launch the mobile app and integrate cutting-edge technologies and tools into the process. In a nutshell, the Agile process enables developers to customize the development timeline as they see fit and deliver a user-centric solution.

Higher ROI

The Agile methodology lets mobile app development companies enter the market with a most basic app (an MVP) and update it with each iteration. This makes it easier for app owners to test their idea, gather the required data and insights, build a brand presence, and deliver the best features according to customer needs and market trends. This helps app owners and their development company make the right decisions for better ROI in the market.

Earlier market reach

By dividing the complete project into sub-modules, the Agile approach encourages the team to deliver every module within its stipulated deadline, so falling far behind schedule becomes unlikely. The outcome is that the complete app project is designed and delivered on time or even earlier, which means earlier market reach.

The Agile methodology can do wonders for mobile app development. However, everyone involved must be well aware of the end goal and contribute to the approach wisely; only then can you enjoy all of the aforementioned benefits. With this, are you ready to go Agile? Are you ready to develop a mobile app with an Agile mobility strategy?

About the author

Holding a Bachelor's degree in Technology and two years of work experience in a mobile app development company, Bhupinder is focused on making technology digestible to all. Being someone who stays updated with the latest tech trends, she's always armed to write and spread the knowledge. When not found writing, you will find her answering questions on Quora while sipping coffee.
The Tao of DevOps

Joakim Verona
21 Apr 2016
4 min read
What is Tao? It's a complex idea, but one way of thinking about it is to find the natural way of doing things, and then make sure you do things that way. It is intuitive knowing, an approach that can't be grasped in principle alone but only by putting it into practice in daily life. The principles of Tao can be applied to almost anything. We've seen the Tao of Physics, the Tao of Pooh, even the Tao of Dating. Tao principles apply just as well to DevOps, because who can know fully what DevOps actually is? It is an idiom as hard to define as "quality", and good DevOps is closely tied to the good quality of a software product.

Want a simple example? A recipe for cooking a dish normally starts with a list of ingredients, because that's the most efficient way of describing cooking. When making a simple dessert, the recipe starts with a title: "Strawberries and Cream". Already we can infer a number of steps in making the dish. We must acquire strawberries and cream, and probably put them together on a plate. The recipe will continue to describe the preparation in more detail, but even if we read only the heading, we will make few mistakes.

So what does this mean for DevOps and product creation? When you are putting things together and building things, the intuitive and natural way to describe the process is to do it declaratively. Describe the "whats" rather than the "hows", and then the "hows" can be inferred.

The Tao of Building Software

Most build tools have at their core a way of declaring relationships between software components. Here's a Make snippet (to build a from b, run cc on b):

```makefile
a: b
	cc b
```

And here's an Ant snippet expressing a similar dependency:

```xml
<target name="a" depends="b">
    <exec executable="cc"><arg value="b"/></exec>
</target>
```

And a Maven snippet:

```xml
<dependency>
    <artifactId>lala</artifactId>
</dependency>
```

Many people think they wound up in a Lovecraftian hell when they see XML, even though the brackets are perfectly Euclidean. But if you squint hard enough, you will see that most tools at their core describe dependency trees. The Apache Maven tool is well known, and very explicit about the declarative approach. So let's focus on that and try to find the Tao of Maven.

When we are having a good day with Maven and we are following the way of Tao, we describe what type of software artifact we want to build, and the components we are going to use to put it together. That's all. The concrete building steps are inferred. Of course, since life is interesting and complex, we will often encounter situations where the way of Tao eludes us. Consider this example, a pom-packaged module that shells out to Ant to tar together the jars built by its sibling modules:

```xml
<packaging>pom</packaging>
<!-- plus an antcall target that tars together ../*/target/*.jar -->
```

Although abbreviated, I have observed this antipattern several times in real-world projects. What's wrong with it? After all, this antipattern occurs because the alternatives are non-obvious, or more verbose. You might think it's fine. But first of all, notice that we are not describing "whats" (at least not in a way that Maven can interpret); we are describing "hows". Fixing this will probably require a lot of work, but any larger build will eventually make finding a fix mandatory.

Pause (perhaps in your Zen garden) and consider that dependency trees are already described within the code of most programming languages. Isn't the "import" statement of Java, Python, and the like enough? In theory this is adequate, if we disregard the dynamism afforded by Java, where it is possible to construct a class name as a string and load it. In practice, there are a lot of different artifact types that might contain various resources.
Even so, it is clearly possible in theory to package all required code if the language supports it. JSR 294 ("modularity in Java") is an effort to provide such support at the language level.

In Summary

So what have we learned? The two most important lessons are simple: when building software (or indeed, any product), focus on the "whats" before the "hows". And when you're empowered with build tools such as Maven, make sure you work with the tool rather than around it.

About the Author

Joakim Verona is a consultant with a specialty in Continuous Delivery and DevOps, and the author of Practical DevOps. He has worked as the lead implementer of complex multilayered systems such as web systems, multimedia systems, and mixed software/hardware systems. His wide-ranging technical interests led him to the emerging field of DevOps in 2004, where he has stayed ever since. Joakim completed his master's in computer science at Linköping Institute of Technology. He is a certified Scrum master, Scrum product owner, and Java professional.

DevOps: An Evolution and a Revolution

Julian Ursell
24 Jul 2014
4 min read
Are we DevOps now? For the system-wide software development methodology that breaks down the problematic divide between development and operations, enterprises implementing the idea are probably still asking that question, working out whether they've reached the endgame of effective collaboration between the two spheres and a systematic automation of their IT service infrastructure. Considered the natural evolution of Agile development and practices, the idea and business implementation of DevOps is rapidly gaining traction in significant commercial enterprises, and we're likely to see a saturation of DevOps implementations in serious businesses in the near future.

The benefits of DevOps for scaling and automating the daily operations of businesses are wide-reaching and increasingly crucial, both for enabling rapid software development and for delivering products to clients who demand more and more frequent releases of up-to-date applications. The movement towards DevOps systems runs in close synchronization with the growing demand for experiencing and accessing everything in real time, as it produces the level of infrastructural agility needed to roll out release after release with minimal delays. DevOps has been adopted prominently by big hitters such as Spotify, who embraced the DevOps culture throughout the formative years of their organization and still hold this philosophy now.

The idea that DevOps is an evolution is not a new one. However, there's also an argument to be made that the actual evolution from a non-DevOps system to a DevOps one entails a revolution in thinking. From a software perspective, DevOps has inspired a minor technological revolution of its own, spawning multiple technologies geared towards enabling DevOps workflows. Docker, Chef, Puppet, Ansible, and Vagrant are all powerful key tools in this space and vastly increase the productivity of developers and engineers working with software at scale.

However, it is one thing to mobilize DevOps tools and implement them physically into a system (not easy in itself), and another thing entirely to turn the thinking of an organization around to a collaborative culture where developers and administrators live and breathe in the same DevOps atmosphere. As a way of thinking, it requires a substantial cultural overhaul: a breaking down of entrenched programming habits and of the silo-ization of the two spheres. It's not easy to transform the day-to-day mindset of a developer so that they incorporate thinking in ops (monitoring, configuration, availability), or, vice versa, of a systems engineer so that they think in terms of design and development. One can imagine it is difficult to cultivate this sort of culture and atmosphere within a large enterprise system with many moving parts, as opposed to a startup, which may have the "day zero" flexibility to employ a DevOps approach from the roots up. To reach the "state" of DevOps is a long journey, and one that involves a revolution in thinking. From a systematic as well as a cultural point of view, it takes a considerable degree of ground-breaking to shatter the (sometimes monolithic) wall between development and operations.
But for organizations that realize they need the responsiveness to adapt to clients on demand, and that have the foresight to put in place system mechanics that allow them to scale their services in the future, the long-term benefits of a DevOps revolution are invaluable. Continuous and automated deployment, shorter testing times, consistent application monitoring and performance visibility, flexibility when scaling, and a greater margin for error all stem from a successful DevOps implementation. On top of that, one survey showed that engineers working in a DevOps environment spent less time firefighting and more productive time focusing on self-improvement, infrastructure improvement, and product quality. Getting to the point where engineers can say "we're DevOps now!", however, is a bit of a misconception, because DevOps is more than a matter of sharing common tools, and there will be times when keeping the bridge between devs and ops stable and productive is challenging. There is always the potential for new engineers joining an organization to dilute the DevOps culture, and the fact remains that DevOps engineers don't grow overnight. It is an ongoing philosophy, and as much an evolution as it is a revolution worth having.