
Tech Guides - DevOps

29 Articles

Understanding the role AIOps plays in the present-day IT environment

Guest Contributor
17 Dec 2019
7 min read
In most conversations surrounding cybersecurity these days, the term "digital transformation" gets thrown into the mix, especially when the discussion revolves around AIOps. If you have even a slight interest in recent developments in the cybersecurity world, you might already have an idea of what AIOps is. If not: AIOps refers to a multi-layered, modern technology platform that allows enterprises to maximize IT operations by integrating AI and machine learning to detect and solve cybersecurity issues as they occur. As the name suggests, AIOps makes use of essential AI technology, such as machine learning, for the overall improvement of an organization's IT operations.

Today, however, the role that AIOps plays has shifted dramatically, which leaves plenty of room for confusion among cybersecurity officers, since most enterprises prefer to take the more conventional route as far as AI application is concerned. To get the most out of AIOps, enterprises need to understand the significance of the changes in the present-day IT environment and how those changes influence AI's applications.

To help readers understand the volatile relationship between AI's applications and the IT environment they operate in, we've put together an article that dives into the differences between conventional monitoring methods and present-day enterprise needs. We'll also shine a light on the importance of adopting AIOps in enterprises.

How has the IT environment changed in modern times?

Before we get into every nook and cranny of why the transition from a traditional approach to a more modern one matters, we'd like to make one thing very clear: just because a specific approach works for one organization in no way guarantees that it will work for you.
Perhaps the best advice any business owner could receive is to plan according to the specific requirements of their security and IT infrastructure. The greatest shortcoming of many CISOs and CSOs is that they fail to understand the particular needs of their IT environment and rely on conventional applications of AI to maximize the overall IT experience.

In traditional AIOps applications, the number of 'moving' parts or components involved was significantly smaller, so the involvement of AI was far less complex and therefore much easier to monitor and control. In a more modern setting, however, with the wave of digitalization and the ever-growing reliance of enterprises on cloud computing systems, the number of components involved has increased, which makes understanding the whole web much more difficult.

Bearing witness to the ever-evolving and complex nature of today's IT environment are the results of research conducted by Dynatrace, which explicitly state that something as simple as a single web or mobile application transaction can involve a staggering 37 different components or technologies on average. Taking this into account, a traditional approach to AI becomes redundant and ineffective, since it relies on an extremely limited understanding and fails to make sense of all the information provided by an arsenal of tools and dashboards. Not only is the conventional approach to AIOps impractical in the modern IT context, it is also extremely outdated. Perhaps the only approach that fits the modern-day IT environment is a software-intelligence-centric one, which allows for fast-paced and robust solutions to present-day IT complexities.

How important is AIOps for enterprises today?

As mentioned above, the present-day IT infrastructure requires a drastic change in the relationship that enterprises have had with AIOps so far.
For starters, enterprises and organizations need to recognize the importance of the role that AIOps plays. Unfortunately, there is an overarching tendency in enterprises to naively dismiss investment in AIOps as yet another "IT expense." On the contrary, AIOps is essential for companies and organizations today, since every company is undergoing digitalization and relying more and more on modern technology. Some cybersecurity specialists might even argue that each company is slowly turning into a software company, primarily because of the rise of cloud computing systems.

AIOps also improves the 'business' aspect of an enterprise, since the modern consumer looks for enterprises that offer innovative features and enhance user experience through an impeccable, seamless digital experience. Furthermore, in today's competitive economic conditions, carrying out business operations in a timely manner is critical to an enterprise's longevity, which is where the integration of AI can help an organization function smoothly.

The employment of AIOps also opens up new avenues for businesses to step into, since it removes the element of fear present in many business owners. Implementing AIOps enables an organization to make quick-paced releases by taking IT problems out of the equation; these problems usually consist of bugs, regulation, and compliance, along with monitoring the overall IT experience being provided to consumers.

How can enterprises ensure the longevity of their reliance on AIOps?

When it comes to integrating any new technology into an organization's routine functions, there are always questions to be asked about the impact of continued reliance on that technology. To demonstrate the point, let's return to a technology we've referred to throughout this article: cloud computing.
With conceptual roots reaching back to the 1960s, cloud computing revolutionized data storage into what it is today. However, after some years and some unfortunate cyberattacks launched on cloud storage networks, cybersecurity specialists have found dire problems with complete dependency on cloud storage. Similarly, many cybersecurity specialists and researchers wonder about the negative impacts that a dependency on AIOps could have in the future. When it comes to reassuring enterprises about the longevity of amalgamating AIOps into their operations, we'd offer the following reasons:

  • Unlike cloud computing, developments in AIOps are heavily rooted in real-time data fed to the algorithm by an IT team. When you strip away all the fancy IT jargon, the only identity you need to trust is that of your IT personnel.
  • Since AIOps relies on smart auto-remediation capabilities, business owners can see an immediate response driven by the employed algorithms. One way AIOps deploys auto-remediation strategies is by sending out alerts of any possible issue, a practice that lets businesses focus on the "business" side of the spectrum, since they have a trustworthy agent to rely on.

Conclusion

At the end of this article, we can only reiterate what has been said before in a thousand different ways: it's high time that enterprises welcome change in the form of AIOps instead of resisting it. In the modern age of digitalization, the key differences in the modern-day IT landscape should be reason enough for enterprises to be on the lookout for new alternatives for securing their data and, by extension, their companies.

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist, a creative team leader, and the editor of PrivacyCrypts.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Post-production activities for ensuring and enhancing IT reliability [Tutorial]


Why is Pentaho 8.3 great for DataOps?

Guest Contributor
07 Oct 2019
6 min read
Announced in July, Pentaho 8.3 is the latest version of the data integration and analytics platform from Hitachi Vantara. Along with new and improved features, this version supports DataOps, a collaborative data management practice that helps customers access the full potential of their data.

"DataOps is about having the right data, in the right place, at the right time, and the new features in Pentaho 8.3 ensure just that," said John Magee, vice president, Portfolio Marketing, Hitachi Vantara. "Not only do we want to ensure that data is stored at the lowest cost at the right service level, but that data is searchable, accessible and properly governed so actionable insights can be generated and the full economic value of the data is captured."

How Pentaho prevents the loss of data

According to Stewart Bond, research director, Data Integration and Integrity Software, and Chandana Gopal, research director, Business Analytics Solutions at IDC, "A vast majority of data that is generated today is lost. In fact, only about 2.5% of all data is actually analyzed. The biggest challenge to unlocking the potential that is hidden within data is that it is complicated, siloed and distributed. To be effective, decision makers need to have access to the right data at the right time and with context."

The struggle is how to manage all the incoming data in a way that exposes everyone to what's coming down the pipeline. When data is siloed, there's no guarantee the right people are seeing it and analyzing it. Pentaho provides a single platform to help businesses keep up with data growth in a way that enables real-time data ingestion. With the available data services, you can:

  • Make data sets immediately available for reports and applications.
  • Reduce the time needed to create data models.
  • Improve collaboration between business and IT teams.
  • Analyze results with embedded machine learning and deep learning models without knowing how to code them into data pipelines.
  • Prepare and blend traditional data with big data.

Making all the data more accessible across the board is a key feature of Pentaho that this latest release continues to strengthen.

What's new in Pentaho 8.3?

The latest version of Pentaho includes new features to support DataOps. DataOps shortens the overall cycle time of big data analytics, from the initial origin of an idea to the creation of the final visualization. Pentaho 8.3 is designed to promote easy management of, and collaboration around, data; the data analytics process becomes much more agile, data teams are able to work in sync, and efficiency and effectiveness increase.

Businesses are looking for ways to transform digitally and to get more value from their massive pools of information. With data nearly everywhere, and more distributed than ever before, businesses want to extract key insights from it quickly and easily. This is exactly where Pentaho 8.3 comes into the picture: it accelerates business innovation and agility. Plenty of new, time-saving enhancements make Pentaho a better and more advanced solution for corporate users, helping companies automate their data management techniques.

Key enhancements in Pentaho 8.3

Each enhancement included with Pentaho 8.3 helps organizations modernize their data management practices in ways that remove friction between data and insight, including:

Improved drag-and-drop pipeline capabilities

These help access and blend data that are hard to reach, providing deeper insight and greater analytic value from enterprise integration. Amazon Web Services (AWS) developers can also now ingest and process streaming data through a visual environment rather than having to write code that must blend with other data.
Enhanced data visibility

Improved integration with Hitachi Content Platform (HCP), a distributed object storage system designed to support large repositories of content, makes it easier for users to read, write, and update HCP customer metadata. They can also more easily query objects using system metadata, making data more searchable, governable, and applicable for analytics. It's also now easier to trace real-time data from popular protocols like AMQP, JMS, Kafka, and MQTT. Users can also view lineage data from Pentaho within IBM's Information Governance Catalog (IGC) to reduce the effort required to govern data.

Expanded multi-cloud support

AWS Redshift bulk load capabilities now automate the process of loading Redshift. This removes the repetitive SQL scripting needed to complete bulk loads and allows users to boost productivity and apply policies and schedules for data onboarding. Also included in this category are updates that address Snowflake connectivity. As one of the leading destinations for cloud warehousing, Snowflake's primary hiccup arises when an analytics project wants to include data from other sources. Pentaho 8.3 allows blending, enrichment, and analysis of Snowflake data in conjunction with other sources, including other cloud sources such as the existing Pentaho-supported cloud platforms AWS and Google Cloud.

Pentaho and DataOps

Each of the new capabilities and enhancements in this release of Pentaho is important for current users, but the larger benefit to businesses is its association with DataOps. Emerging as a collaborative data management discipline focused on better communication, integration, and automation of how data flows across an organization, DataOps is a practice embraced more and more often, yet not without its own setbacks. Pentaho 8.3 helps businesses make DataOps a reality without facing the common challenges often associated with data management.
According to John Magee, Vice President, Portfolio Marketing at Hitachi, "The new Pentaho 8.3 release provides key capabilities for customers looking to begin their DataOps journey."

Beyond feature enhancements

Looking past the improvements and new features of the latest Pentaho release, it's a good product because of the support it offers its community of users. From forums to webinars to 24/7 support, it not only caters to huge volumes of data on a practical level, but it also doesn't ignore the actual people using the product.

Author Bio

James Warner is a Business Intelligence Analyst with excellent knowledge of Hadoop/Big Data analysis at NexSoftSys.com.

New MapR Platform 6.0 powers DataOps
DevOps might be the key to your Big Data project success
Bridging the gap between data science and DevOps with DataOps


7 crucial DevOps metrics that you need to track

Guest Contributor
20 Aug 2019
9 min read
DevOps has taken the IT world by storm and is increasingly becoming the de facto industry standard for software development. DevOps principles have the potential to create a competitive differentiation, allowing teams to deliver high-quality software at a faster rate while adequately meeting customer requirements. DevOps prevents the development and operations teams from functioning in two distinct silos and ensures seamless collaboration between all the stakeholders.

Collecting feedback and incorporating it plays a critical role in implementing DevOps and formulating a CI/CD pipeline. A successful transition to DevOps is a journey, not a destination. Setting up benchmarks, measuring yourself against them, and tracking your progress is important for determining which stage of DevOps architecture you are in and for ensuring a smooth journey onward. Feedback loops are a critical enabler for delivery of the application, and metrics help transform qualitative feedback into quantitative form. Collecting feedback from the stakeholders is only half the work; gathering insights and communicating them through the DevOps team to keep the CI/CD pipeline on track is equally important. This is where metrics come in. DevOps metrics are the tools your team needs to ensure that feedback is collected and communicated to the right people, improving the existing processes and functions in a unit.

Here are 7 DevOps metrics that your team needs to track for a successful DevOps transformation:

1. Deployment frequency

Quick iteration and continuous delivery are key measurements of DevOps success. This metric captures how long the software takes to deploy and how often deployment takes place. Keeping track of the frequency with which new code is deployed helps you keep track of the development process. The ultimate goal of deployment is to be able to release smaller deployments of code as quickly as possible.
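Deployment frequency is straightforward to derive from a log of deployment timestamps. As a minimal sketch (the timestamp format and the in-memory log below are assumptions for illustration; real data would come from your CI/CD system):

```python
from collections import Counter
from datetime import datetime

def deployment_frequency(timestamps):
    """Count deployments per calendar day from ISO-8601 timestamp strings."""
    days = [datetime.fromisoformat(ts).date().isoformat() for ts in timestamps]
    return Counter(days)

deploys = [
    "2019-08-01T09:15:00",
    "2019-08-01T16:40:00",
    "2019-08-02T11:05:00",
]
freq = deployment_frequency(deploys)
# freq["2019-08-01"] is 2 and freq["2019-08-02"] is 1
```

Tracked per environment (dev, staging, production), the same counting gives the per-environment view discussed below.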
Smaller deployments are easier to test and release. They also improve the discoverability of bugs in the code, allowing for faster and more timely resolution. The frequency of deployments needs to be determined separately for development, testing, staging, and production environments; keeping track of the frequency of deployment to QA or pre-production environments is also an important consideration. A high deployment frequency is a tell-tale sign that things are going smoothly in the production cycle: since smaller deployments are easier to test and release, higher deployment frequency directly corresponds with higher efficiency. No wonder tech giants such as Amazon and Netflix deploy code thousands of times a day. Amazon has built a deployment engine called Apollo that has performed more than 50 million deployments in 12 months, which is more than one deployment per second. This results in reduced outages and decreased downtime.

2. Failed deployments

Any deployment that causes issues or outages for your users is a failed deployment, and tracking the percentage of deployments that result in negative feedback from the user's end is an important DevOps metric. DevOps teams are expected to build quality into the product right from the beginning of the project, and the responsibility for ensuring software quality is disseminated through the entire team rather than centered on QA alone. While in an ideal scenario there should be no failed deployments, that's often not the case. Tracking the percentage of deployments that result in negative sentiment helps you ascertain the ground-level realities and makes you better prepared for such occurrences in the future. Only if you know what is wrong can you formulate a plan to fix it. While a failure rate of 0 is the magic number, less than 5% failed deployments is considered workable.
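Checking your deployment history against that 5% threshold is a one-liner once each deployment carries a success flag. A minimal sketch, assuming a hypothetical record format of (deployment_id, succeeded) pairs:

```python
def failed_deployment_rate(deployments):
    """Percentage of deployments flagged as failed.

    Each record is a (deployment_id, succeeded) pair -- an assumed
    format; real data would come from your CI/CD system's history.
    """
    if not deployments:
        return 0.0
    failures = sum(1 for _, succeeded in deployments if not succeeded)
    return 100.0 * failures / len(deployments)

history = [("d1", True), ("d2", True), ("d3", False), ("d4", True)]
rate = failed_deployment_rate(history)  # 25.0 -- well above the 5% comfort zone
```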
If the metric consistently shows spikes of failed deployments over 10%, the existing process needs to be broken down into smaller segments with mini-deployments. Fixing 5 issues in 100 deployments is far easier than fixing 50 in 1000 within the same time frame.

3. Code committed

Code committed is a DevOps metric that tracks the number of commits the team makes to the software before it can be deployed into production. It serves as an indicator of development velocity as well as code quality. The number of code commits a team makes should fall within the optimum range defined by the DevOps team. Too many commits may indicate low quality or a lack of direction in development; similarly, too few commits may indicate that the team is overtaxed and unproductive. Uncovering the reason behind variations in code committed is important for maintaining productivity and project velocity while also ensuring optimal satisfaction among team members.

4. Lead time

The software development cycle is a continuous process in which new code is constantly developed and deployed to production. Lead time for changes is the time taken to go from code committed to code successfully running in production. It is an important indicator for determining the efficiency of the existing process and for identifying possible areas of improvement. The lead time and mean time to change (MTTC) give the DevOps team a better hold on the project: by measuring the time that passes between a change's inception and its actual production deployment, you can gauge the team's ability to adapt as project requirements evolve.

5. Error rate

Errors in any software application are inevitable. A few occasional errors aren't a red flag, but keeping track of error rates and staying on the lookout for unusual spikes is important for the health of your application.
A significant rise in error rate is an indicator of inherent quality problems and ongoing performance-related issues. The errors you encounter can be of two types: bugs, which are exceptions in the code discovered after deployment, and production issues, which relate to things like database connections and query timeouts. The error rate is calculated as the fraction of transactions that result in an error during a particular time window. For a given duration, if 20 out of 1000 transactions have errors, the error rate is 20/1000, or 2 percent. A few intermittent errors throughout the application life cycle are a normal occurrence, but any unusual spikes need to be watched for; the process should be analysed for bugs and production issues, and the exceptions that occur need to be handled concurrently.

6. Mean time to detection

Issues happen in every project; how fast you discover them is what matters. Robust application monitoring and optimal coverage help you find any issues as quickly as possible. Mean time to detection (MTTD) is the amount of time that passes between the beginning of an issue and the moment it is detected and some remedial action is taken. The time to fix the issue is not covered under MTTD. Ideally, DevOps teams should strive to keep the MTTD as low as possible (ideally close to zero), i.e. they should be able to detect any issue as soon as it occurs. Proper protocols and communication channels need to be in place to help the team discover errors quickly and respond with corrections rapidly.

7. Mean time to recovery

Time to restore service, or mean time to recovery (MTTR), is a critical metric for any project. It is the average time taken by the team to repair a failure in the system.
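Both MTTD and MTTR reduce to averaging the gap between two timestamps across incidents. A minimal sketch, using a hypothetical incident-record format (when the failure began, when monitoring detected it, and when normal service was restored):

```python
from datetime import datetime

# Hypothetical incident records; real ones would come from your
# monitoring or incident-management system.
incidents = [
    {"began": datetime(2019, 8, 1, 10, 0),
     "detected": datetime(2019, 8, 1, 10, 5),
     "restored": datetime(2019, 8, 1, 10, 35)},
    {"began": datetime(2019, 8, 2, 14, 0),
     "detected": datetime(2019, 8, 2, 14, 15),
     "restored": datetime(2019, 8, 2, 15, 0)},
]

def mean_minutes(records, start_key, end_key):
    """Average gap in minutes between two timestamps across records."""
    gaps = [(r[end_key] - r[start_key]).total_seconds() / 60 for r in records]
    return sum(gaps) / len(gaps)

mttd = mean_minutes(incidents, "began", "detected")     # 10.0 minutes
mttr = mean_minutes(incidents, "detected", "restored")  # 37.5 minutes
```

Note that MTTD averages the began-to-detected gap while MTTR averages detected-to-restored, matching the definitions above.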
It comprises the time from failure detection until the project starts operating normally again. Recovery and resilience are key components that determine the market readiness of a project. MTTR is an important DevOps metric because it allows for the tracking of complex issues and failures while judging the team's capability to handle change and bounce back. The recovery time for a fix should be as low as possible, minimizing the overall system downtime.

System downtime and outages, though undesirable, are unavoidable. This rings especially true in the current development scenario, where companies are making the move to the cloud. Designing for failure is a concept that needs to be ingrained right from the start. Even major applications like Facebook and WhatsApp, Twitter, Cloudflare, and Slack are not free of outages. What matters is that the downtime is kept minimal. Mean time to recovery thus becomes critical for estimating the time the DevOps team would need to bring the system back on track.

Closing words

DevOps isn't just about tracking metrics; it is primarily about the culture. Organizations that make the transition to DevOps place immense emphasis on one goal: rapid delivery of stable, high-quality software through automation and continuous delivery. Simply having a bunch of numbers in the form of DevOps metrics isn't going to get you across the line. You need a long-term vision combined with the valuable insights that the metrics provide. Only by monitoring these over a period of time and tracking your team's progress toward the goals you have set can you hope to reap the true benefits that DevOps offers.

Author Bio

Vinati Kamani writes about emerging technologies and their applications across various industries for Arkenea, a custom software development and DevOps consulting company.
When she's not at her desk penning articles or reading up on recent trends, she can be found traveling to remote places and soaking up different cultural experiences.

DevOps engineering and full-stack development – 2 sides of the same agile coin
Introducing kdevops, modern devops framework for Linux kernel development
Why do IT teams need to transition from DevOps to DevSecOps?


Why do IT teams need to transition from DevOps to DevSecOps?

Guest Contributor
13 Jul 2019
8 min read
Does your team perform security testing during development? If not, why not? Cybercrime is on the rise, and formjacking, ransomware, and IoT attacks have increased alarmingly in the last year. This makes security a priority at every stage of development. In such an ominous environment, development teams around the globe should take a more proactive approach to threat detection. This can be done in a number of ways. There are some basic techniques that development teams can use to protect their development environments, but ultimately what is needed is an integration of threat identification and management into the development process itself. Integrated processes like this are referred to as DevSecOps, and in this guide we'll take you through some of the advantages of transitioning to DevSecOps.

Protect Your Development Environment

First, though, let's look at some basic measures that can help to protect your development environment. For both individuals and enterprises, online privacy is perhaps the most valuable currency of all. Proxy servers, Tor, and virtual private networks (VPNs) have slowly crept into the lexicon of internet users as cost-effective privacy tools to consider if you want to avoid drawing the attention of hackers. But what about enterprises? Should they use the same tools? They would prefer to avoid hackers as well. The answer is more complicated.

Encryption and authentication should be addressed early in the development process, especially given the common practice of using open source libraries for app coding. The advanced security protocols that power many popular consumer VPN services make them a good first step toward protecting code and any proprietary technology. Additional controls like using two-factor authentication and limiting who has access will further protect the development environment and procedures.
Beyond these basic measures, though, it is also worth looking in detail at your entire development process and integrating security management at every stage. This is sometimes referred to as integrating DevOps and DevSecOps.

DevOps vs. DevSecOps: What's the Difference?

DevOps and DevSecOps are not separate entities but different facets of the development process. Traditionally, DevOps teams work to integrate software development and implementation in order to facilitate the rapid delivery of new business applications. Since this process omits security testing and solutions, many security flaws and vulnerabilities aren't addressed early enough in the development process. The newer approach, DevSecOps, addresses this omission by automating security-related tasks and integrating controls and functions like composition analysis and configuration management into the development process.

Previously, DevSec focused only on automating security code testing, but it is gradually transitioning to incorporate an operations-centric approach. This helps in reconciling two environments that are opposite by nature: DevOps is forward-looking because it's geared toward rapid deployment, while development security looks backward to analyze and predict future issues. By prioritizing security analysis and automation, teams can still improve delivery speed without the need to retroactively find and deal with threats.

Best Practices: How DevSecOps Should Work

The goal of current DevSecOps best practice is a shift toward real-time threat detection rather than historical analysis. This enables more efficient application development that recognizes and deals with issues as they happen rather than waiting until there's a problem. It starts with developing an effective strategy while adopting DevSecOps practices.
When all areas of concern are addressed, the result is:

  • Automatic code procurement: eliminates the problem of human error and the incorporation of weak or flawed code. This benefits developers by allowing vulnerabilities and flaws to be discovered and corrected earlier in the process.
  • Uninterrupted security deployment: achieved through automation tools that work in real time, creating closed-loop testing and reporting alongside real-time threat resolution.
  • Leveraged security resources: automated DevSecOps typically addresses areas related to threat assessment, event monitoring, and code security, freeing your IT or security team to focus on other areas, like threat remediation and elimination.

There are five areas that need to be addressed for DevSecOps to be effective:

Code analysis

By delivering code in smaller modules, teams are able to identify and address vulnerabilities faster.

Management changes

Adapting the protocol for changes in management or admins allows users to improve on changes faster, while enabling security teams to analyze their impact in real time. This eliminates the problem of getting calls about problems with system access after the application is deployed.

Compliance

Addressing compliance with the Payment Card Industry Data Security Standard (PCI DSS) and the new General Data Protection Regulation (GDPR) earlier helps prevent audits and heavy fines. It also ensures that you have all of your reporting ready to go in the event of a compliance audit.

Automating threat and vulnerability detection

Threats evolve and proliferate fast, so security should be agile enough to deal with emerging threats each time code is updated or altered. Automating threat detection earlier in the development process improves response times considerably.
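As a sketch of what automating detection can look like in a pipeline, a build step can fail whenever a pinned dependency matches a known-vulnerable release. Everything below (package names, versions, advisory IDs) is hypothetical, and a real pipeline would pull its advisory list from a vulnerability database rather than hard-coding it:

```python
# Illustrative advisory list: (package, version) pairs mapped to an
# advisory identifier. These entries are made up for the sketch.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "ADVISORY-0001",
    ("samplepkg", "0.9.1"): "ADVISORY-0002",
}

def audit(pinned):
    """Return the advisories matching any pinned (name, version)."""
    return {pin: KNOWN_VULNERABLE[pin] for pin in pinned if pin in KNOWN_VULNERABLE}

pins = [("examplelib", "1.2.0"), ("otherlib", "2.0.0")]
findings = audit(pins)
# findings flags examplelib 1.2.0 with ADVISORY-0001; the build step
# would fail (non-zero exit) whenever findings is non-empty.
```

Run on every commit, a check like this surfaces vulnerable dependencies long before deployment, which is the point of shifting detection left.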
Training programs

Comprehensive security response begins with proper IT security training. Developers should craft a training protocol that ensures all personnel responsible for security are up to date and on the same page. Organizations should bring security and IT staff into the process sooner: that means advising current team members of current procedures and ensuring that all new staff are thoroughly trained.

Finding the Right Tools for DevSecOps Success

Does a doctor operate with a chainsaw? Hopefully not. Likewise, all of the above points are nearly impossible to achieve without the right tools to get the job done with precision. What should your DevSec team keep in their toolbox?

Automation tools

Automation tools provide scripted remediation recommendations for detected security threats. One such tool is Automate DAST, which scans new or modified code against security vulnerabilities listed on the Open Web Application Security Project's (OWASP) list of the most common flaws, such as SQL injection errors. These are flaws you might have missed during static analysis of your application code.

Attack modeling tools

Attack modeling tools create models of possible attack matrices and map their implications. There are plenty of attack modeling tools available, but a good one for identifying cloud vulnerabilities is Infection Monkey, which simulates attacks against the parts of your infrastructure that run on major public cloud hosts like Google Cloud, AWS, and Azure, as well as most cloud storage providers like Dropbox and pCloud.

Visualization tools

Visualization tools are used for evolving, identifying, and sharing findings with the operations team. An example of this type of tool is PortVis, developed by a team led by professor Kwan-Liu Ma at the University of California, Davis.
PortVis is designed to display activity by host or port in three different modes: a grid visualization, in which all network activity is displayed on a single grid; a volume visualization, which extends the grid to a three-dimensional volume; and a port visualization, which allows devs to visualize the activity on specific ports over time. Using this tool, different types of attack can be easily distinguished from each other.

Alerting tools
Alerting tools prioritize threats and send alerts so that the most hazardous vulnerabilities can be addressed immediately. WhiteSource Bolt, for instance, is a useful tool of this type, designed to improve the security of open source components. It does this by checking these components against known security threats and providing security alerts to devs. These alerts also auto-generate issues within GitHub, where devs can see details such as references for the CVE, its CVSS rating, and a suggested fix; there is even an option to assign the vulnerability to another team member using the milestones feature.

The Bottom Line

Combining DevOps and DevSec is not a meshing of two separate disciplines, but rather the natural transition of development to a more comprehensive approach that takes security into account earlier in the process, and does it in a more meaningful way. This saves a lot of time and hassle by addressing enterprise security requirements before deployment rather than probing for flaws later. The sooner your team hops on board with DevSecOps, the better.

Author Bio
Gary Stevens is a front-end developer. He’s a full-time blockchain geek and a volunteer working for the Ethereum foundation, as well as an active GitHub contributor.

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Does it make sense to talk about DevOps engineers or DevOps tools?
How Visual Studio Code can help bridge the gap between full-stack development and DevOps
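The automated code analysis the article describes can be sketched in a few lines of Python. The patterns, function name, and sample snippet below are purely illustrative - real scanners such as Automate DAST go far beyond simple pattern matching:

```python
import re

# Illustrative patterns only -- a real static or dynamic scanner is far
# more sophisticated than a pair of regular expressions.
VULN_PATTERNS = {
    "possible SQL injection": re.compile(r"execute\(\s*['\"].*%s.*['\"]\s*%"),
    "hard-coded secret": re.compile(
        r"(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I
    ),
}

def scan_source(source: str) -> list[str]:
    """Return one finding per pattern matched on each line of the source."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for label, pattern in VULN_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {line_no}: {label}")
    return findings

snippet = 'cursor.execute("SELECT * FROM users WHERE name = \'%s\'" % name)\napi_key = "abc123"'
print(scan_source(snippet))
```

Running a check like this on every commit, rather than after deployment, is the "shift left" that DevSecOps is built around.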

6 new eBooks for programmers to watch out for in March

Richard Gall
20 Feb 2019
6 min read
The biggest challenge for anyone working in tech is that you need multiple sets of eyes. Yes, you need to commit to regular, almost continuous learning, but you also need to look forward to what’s coming next. From slowly emerging trends that might not even come to fruition (we’re looking at you, DataOps) to version updates and product releases, for tech professionals the horizon always looms and shapes the present.

But it’s not just about the big trends or releases that get coverage - it’s also about planning your next (career) move, or even your next mini-project. That could be learning a new language (not necessarily new, but one you haven’t yet got round to learning), trying a new paradigm, exploring a new library, or getting to grips with cloud native approaches to software development. This sort of learning is easy to overlook, but it’s vital to any developer’s development.

While the Packt library has a wealth of content for you to dig your proverbial claws into, if you’re looking forward, Packt has some new titles available in pre-order that could help you plan your learning for the months to come. We’ve put together a list of some of our own top picks of our pre-order titles available this month, due to be released late February or March. Take a look and take some time to consider your next learning journey...

Hands-on Deep Learning with PyTorch

TensorFlow might have set the pace when it comes to artificial intelligence, but PyTorch is giving it a run for its money. It’s impossible to describe one as ‘better’ than the other - ultimately they both have valid use cases, and can both help you do some pretty impressive things with data.

Read next: Can a production ready Pytorch 1.0 give TensorFlow a tough time?

The key difference is really in the level of abstraction and the learning curve - TensorFlow is more like a library, which gives you more control, but also makes things a little more difficult.
PyTorch, then, is a great place to start if you already know some Python and want to try your hand at deep learning. Or, if you have already worked with TensorFlow and simply want to explore new options, PyTorch is the obvious next step.

Order Hands-on Deep Learning with PyTorch here.

Hands-on DevOps for Architects

Distributed systems have made the software architect role incredibly valuable. This person is not only responsible for deciding what should be developed and deployed, but also the means through which it should be done and maintained. But distribution has also made the question of architecture relevant to just about everyone who builds and manages software. That’s why Hands-on DevOps for Architects is such an important book for 2019. It isn’t just for those who typically describe themselves as software architects - it’s for anyone interested in infrastructure, in how things are put together, and in how they can be made more reliable, scalable, and secure. With site reliability engineering finding increasing usage outside of Silicon Valley, this book could be an important piece in the next step of your career.

Order Hands-on DevOps for Architects here.

Hands-on Full Stack Development with Go

Go has been cursed with a hell of a lot of hype. This is a shame - it means it’s easy to dismiss as a fad or fashion that will quickly disappear. In truth, Go’s popularity is only going to grow as more people experience its speed and flexibility. Indeed, in today’s full-stack, cloud native world, Go is only going to go from strength to strength. In Hands-on Full Stack Development with Go you’ll not only get to grips with the fundamentals of Go, you’ll also learn how to build a complete full-stack application built on microservices, using tools such as Gin and ReactJS.

Order Hands-on Full Stack Development with Go here.

C++ Fundamentals

C++ is a language that often gets a bad rap.
You don’t have to search the internet that deeply to find someone telling you that there’s no point learning C++ right now. And while it’s true that C++ might not be as eye-catching as languages like, say, Go or Rust, it nevertheless still plays a very important role in the software engineering landscape. If you want to build performance-intensive apps for the desktop, C++ is likely going to be your go-to language.

Read next: Will Rust replace C++?

One of the sticks often used to beat C++ is that it’s a fairly complex language to learn. But rather than being a reason not to learn it, if anything the challenge it presents to even relatively experienced developers is one well worth taking on. At a time when many aspects of software development seem to be getting easier, as new layers of abstraction remove problems we previously might have had to contend with, C++ bucks that trend, forcing you to take a very different approach. And although this approach might not be one many developers want to face, if you want to strengthen your skill set, C++ could certainly be a valuable language to learn.

The stats don’t lie - C++ is placed 4th on the TIOBE index (as of February 2019), beating JavaScript, and commands a considerably high salary; indeed.com data from 2018 suggests that C++ was the second highest earning programming language in the U.S., after Python, with a salary of $115K. If you want to give C++ a serious go, then C++ Fundamentals could be a great place to begin.

Order C++ Fundamentals here.

Data Wrangling with Python & Data Visualization with Python

Finally, we’re grouping two books together - Data Wrangling with Python and Data Visualization with Python. This is because they both help you to really dig deep into Python’s power, and better understand how it has grown to become the definitive language of data.
Of course, R might have something to say about this - but it’s a fact that over the last 12-18 months Python has grown in popularity in a way that R has been unable to match. So, if you’re new to any aspect of the data science and analysis pipeline, or you’ve used R and you’re now looking for a faster, more flexible alternative, both titles could offer you the insight and guidance you need.

Order Data Wrangling with Python here.
Order Data Visualization with Python here.

AIOps - Trick or Treat?

Bhagyashree R
31 Oct 2018
2 min read
AIOps, as the term suggests, is Artificial Intelligence for IT Operations, and was first introduced by Gartner last year. AIOps systems are used to enhance and automate a broad range of processes and tasks in IT operations with the help of big data analytics, machine learning, and other AI technologies.

Read also: What is AIOps and why is it going to be important?

In its report, Gartner estimated that, by 2020, approximately 50% of enterprises will be actively using AIOps platforms to provide insight into both business execution and IT operations. AIOps has seen fairly fast growth since its introduction, with many big companies showing interest in AIOps systems. For instance, last month Atlassian acquired Opsgenie, an incident management platform that, along with planning and solving IT issues, helps you gain insight to improve your operational efficiency. The reasons companies are adopting AIOps are that it eliminates tedious routine tasks, minimizes costly downtime, and helps you gain insights from data that’s trapped in silos.

Where can AIOps go wrong?

AIOps alerts us about incidents beforehand, but in some situations it can also go wrong. In cases where an event is unusual, the system will be less likely to predict it. Events that haven’t occurred before will be entirely outside machine learning’s ability to predict or analyze. Additionally, it can sometimes give false negatives and false positives. False negatives can happen in cases where the tests are not sensitive enough to detect possible issues. False positives can be the result of incorrect configuration. This essentially means that there will always be a need for human operators to review these alerts and warnings.

Is AIOps a trick or a treat?

AIOps is bringing new opportunities for the IT workforce, such as the AIOps Data Scientist, who will focus on solutions to correlate, consolidate, alert, analyze, and provide awareness of events.
Dell defines its Data Scientist role as someone who will “contribute to delivering transformative AIOps solutions on their SaaS platform”. With AIOps, the IT workforce won’t just disappear; it will evolve. AIOps is definitely a treat, because it reduces manual work and provides an intuitive way of responding to incidents.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Tech hype cycles: do they deserve your attention?
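The predictive alerting described above - and the false positives the article warns about - can be illustrated with a toy detector: flag a reading that strays too far from the mean of the readings before it. The metric name, window size, and threshold below are invented for illustration; production AIOps platforms use far richer models:

```python
from statistics import mean, stdev

def find_anomalies(readings, window=5, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# CPU load hovering around 40%, with one spike at index 7
cpu_load = [40, 41, 39, 40, 42, 41, 40, 95, 41, 40]
print(find_anomalies(cpu_load))  # -> [7]
```

Lowering `threshold` catches subtler incidents but raises the false-positive rate - exactly the trade-off that keeps human operators in the loop.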
Is your Enterprise Measuring the Right DevOps Metrics?

Guest Contributor
17 Sep 2018
6 min read
As of 2018, 17% of companies worldwide have fully adopted DevOps, while 14% are still in the consideration stage. Amazon, Netflix, and Target are a few of the companies that have attained success with DevOps. Amazon’s move to Amazon Web Services gave it the ability to scale capacity up or down as needed for its servers, allowing its engineers to deploy their own code to the server whenever they wanted to. This resulted in continuous deployment, reducing the duration as well as the number of outages experienced by companies using AWS. Netflix used DevOps to improve its cloud infrastructure and to ensure smooth streaming of videos online.

When you say “we have adopted DevOps in our enterprise”, what do you really mean? It means you have adopted a software philosophy that integrates software development and operations, thus reducing the time to market your end product. The questions which come next are: How do you measure the true success of DevOps in your organization? Have you been working on the right metrics all along?

Let’s first talk about measuring DevOps in organizations. It is all about uptime, transactions per second, bugs fixed, commits, and other operational as well as productivity metrics. This is what most organizations tend to look at as metrics when they talk about DevOps. But are these the right DevOps metrics?

For a while, companies have been working on the set of metrics discussed above to determine the success of DevOps. However, these are not the right metrics, and should not be relied on alone. A metric is an indicator of the performance of DevOps, and no single indicator will determine success. Your metrics might differ based on the data you collect. You will end up collecting large volumes of data; however, not every piece of data available can be converted into a metric. Here’s how you can determine the metrics for your DevOps.

Avoid using too many metrics

You should use, at most, 10 metrics.
We suggest using fewer than 10, in fact. The fewer the metrics used, the better your judgment will be. You should broaden your perspective when choosing metrics. It is important to choose metrics that account for overall organizational health, and don’t just take into consideration operational and development data.

Metrics that connect with your organization

What is the ultimate aim of your organization? How would you determine your organization is successful? The answers to these questions will help you determine the metrics. Most organizations determine their success based on customer experience and overall operational efficiency. You will need to choose metrics that help you determine these two values.

Tie the metrics to your goals

As a businessperson, you are more concerned with customer attrition, bad feedback, and non-returning customers than with the lines of code that go into creating a successful software product. You will need to tie your DevOps success metrics to these goals. While you are concerned about the failure of your website or the downtime, the true concern is the customer’s abandonment of your website.

Causes that affect DevOps

While the business metrics will help you measure success to a certain extent, there are certain things that affect the operations and development teams. You will need to check these causes and go to the root to understand how they affect the DevOps teams, and what needs to be done to create a balance between the development and operational teams.

Next, we will talk about the actual DevOps metrics that you should take into consideration when deriving value for your organization and measuring success.

Velocity

With most enterprise elements being automated, velocity is one of the most important metrics that will determine the success of your DevOps. The idea is to get updates out to users in the quickest way possible, without compromising on security or reliability.
You stay competitive, offer new features, and boost customer retention. The two variables that help measure this tangible metric are deployment frequency and deployment lead time. The former measures the frequency of releases, and the latter measures the speed at which the team commits a code change and pushes out the update.

Service quality

Service quality directly impacts the goals set forth by the organization, and is intangible. The idea is to maintain service quality throughout the releases and changes made to the application. The variables that determine this metric include change failure rate, number of support tickets, and MTTR (mean time to recovery). When you release an update that leads to an error or fault in the application, that counts toward the change failure rate. When there are bugs or performance issues in your releases, and these are being reported, the number of support tickets or errors comes into play. MTTR is the variable that measures the number of issues resolved and the time taken to solve them. The idea is to be more responsive to the problems faced by customers.

User experience

This is the final metric that impacts the success of your DevOps. You need to check if all the features and updates you have insisted upon are in sync with user needs. The variables concerned with measuring this aspect include feature usage and business impact. You will need to check how many people from the target audience are using the new feature update you have released, and determine their personas. You can check the number of sessions, completed transactions, and duration of sessions to quantify the number of people. Check their profiles to get their personas.

Planning your DevOps strategy

It is not easy to roll out DevOps in your organization and expect agility immediately.
You need to have a sound strategy, align it to your business goals, and determine effective DevOps metrics to measure the success of your rollout.

Planning is of the essence for a thorough rollout of DevOps. It is important to consider all data when you have DevOps in your organization. Make sure you store and analyze all of it, and use the data that suits the DevOps metrics you have determined for success. It is important that the DevOps metrics are aligned to your business goals and the objectives you have defined.

About the author: Vishal Virani is the Founder and CEO of Coruscate Solutions, a mobile app development company. He enjoys writing about technology, mobile apps, custom web development, and the latest industry trends.
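As a rough sketch of how the velocity and service-quality variables discussed above might be computed - the record format and the sample data here are invented for illustration, not any standard tool's schema:

```python
from datetime import datetime

# Hypothetical deployment log: (deployed_at, caused_failure, recovery_minutes)
deployments = [
    (datetime(2018, 9, 3), False, 0),
    (datetime(2018, 9, 5), True, 45),
    (datetime(2018, 9, 7), False, 0),
    (datetime(2018, 9, 10), True, 15),
]

# Velocity: how often the team ships
period_days = (deployments[-1][0] - deployments[0][0]).days or 1
deployment_frequency = len(deployments) / period_days   # deploys per day

# Service quality: how often a deploy breaks things, and how fast it's fixed
failures = [d for d in deployments if d[1]]
change_failure_rate = len(failures) / len(deployments)  # fraction of bad deploys
mttr = sum(d[2] for d in failures) / len(failures)      # mean time to recovery, minutes

print(deployment_frequency, change_failure_rate, mttr)
```

Tracking these few numbers over time - rather than raw commit counts - is what ties the metrics back to customer-facing goals.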

8 ways Artificial Intelligence can improve DevOps

Prasad Ramesh
01 Sep 2018
6 min read
DevOps combines development and operations in an agile manner. ITOps refers to network infrastructure, computer operations, and device management. AIOps is artificial intelligence applied to ITOps, a term coined by Gartner. It makes us wonder what AI applied to DevOps would look like. Currently, there are some problem areas in DevOps that mainly revolve around data: accessing the large pool of data, taking actions on it, managing alerts, and so on. Moreover, there are errors caused by human intervention. AI works heavily with data and can help improve DevOps in numerous ways. Before we get into how AI can improve DevOps, let’s take a look at some of the problem areas in DevOps today.

The trouble with DevOps

Human errors: When testing or deployment is performed manually and there is an error, it is hard to repeat and fix. Software development is often outsourced; in such cases, there is a lack of coordination between the dev and ops teams.

Environment inconsistency: Software functionality breaks when the code moves to different environments, as each environment has different configurations. Teams can waste a lot of time on bugs when the software works fine in one environment but not in another.

Change management: Many companies have change management processes well in place, but they are outdated for DevOps. The time taken for reviews, passing a new module, and so on is manual and proves to be a bottleneck. Changes happen frequently in DevOps, and the workflow suffers due to old processes.

Monitoring: Monitoring is key to ensuring smooth functioning in agile. Many companies do not have the expertise to monitor the pipeline and infrastructure. Moreover, monitoring only the infrastructure is not enough: application performance also needs to be monitored, solutions need to be logged, and analytics need to be tracked.

Now let’s take a look at 8 ways AI can improve DevOps given the above context.

1.
Better data access

One of the most critical issues faced by DevOps teams is the lack of unregulated access to data. There is also a large amount of data, while the teams rarely view all of it and focus instead on the outliers. The outliers only work as an indicator but do not give robust information. Artificial intelligence can compile and organize data from multiple sources for repeated use. Organized data is much easier to access and understand than heaps of raw data. This helps predictive analysis and, eventually, better decision making. It is very important and enables many of the other ways listed below.

2. Superior implementation efficiency

Artificially intelligent systems can work with minimal or no human intervention. Currently, a rules-based environment managed by humans is followed in DevOps teams. AI can transform this into self-governed systems to greatly improve operational efficiency. There are limits to the volume and complexity of analysis a human can perform. Given the large volumes of data to be analyzed and processed, AI systems, which excel at this, can set optimal rules to maximize operational efficiency.

3. Root cause analysis

Conducting root cause analysis is very important to fix an issue permanently. Not getting to the root cause allows it to persist and affect other areas further down the line. Often, engineers don’t investigate failures in depth and are more focused on getting the release out. This is not surprising given the limited amount of time they have to work with. If fixing a superficial area gets things working, the root cause is never found. AI can take all data into account and see patterns between activity and cause to find the root cause of a failure.

4. Automation

Complete automation is a problem in DevOps: many tasks are routine and still need to be done by humans. An AI model can automate these repeatable tasks and speed up the process significantly.
A well-trained model increases the complexity of tasks that can be automated by machines. AI can help achieve minimal human intervention so that developers can focus on more complex, interactive problems. Complete automation also allows errors to be reproduced and fixed promptly.

5. Reduce operational complexity

AI can be used to simplify operations by providing a unified view. An engineer can view all the alerts and relevant data produced by the tools in a single place. This improves on the current scenario, where engineers have to switch between different tools to manually analyze and correlate data. Alert prioritization, root cause analysis, and evaluating unusual behavior are complex, time-consuming tasks that depend on data. An organized, singular view is a great benefit when looking up data.

“AI and machine learning makes it possible to get a high-level view of the tool-chain, but at the same time zoom in when it is required.” -SignifAI

6. Predicting failures

A critical failure in a particular tool or area in DevOps can cripple the process and delay cycles. With enough data, machine learning models can predict when an error will occur. This goes beyond simple predictions: if a fault is known to produce certain readings, AI can read patterns and predict the signs of failure. AI can see indicators that humans may not be able to. Early failure prediction and notification enable the team to fix issues before they can affect the software development life cycle (SDLC).

7. Optimizing a specific metric

AI can work towards solutions where uptime is maximized. An adaptive machine learning system can learn how the system works and improve it. Improving could mean tweaking a specific metric in the workflow for optimized performance. Configurations can be changed by AI for optimal performance as required during different production phases. Real-time analysis plays a big part in this.
8. Managing alerts

DevOps systems can be flooded with alerts, which are hard for humans to read and act upon. AI can analyze these alerts in real time and categorize them. Assigning priority to alerts helps teams work on fixing them rather than wading through a long list. The alerts can simply be tagged with a common ID for specific areas, or AI can be trained to classify good and bad alerts. Prioritizing alerts so that flaws are surfaced first helps smooth functioning.

Conclusion

As we saw, most of these areas depend heavily on data, so getting the system right to enhance data accessibility is the first step to take. Predictions work better when data is organized, and performing root cause analysis is also easier. Automation can take over mundane tasks and allow engineers to focus on more interactive problems that machines cannot handle. With machine learning, overall operational efficiency, simplicity, and speed can be improved for the smooth functioning of DevOps teams.

Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
Top 7 DevOps tools in 2018
GitLab’s new DevOps solution
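The alert management described in point 8 can be sketched even without machine learning: deduplicate by a shared ID, count repeats, and surface the highest severity first. The alert format below is invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical alert stream -- note the duplicate db-latency alert
alerts = [
    {"id": "db-latency", "severity": 3, "msg": "query time above 200ms"},
    {"id": "disk-full", "severity": 5, "msg": "/var at 98% capacity"},
    {"id": "db-latency", "severity": 3, "msg": "query time above 200ms"},
    {"id": "cert-expiry", "severity": 1, "msg": "TLS cert expires in 30 days"},
]

# Deduplicate by shared ID, counting repeats and keeping the worst severity.
grouped = defaultdict(lambda: {"count": 0, "severity": 0, "msg": ""})
for a in alerts:
    g = grouped[a["id"]]
    g["count"] += 1
    g["severity"] = max(g["severity"], a["severity"])
    g["msg"] = a["msg"]

# Surface the most hazardous alerts first.
triaged = sorted(grouped.items(), key=lambda kv: kv[1]["severity"], reverse=True)
for alert_id, info in triaged:
    print(f"[sev {info['severity']}] {alert_id} x{info['count']}: {info['msg']}")
```

An ML classifier would go further - learning which alert patterns actually precede incidents - but even this simple triage spares engineers from scanning the raw flood.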

Why choose Ansible for your automation and configuration management needs?

Savia Lobo
03 Jul 2018
4 min read
Of late, organizations have been moving towards getting their systems automated. The benefits are many. Firstly, it saves a huge chunk of time, and secondly, it saves investment in human resources for simple tasks such as updates and so on. A few years back, Chef and Puppet were the two popular names when it came to tools for software automation. Over the years, they have gained a strong rival which has surpassed them and now sits as one of the most popular tools for software automation: Ansible.

Ansible is an open source tool for IT configuration management, deployment, and orchestration. It is perhaps the definitive configuration management tool. Chef and Puppet may have got there first, but its rise over the last couple of years is largely down to its impressive automation capabilities. And with operations engineers and sysadmins facing constant time pressures, the need to automate isn’t a “nice to have”, but a necessity. Its tagline is “allowing smart people to do smart things.” It’s hard to argue that any software should aim to do much more than that.

Ansible’s rise in popularity

Ansible originated in 2013 and is a leader in IT automation and DevOps. It was bought by Red Hat in 2015 to achieve its goal of creating frictionless IT. The reasons Red Hat acquired Ansible were its simplicity and versatility. It had the second-mover advantage of entering the DevOps world after Puppet, and it can orchestrate multi-tier applications in the cloud. This improves server uptime by implementing an immutable server architecture for deploying, creating, deleting, or migrating servers across different clouds. For those starting afresh, it is easy to write and maintain automation workflows, and it offers a plethora of modules which make it easy for newbies to get started.

Benefits to Red Hat and its community

Ansible complements Red Hat’s popular cloud products, OpenStack and OpenShift.
Red Hat proved to be complex yet safe open source software for enterprises. However, it was not easy to use, and because of this many developers started migrating to other cloud services for easy and simple deployment options. By adopting Ansible, Red Hat finally provided an easy way to automate and modernize their IT solutions. Customers can now focus on automating various baseline tasks. It also helps Red Hat refresh its traditional playbooks; it allows enterprises to manage IT services and infrastructure together with the help of Ansible’s YAML.

The most prominent benefit of using Ansible, for both enterprises and individuals, is that it is agentless. It achieves this by leveraging SSH and Windows Remote Management. Both approaches reuse connections and generate minimal network traffic. Being agentless also has security benefits and improves both client and central management server resource utilization. Thus, the user does not have to worry about network or server management, and can focus on higher-priority tasks.

What can you use it for?

Easy configuration: Ansible provides developers with configurations that are easy to understand, by both humans and machines. It also includes many modules and user-built roles, so one need not start building from scratch.

Application lifecycle management: One can rest assured about the application development lifecycle with Ansible. Here, it is used for defining the application, and Red Hat Ansible Tower is used for managing the entire deployment process.

Continuous delivery: Manage your releases with the help of Ansible’s push-based architecture, which allows sturdier control over all the required operations. Orchestration of server configuration in batches makes it easy to roll out changes across the environment.

Security and compliance: While security policies are defined in Ansible, one can choose to integrate the process of scanning and solving issues across the site into other automated processes.
Scanning of jobs and system tracking ensure that systems do not deviate from the parameters assigned. Additionally, Ansible Tower provides secure storage for machine credentials and role-based access control (RBAC).

Orchestration: Ansible brings a high degree of discipline and order to the environment. This ensures all application pieces work in unison and are easily manageable, despite the complexity of the applications.

Though it is popular as an IT automation tool, many organizations use Ansible in combination with Chef and Puppet, because it can have scaling issues and lag in performance for larger deployments. Don’t let that stop you from trying Ansible; it is much loved by DevOps practitioners, as it is written in Python and thus easy to learn. Moreover, it offers credible support and an agentless architecture, which makes it easy to control servers and much more within an application development environment.

An In-depth Look at Ansible Plugins
Mastering Ansible – Protecting Your Secrets with Ansible
Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
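Ansible's core idea - declare the desired state and change only what differs - can be illustrated with a toy reconciliation loop. Ansible itself uses YAML playbooks applied over SSH; the Python below is a simplified stand-in, not Ansible's API, and the keys and values are invented for illustration:

```python
def reconcile(current: dict, desired: dict) -> list[str]:
    """Return the changes needed to move `current` to `desired`.
    Running it twice in a row yields no changes the second time --
    the idempotency that configuration management tools guarantee."""
    changes = []
    for key, value in desired.items():
        if current.get(key) != value:
            changes.append(f"set {key}={value}")
            current[key] = value
    return changes

server = {"nginx": "absent", "port": 80}
desired = {"nginx": "installed", "port": 8080}

print(reconcile(server, desired))  # changes applied on the first run
print(reconcile(server, desired))  # second run: nothing left to do
```

Because only the differences are applied, a playbook can be run repeatedly - on one server or a thousand - without piling up side effects.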

Docker isn't going anywhere

Savia Lobo
22 Jun 2018
5 min read
To create good software, developers often have to weave together the UI, frameworks, databases, libraries, and of course a whole bunch of code modules. These elements together build an immersive user experience on the front end. However, deploying and testing software is complex these days, as all these elements need to be properly set up in order to build successful software. Here, containers are a great help, as they enable developers to pack all the contents of their app, including the code, libraries, and other dependencies, and ship it as a single package. One can think of software as a puzzle, and containers simply help get all the pieces into their proper positions for the effective functioning of the software. Docker is one of the popular container choices.

The rise of Docker containers

Linux containers have been in the market for almost a decade. However, it was after the release of Docker five years ago that developers widely started using containers in a simple way. At present, containers, especially Docker containers, are popular and in use everywhere, and this popularity seems set to stay. As per our Packt Skill Up developer survey on top sysadmin and virtualization tools, almost 46% of the developer crowd voted that they use Docker containers on a regular basis. It ranked third, after Linux and Windows OS in the lead.

Source: Packt Skill Up survey 2018

Organizations such as Red Hat, Canonical, Microsoft, and Oracle, along with all other major IT companies and cloud businesses, have adopted Docker.

Docker is often confused with virtual machines; read our article on Virtual machines vs Containers to understand the differences between the two. VMs such as Hyper-V, KVM, Xen, and so on are based on the concept of emulating hardware virtually. As such, they come with huge system requirements. On the other hand, Docker containers (and containers in general) share the host's OS and kernel.
Docker is also just right if you want to use minimal hardware to run multiple copies of your app at the same time, which in turn saves data centers huge annual costs in power and hardware. Docker containers boot within a fraction of a second, unlike virtual machines, which must load 10-20 GB of operating system data and are therefore much slower to start.

For CI/CD, Docker makes it easy to set up local development environments that replicate a live server. It lets you run multiple development environments with different software, OS versions, and configurations, all from the same host. You can run test projects on new or different servers, and team members can work on the same project with identical settings, irrespective of the local host environment. Docker can also be deployed to the cloud, as it is designed to integrate with most DevOps platforms, including Puppet and Chef, and you can even manage standalone development environments with it.

Why developers love Docker

Docker brought novel ideas to the market, starting with making containers easy to use and deploy. In 2014, Docker announced that it was partnering with the major tech leaders Google, Red Hat, and Parallels on its open-source component libcontainer, making libcontainer the de facto standard for Linux containers. Microsoft also announced that it would bring Docker-based containers to its Azure cloud. Docker has donated its software container format and its runtime, along with their specifications, to the Linux Foundation's Open Container Project. This project includes all the contents of the libcontainer project and nsinit, with modifications so that it can run independently of Docker. Docker's containerd is also hosted by the Cloud Native Computing Foundation (CNCF).

A few reasons why so many developers prefer Docker: it has a great user experience, and it lets developers work in the programming language of their choice.
It also requires less coding, and it runs on any operating system, including Windows, Linux, and macOS.

The Docker Kubernetes combo

General DevOps tools can be used to deploy and monitor Docker containers, but they are not highly optimized for the task. Containers need to be monitored individually, and they are typically deployed at high density. The practical solution is a cloud orchestration tool, and Kubernetes is the most dominant one on the market. Since Kubernetes has the bigger community and the bigger market share, Docker made a smart move by including Kubernetes as one of its offerings. With this, Docker users and customers get not only Kubernetes' secure orchestration experience but also an end-to-end Docker experience.

Docker offers an A-to-Z experience for developers and system administrators. With Docker, developers can focus on writing code and forget about the rest of the deployment. They can also make use of programs designed to run on Docker in their own projects. System administrators, in turn, can reduce system overhead compared to VMs, and Docker's portability and ease of installation save admins the time otherwise lost installing individual VM components. With Google, Microsoft, Red Hat, and others absorbing Docker technology into their daily operations, it is surely not going anywhere soon. Docker's future is bright, and we can expect machine learning to be a part of it sooner rather than later.

Read next:
Are containers the end of virtual machines?
How to ace managing the Endpoint Operations Management Agent with vROps
Atlassian open sources Escalator, a Kubernetes autoscaler project

What are lightweight Architecture Decision Records?

Richard Gall
16 May 2018
4 min read
Architecture Decision Records (ADRs) document all the decisions made about software. Every change is recorded in a plain text file sitting inside a version control system (like GitHub). The record should be a complement to the information you can find in the version control system itself: the ADR provides context and information around every decision made about a piece of software.

Why are lightweight Architecture Decision Records needed?

We are always making decisions when we build software. Even the simplest piece of software will have required the engineer to take a number of different decisions, and often these decisions aren't obvious. If you've ever had to work with code written by someone else, you're probably familiar with this sort of situation. You might even have found that when you come across someone else's code, you need to make a further decision: either you simply accept what has been written, merely surmising why it was done that way, or you decide to change it based on your own judgement. Neither option is ideal.

This is what Michael Nygard identified in a blog post in 2011, which is when the concept of Architecture Decision Records first emerged. An ADR should prevent situations like this from arising. That makes life easier for you. More importantly, it means that every decision is transparent to everyone involved. So, instead of blindly accepting something or immediately changing it, you can simply check the Architecture Decision Record, and this informs how you proceed. Perhaps you need to make a change; but perhaps you also now understand the context of why something was built the way it was. Any questions you might have should be explicitly answered in the record. When you start asking yourself "why has she done it like that?", instead of floundering helplessly you can find the answer in the ADR.

Why lightweight Architecture Decision Records now?
Architecture Decision Records aren't a new thing; Nygard wrote his post all the way back in 2011, after all. But Nygard was writing from a very specific context; today that context is mainstream. As we've moved away from monolithic architecture towards microservices and serverless, decision making has become more and more important in software engineering. This point is well explained in a blog post here:

"The rise of lean development and microservices... complicates the ability to communicate architecture decisions. While these concepts are not inherently opposed to documentation, their processes often fail to effectively capture decision-making processes and reasoning. Another possible inefficiency when recording decisions is bad or out-of-date documentation. It's often a herculean effort to keep large, complex architecture documents current, making maintenance one of the most common barriers to entry."

ADRs, then, are a way of managing the complexity of modern software engineering. They are a response to a fundamental need to communicate decisions better. Most importantly, they codify decision-making within the development process, and they are most effective when they are lightweight and sit within the project itself.

Architecture Decision Record template

Architecture Decision Records should follow a template. Not only does that mean everyone is working off the same page, it also makes people more likely to actually document their decisions. Think about it: if you're asked to note down how you decided to do something without any guidelines, you're probably not going to do it at all. Below, you'll find an example Architecture Decision Record template. There are a number of different templates you can use, but it's probably best to sit down with your team and agree on what needs to be captured.
An Architecture Decision Record example:

Date
Decision makers [who was involved in the decision taken]
Category [which part of the architecture this decision pertains to]
Contextual outline [explain why this decision was made; outline the key considerations and assumptions at play]
Impact consequences [what does this decision mean for the project? What should someone reading this be aware of in terms of future decisions?]

As I've already noted, there are a huge number of ways you may want to approach this. Use this as a starting point.

Read next:
Enterprise Architecture Concepts
Reactive Programming and the Flux Architecture
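To make such a template part of the project itself, it can be scaffolded with a short script. The sketch below is illustrative rather than a standard tool: the `docs/adr` directory, the `NNNN-title.md` naming scheme, and the section names simply mirror the example template above.

```python
from datetime import date
from pathlib import Path

# Section names mirror the example template above; the numbered
# file-name convention is a common practice, not a requirement.
ADR_TEMPLATE = """# {number:04d}. {title}

Date: {day}
Decision makers: {makers}
Category: {category}

## Contextual outline
{context}

## Impact consequences
{impact}
"""

def write_adr(directory, number, title, makers, category, context, impact):
    """Render one ADR from the template and return the file path."""
    directory = Path(directory)
    directory.mkdir(parents=True, exist_ok=True)
    body = ADR_TEMPLATE.format(
        number=number,
        title=title,
        day=date.today().isoformat(),
        makers=", ".join(makers),
        category=category,
        context=context,
        impact=impact,
    )
    slug = title.lower().replace(" ", "-")
    path = directory / f"{number:04d}-{slug}.md"
    path.write_text(body)
    return path
```

A call such as `write_adr("docs/adr", 1, "Use PostgreSQL", ["A. Dev", "B. Ops"], "Persistence", "...", "...")` drops a reviewable plain-text record straight into version control, which is exactly where Nygard suggests it should live.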

Top 7 DevOps tools in 2018

Vijin Boricha
25 Apr 2018
5 min read
DevOps is a methodology, or even a philosophy: a way of reducing the friction between development and operations. But while we could talk about what DevOps is and isn't for decades (and people probably will), there is a range of DevOps tools that are integral to putting its principles into practice. So, while it's true that adopting a DevOps mindset will make the way you build software more efficient, it's pretty hard to put DevOps into practice without the right tools. Let's take a look at some of the best DevOps tools out there in 2018. You might not use all of them, but you're sure to find something useful in at least one of them, probably a combination of them.

DevOps tools that help put the DevOps mindset into practice

Docker

Docker is software that performs OS-level virtualization, also known as containerization. Docker uses containers to package up all the requirements and dependencies of an application, making it shippable to on-premises devices, data center VMs, or even the cloud. It was developed by Docker, Inc. in 2013, with complete support for Linux and limited support for Windows; by 2016, Microsoft had already announced integration of Docker with Windows 10 and Windows Server 2016. As a result, Docker enables developers to easily pack, ship, and run any application as a lightweight, portable container that can run virtually anywhere.

Jenkins

Jenkins is an open source continuous integration server written in Java. When it comes to integrating DevOps processes, continuous integration plays the most important part, and this is where Jenkins comes into the picture. It was released in 2011 and helps developers integrate the stages of DevOps through a variety of built-in plugins. Jenkins is one of those prominent tools that helps developers find and fix code bugs quickly and automates the testing of their builds.
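To give a flavour of how Jenkins expresses a build, here is a minimal, hypothetical declarative Jenkinsfile; the stage names and shell commands are placeholders, not from the article:

```groovy
// A build-test-deploy pipeline checked into the repository itself,
// so Jenkins runs it on every commit.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './gradlew assemble' }
        }
        stage('Test') {
            steps { sh './gradlew test' }
        }
        stage('Deploy') {
            // Only deploy from the main branch.
            when { branch 'main' }
            steps { sh './deploy.sh' }
        }
    }
}
```

Because the pipeline lives in the repository, the build definition is versioned alongside the code it builds.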
Ansible

Ansible was developed by the Ansible community back in 2012 to automate network configuration, software provisioning, development environments, and application deployment. In a nutshell, it delivers simple IT automation that puts a stop to repetitive tasks, which in turn helps DevOps teams focus on more strategic work. Ansible is completely agentless and uses a simple syntax written in YAML, pushing configuration from a control machine to the nodes it manages.

Puppet

Puppet is an open source configuration management tool used to deploy, configure, and manage servers, written in Ruby and Clojure. It was released back in 2005, licensed under the GNU General Public License (GPL) until version 2.7.0 and under the Apache License 2.0 thereafter. Puppet uses a master-slave architecture in which the master and slaves communicate over secure encrypted channels. Puppet runs on any platform that supports Ruby, for example CentOS, Windows Server, and Oracle Enterprise Linux.

Git

Git is a version control system that allows you to track file changes, which in turn helps team members coordinate their work on those files. Git was released in 2005, initially for Linux kernel development, and its primary use case is source code management in software development. Git is a distributed version control system, where every contributor can create a local repository by cloning the entire main repository. The main advantage of this model is that contributors can update their local repositories without any interference with the main repository.

Vagrant

Vagrant is an open source tool released in 2010 by HashiCorp, used to build and maintain virtual environments. It provides a simple command-line interface for managing virtual machines with custom configurations, so that DevOps team members share an identical development environment.
While Vagrant is written in Ruby, it supports development in all major languages, and it works seamlessly on Mac, Windows, and all popular Linux distributions. If you are considering building a portable, scalable, and lightweight environment, Vagrant is your solution.

Chef

Chef is a powerful configuration management tool used to transform infrastructure into code. It was released back in 2009 and is written in Ruby and Erlang. Chef uses a pure-Ruby domain-specific language (DSL) to write system configuration 'recipes', which are collected into cookbooks for easier management. Unlike Puppet's master-slave architecture, Chef uses a client-server architecture. Chef supports multiple cloud environments, which makes it easy to manage data centers and maintain high availability.

Think carefully about the DevOps tools you use

To increase efficiency and productivity, the right tool is key. In a fast-paced world where DevOps engineers and their teams do all this extensive work, it is really hard to find the tool that fits your environment perfectly. Your best bet is to choose your tool based on the methodology you are going to adopt, so before making a hard decision it is worth taking a step back to analyze what would work best to increase your team's productivity and efficiency. The tools above have been shortlisted based on current market adoption. We hope you find one in this list that saves you a lot of the time otherwise spent choosing.

Learning resources

Here is a small selection of books and videos from our DevOps portfolio to help you and your team master the DevOps tools that fit your requirements:

Mastering Docker (Second Edition)
Mastering DevOps [Video]
Mastering Docker [Video]
Ansible 2 for Beginners [Video]
Learning Continuous Integration with Jenkins (Second Edition)
Mastering Ansible (Second Edition)
Puppet 5 Beginner's Guide (Third Edition)
Effective DevOps with AWS

AWS Fargate makes Container infrastructure management a piece of cake

Savia Lobo
17 Apr 2018
3 min read
Containers such as Docker and FreeBSD jails are a substantial way for developers to develop and deploy their applications, and with container orchestration solutions such as Amazon ECS and EKS (Kubernetes), developers can easily manage and scale those containers. However, in spite of these management solutions, one still has to take account of infrastructure maintenance, availability, capacity, and so on, which are added tasks. AWS Fargate eases these tasks and streamlines all deployments for you, resulting in faster completion of deliverables.

At re:Invent in November 2017, AWS launched Fargate, a technology which enables one to manage containers without having to worry about the container infrastructure underneath. It is an easy way to deploy your containers on AWS: one can start using Fargate on ECS or EKS, try out processes and workloads, and later migrate workloads to Fargate. It eliminates most of the management containers usually require, such as resource placement, scheduling, and scaling. All you have to do is:

Build your container image,
Specify the CPU and memory requirements,
Define your networking and IAM policies, and
Launch your container application.

Some key benefits of AWS Fargate

It allows developers to focus on the design, development, and deployment of applications, eliminating the need to manage a cluster of Amazon EC2 instances. One can easily scale applications using Fargate: once application requirements such as CPU and memory are defined, Fargate manages the scaling and infrastructure needed to keep containers highly available. One can launch thousands of containers in no time and scale them to run even mission-critical applications. AWS Fargate is integrated with Amazon ECS and EKS.
Fargate launches and manages containers once the CPU and memory requirements and the IAM policies the container needs are defined and uploaded to Amazon ECS. With Fargate, one gets flexible configuration options to match an application's needs, and billing has per-second granularity.

Adoption of container management as a trend is steadily increasing. Kubernetes is at present one of the most popular and widely used containerized-application management platforms. However, users and developers are often confused about which managed Kubernetes provider is best. Microsoft and Google have their own managed Kubernetes services, but AWS Fargate adds ease to Amazon's EKS (Elastic Container Service for Kubernetes) by eliminating the hassle of container infrastructure management.

Read more about AWS Fargate on AWS' official website.
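The "define CPU, memory, networking, and IAM, then launch" workflow boils down to registering a task definition with ECS. A hypothetical minimal Fargate task definition might look like the following; the account ID, role, and image URI are placeholders:

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

Once this is registered, launching the container with the Fargate launch type starts it with no EC2 instances to provision or manage.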
The key differences between Kubernetes and Docker Swarm

Richard Gall
02 Apr 2018
4 min read
The orchestration war between Kubernetes and Docker Swarm appears to be over. Back in October, Docker announced that its Enterprise Edition could be integrated with Kubernetes. This move was widely seen as the Docker team conceding to Kubernetes' dominance as an orchestration tool. But Docker Swarm nevertheless remains popular; it doesn't look like it's about to fall off the face of the earth. So what is the difference between Kubernetes and Docker Swarm? And why should you choose one over the other?

To start with, it's worth saying that both container orchestration tools have a lot in common. Both let you run a cluster of containers, allowing you to increase the scale of your container deployments significantly without cloning yourself to mess about with the Docker CLI (although, as you'll see, you could argue that one is more suited to scalability than the other). Ultimately, you'll need to view the various features and key differences between Docker Swarm and Kubernetes in terms of what you want to achieve. Do you want to get up and running quickly? Are you looking to deploy containers on a huge scale? Here's a brief but useful comparison of Kubernetes and Docker Swarm. It should help you decide which container orchestration tool you should be using.

Docker Swarm is easier to use than Kubernetes

One of the main reasons you'd choose Docker Swarm over Kubernetes is that it has a much more straightforward learning curve. As popular as it is, Kubernetes is regarded by many developers as complex; many people complain that it is difficult to configure. Docker Swarm, meanwhile, is actually pretty simple. It's much more accessible for less experienced programmers. And if you need a container orchestration solution now, simplicity is likely going to be an important factor in your decision making.

...But Docker Swarm isn't as customizable

Although ease of use is definitely one thing Docker Swarm has over Kubernetes, it also means there's less you can actually do with it.
Yes, it gets you up and running, but if you want to do something a little different, you can't. You can configure Kubernetes in a much more tailored way than Docker Swarm. That means that while the learning curve is steeper, the possibilities and opportunities open to you will be far greater.

Kubernetes gives you auto-scaling; Docker Swarm doesn't

When it comes to scalability, it's a close race. Both tools are able to run around 30,000 containers on 1,000 nodes, which is impressive. However, when it comes to auto-scaling, Kubernetes wins, because Docker doesn't offer that functionality out of the box.

Monitoring container deployments is easier with Kubernetes

This is where Kubernetes has the edge: it has in-built monitoring and logging solutions. With Docker Swarm you'll have to use third-party applications. That isn't necessarily a huge problem, but it does make life ever so slightly more difficult. Whether that extra difficulty outweighs Kubernetes' steeper learning curve, however, is another matter...

Is Kubernetes or Docker Swarm better?

Clearly, Kubernetes is a more advanced tool than Docker Swarm. That's one of the reasons why the Docker team backed down and opened up their enterprise tool for integration with Kubernetes. Kubernetes is simply the software that's defining container orchestration, and that's fine: Docker has cemented its position within the stack of technologies that support software automation and deployment. It's time to let someone else take on the challenge of orchestration.

But although Kubernetes is the more 'advanced' tool, that doesn't mean you should overlook Docker Swarm. If you want to begin deploying container clusters without the need for specific configurations, then don't allow yourself to be seduced by something shinier, something ostensibly more popular. As with everything else in software development, understand and define what job needs to be done, then choose the right tool for the job.
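For a sense of what Kubernetes' out-of-the-box auto-scaling looks like in practice, here is a sketch of a HorizontalPodAutoscaler manifest; the Deployment name, replica range, and CPU threshold are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Swarm has no equivalent built-in object; you would have to script the scaling logic yourself or bolt on a third-party tool.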

5 things to remember when implementing DevOps

Erik Kappelman
05 Dec 2017
5 min read
DevOps is a much more realistic and efficient way to organize the creation and delivery of technology solutions to customers. But like practically everything else in the world of technology, DevOps has become a buzzword that is often thrown around willy-nilly. Let's cut through the fog and highlight concrete steps that will help an organization implement DevOps.

DevOps is about bringing your development and operations teams together

This might seem like a no-brainer, but DevOps is often explained in terms of tools rather than techniques or philosophical paradigms. At its core, DevOps is about uniting developers and operators, getting these groups to communicate effectively with each other, and then using this new communication to streamline various processes. This could include a physical change to the layout of an organization's workspace; it's incredible what changes can happen just by altering the seating arrangements in an office. If you have a very large organization, development and operations might be in separate buildings, separate campuses, or even separate cities. While the efficacy of web-based communication has increased dramatically over the last few years, there is still no replacement for daily face-to-face human interaction. Putting developers and operators in the same physical space will increase the rate of adoption and the efficacy of the various DevOps tools and techniques.

DevOps is all about updates

Updates can be aimed at expanding functionality or simply at fixing or streamlining existing processes. Updates present a couple of problems to developers and operators. First, we need to keep everybody working on the same codebase. This can be achieved using a variety of continuous integration tools. The goal of continuous integration is to make sure that changes and updates to the codebase are implemented as close to continuously as possible.
This helps avoid the merge problems that can result from multiple developers working on the same codebase at the same time. Second, these updates need to be integrated into the final product. For this task, DevOps applies the concept of continuous deployment. This is essentially the same idea as continuous integration, but it concerns deploying changes to production as opposed to integrating changes into the codebase. In terms of importance to the DevOps process, continuous integration and continuous deployment are equally important: moving updates from a developer's workspace to the codebase to production should be seamless, smooth, and continuous.

Implementing a microservices structure is imperative for an effective DevOps approach

Microservices are an extension of the service-based structure. Basically, a service structure calls for modularizing a solution's codebase into units based on functionality; microservices take this a step further by implementing a service-based structure in which each service performs a single task. While a service-based or microservice structure is not strictly required to implement DevOps, there is little reason not to use one, because microservices lend themselves so well to DevOps.

One way to think of a microservice structure is to imagine an ant hill in which all of the worker ants are microservices. Each ant has a specific set of abilities and is given a task from the queen. The ant then autonomously performs this task, usually gathering food, along with all of its ant friends. Remove a single ant from the pile and nothing really happens; replace an old ant with a new ant and nothing really happens. The metaphor isn't perfect, but it strikes at the heart of why microservices are valuable in a DevOps framework. If we need to be continuously integrating and deploying, shouldn't we try to impact the codebase as directly as we can? When microservices are in use, changes can be made at an extremely granular level.
This allows continuous integration and deployment to really shine.

Monitor your DevOps solutions

In order to deploy continuously, applications also need to be monitored continuously. This allows problems to be identified quickly, and when problems are identified quickly, the total effort required to fix them tends to shrink. Your application should obviously be monitored for whether or not it is working as it currently should, but users also need to be able to give feedback on the application's functionality, and, when reasonable, this feedback should be integrated back into the application. Monitoring user feedback tends to fall by the wayside in discussions of DevOps. It shouldn't: the whole point of the DevOps process is to improve the user experience, and if you're not getting feedback from users in a timely manner, it's kind of impossible to improve their experience.

Keep it loose and experiment

Part of the beauty of DevOps is that it allows for more experimentation than other development frameworks. When microservices and continuous integration and deployment are being fully utilized, it's fairly easy to incorporate experimental changes into applications. If an experiment fails, or doesn't do exactly what was expected, it can be removed just as easily. Basically, remember why DevOps is being used and really try to get the most out of it.

DevOps can be complicated, and boiling anything down to five steps is difficult, but if you act on these five fundamental principles you will be well on your way to putting DevOps into practice. And while it's fun to talk about what DevOps is and isn't, ultimately that's the whole point: to actually uncover a better way to work with others.
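As a tiny sketch of the continuous-monitoring idea, the helper below polls a health probe with exponential backoff. The probe itself, say an HTTP GET against a /healthz endpoint, is left as an injected callable; that interface is an assumption of this sketch rather than anything prescribed by the article.

```python
import time

def wait_until_healthy(probe, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call `probe` (a zero-argument callable returning True when the
    service is up) until it succeeds, sleeping with exponential backoff
    between tries. Returns the attempt number that succeeded, or raises
    RuntimeError so an alert can fire."""
    for attempt in range(1, attempts + 1):
        if probe():
            return attempt
        if attempt < attempts:
            sleep(base_delay * 2 ** (attempt - 1))
    raise RuntimeError("service did not become healthy")
```

In production the probe would wrap a real check, for example `lambda: requests.get(url).ok`, and the failure branch would page whoever owns the service; injecting `sleep` keeps the helper testable without real delays.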