Tech Guides - Cloud & Networking

Is serverless architecture a good choice for app development?

Mehul Rajput
11 Oct 2019
6 min read
App development has evolved rapidly in recent years. With new demands and expectations from businesses and users, trends like cloud have helped developers to be more productive and to build faster, more reliable, and more secure applications. But there's no end to evolution, and serverless is arguably the next step for application development. But is a serverless architecture the right choice?

What is a serverless architecture?

When you hear the word serverless, you might assume that it means no servers. In fact, it refers to eliminating the need to manage servers, shifting that responsibility to your cloud provider. Simply put, the constituent parts of an application are divided between multiple servers, with no need for the application owner/manager to create or manage the infrastructure that supports it.

Instead of running off a server, a serverless application runs off functions. These are essentially actions that are fired off to make things happen within the application. This is where the phrase 'function-as-a-service', or FaaS (another way of describing serverless), comes from. A recent report projects that the FaaS market will reach 7.72 billion US dollars by 2021, growing at a compound annual rate of 32.7%.

Is serverless architecture a good choice for app development?

Now that we've established what serverless actually means, let's get down to business. Is serverless architecture the right choice for app development? It can work either way; there are positives as well as negatives.

Using serverless for app development: the positives

There are many reasons why serverless architecture can be good for app development. Some of them are discussed below:

Decreasing costs
Easier service
Scalability
Third-party services

Decreasing costs

The most compelling benefit of a serverless architecture in the app development process is that it reduces costs. It's typically less expensive than a 'traditional' server architecture, because with hardware servers you pay for many things that serverless does not require: regular maintenance, the premises, electricity, and staffing. You can save a considerable amount of money and invest it in app quality instead.

Easier service

When the owner or app manager no longer has to manage the server themselves, keeping the service available becomes far less challenging. First, the job becomes easier because it does not require constant supervision. Second, you do not have to spend time on it, and can use that time for productive work such as product development. Third, the service provided by this technology is reliable, so you can use it without much fear.

Scalability

Another interestingly useful advantage of serverless architecture in app development is scalability. So, what is scalability? It is the capability of a system to handle an extra amount of work by adding resources, so that an app or product continues to work appropriately and without disturbance as it grows in size or volume to meet user demand. A serverless architecture acts as that pool of resources, added to the system to handle whatever work has piled up.
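To make the function model concrete, here is a minimal sketch of a FaaS handler in the style of AWS Lambda's Python runtime. The event field and greeting logic are illustrative assumptions rather than anything from a specific application; the point is that the platform, not the owner, decides how many copies of this function run in parallel.

```python
# A minimal FaaS handler sketch in the style of AWS Lambda's Python runtime.
# The 'name' event field is a hypothetical example input.
import json

def handler(event, context):
    # Each invocation is an independent action fired off by the platform;
    # the provider runs as many copies in parallel as traffic demands,
    # which is where the scalability described above comes from.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The handler itself holds no server state, which is precisely what lets the provider create and destroy copies of it freely.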
Third-party services

Another useful feature of serverless architecture is that you can use third-party services. Your app can use any third-party service it requires beyond what you already have, which reduces the effort needed to create the app's backend architecture. Additionally, a third party might provide better services than you could build yourself. Serverless architecture therefore proves its worth by giving you the reach of third parties.

Serverless for app development: the negatives

Now that we know the advantages of a serverless architecture, it's important to note that it can also bring some limitations and disadvantages. These are:

Time restrictions
Vendor lock-in
Multi-tenancy
Debugging is not possible

Time restrictions

As mentioned before, serverless architecture works on FaaS rules and has a time limit for running a function. This time limit is 300 seconds, and when it is reached, the function is stopped. Therefore, for more complex functions that require more time to execute, the FaaS approach may not be a good choice. The problem can often be tackled by splitting a task into several simpler functions, if the task allows it. Otherwise, time restrictions like these can cause great difficulty.

Vendor lock-in

We have discussed that serverless architecture lets us utilize third-party services. This can also go the wrong way and cause vendor lock-in. If, for any reason, you decide to shift to a new service provider, in most cases services will be fulfilled in a different way. That means the productivity gains you expected from serverless will be lost, as you will have to adjust and reconfigure the infrastructure to accept the new service.

Multi-tenancy

Multi-tenancy is an increasing problem in serverless architecture. The data of many tenants is kept quite near to each other, which can create confusion: some data might be exchanged, distributed, or even lost. In turn, this can cause security and reliability issues. A customer could, for example, suddenly produce an extraordinarily high load that affects other customers' applications.

Debugging is not possible

Conventional debugging isn't possible with serverless. Because the code runs on infrastructure you don't control, there is no debugging facility where the uploaded code can be stepped through. To understand a function, you run it and wait for the result; the function can crash, and you cannot do much about it directly. There is, however, a way to mitigate this problem: extensive logging. With every step being logged, the chances of errors going undiagnosed decrease.

Conclusion

Serverless architecture certainly seems impressive in spite of its limitations. There is no doubt that the viability and success of an architecture depend on the business requirements and, of course, on the technology used. In the same way, serverless can shine if used in the appropriate case. I hope this blog has helped you understand serverless architecture for mobile apps and see both its bright and dark sides.

Author Bio

Mehul Rajput is the CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions to businesses from startup to enterprise level.
He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups, and business.

What is serverless architecture and why should I be interested?
Introducing numpywren, a system for linear algebra built on a serverless architecture
Serverless Computing 101
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 2

Why is Pentaho 8.3 great for DataOps?

Guest Contributor
07 Oct 2019
6 min read
Announced in July, Pentaho 8.3 is the latest version of the data integration and analytics platform software from Hitachi Vantara. Along with new and improved features, this version supports DataOps, a collaborative data management practice that helps customers access the full potential of their data.

"DataOps is about having the right data, in the right place, at the right time, and the new features in Pentaho 8.3 ensure just that," said John Magee, vice president, Portfolio Marketing, Hitachi Vantara. "Not only do we want to ensure that data is stored at the lowest cost at the right service level, but that data is searchable, accessible and properly governed so actionable insights can be generated and the full economic value of the data is captured."

How Pentaho prevents the loss of data

According to Stewart Bond, research director, Data Integration and Integrity Software, and Chandana Gopal, research director, Business Analytics Solutions, both from IDC, "A vast majority of data that is generated today is lost. In fact, only about 2.5% of all data is actually analyzed. The biggest challenge to unlocking the potential that is hidden within data is that it is complicated, siloed and distributed. To be effective, decision makers need to have access to the right data at the right time and with context."

The struggle is how to manage all the incoming data in a way that exposes everyone to what's coming down the pipeline. When data is siloed, there's no guarantee the right people are seeing it and analyzing it. Pentaho provides a single platform to help businesses keep up with data growth in a way that enables real-time data ingestion. With the available data services, you can:

Make data sets immediately available for reports and applications.
Reduce the time needed to create data models.
Improve collaboration between business and IT teams.
Analyze results with embedded machine learning and deep learning models without knowing how to code them into data pipelines.
Prepare and blend traditional data with big data.

Making all the data more accessible across the board is a key feature of Pentaho that this latest release continues to strengthen.

What's new in Pentaho 8.3?

DataOps limits the overall cycle time of big data analytics: from the initial origin of an idea to the creation of the final visualization, the whole data analytics process is transformed. Pentaho 8.3 is designed to promote easy management of, and collaboration around, data. The data analytics process becomes much more agile, so data teams are able to work in sync, and efficiency and effectiveness increase.

Businesses are looking for ways to transform their data digitally and get more value from their massive pools of information. And as data is almost everywhere, and distributed more than ever before, businesses want to extract key insights from it quickly and easily. This is exactly where Pentaho 8.3 comes into the picture: it accelerates business innovation and agility. Plenty of new and exciting time-saving enhancements have been made to make Pentaho a better and more advanced solution for enterprises, helping companies automate their data management techniques.
Key enhancements in Pentaho 8.3

Each enhancement included with Pentaho 8.3 in some way helps organizations modernize their data management practices and remove friction between data and insight, including:

Improved drag-and-drop pipeline capabilities

These help users access and blend hard-to-reach data, providing deeper insight and greater analytic value from enterprise integration. Amazon Web Services (AWS) developers can also now ingest and process streaming data through a visual environment rather than having to write code that must blend with other data.

Enhanced data visibility

Improved integration with Hitachi Content Platform (HCP), a distributed object storage system designed to support large repositories of content, makes it easier for users to read, write, and update HCP customer metadata. They can also more easily query objects with their system metadata, making data more searchable, governable, and applicable for analytics. It's also now easier to trace real-time data from popular protocols like AMQP, JMS, Kafka, and MQTT. Users can also view lineage data from Pentaho within IBM's Information Governance Catalog (IGC) to reduce the effort required to govern data.

Expanded multi-cloud support

AWS Redshift bulk load capabilities now automate the process of loading Redshift. This removes the repetitive SQL scripting needed to complete bulk loads and allows users to boost productivity and apply policies and schedules for data onboarding. Also included in this category are updates that address Snowflake connectivity. As one of the leading destinations for cloud warehousing, Snowflake's primary hiccup comes when an analytics project wants to include data from other sources. Pentaho 8.3 allows blending, enrichment, and analysis of Snowflake data in conjunction with other sources, including other cloud sources such as the existing Pentaho-supported cloud platforms AWS and Google Cloud.

Pentaho and DataOps

Each of the new capabilities and enhancements in this release of Pentaho is important for current users, but the larger benefit to businesses is its association with DataOps. Emerging as a collaborative data management discipline focused on better communication, integration, and automation of how data flows across an organization, DataOps is a practice embraced more and more often, yet not without its own setbacks. Pentaho 8.3 helps businesses make DataOps a reality without facing the common challenges often associated with data management. According to John Magee, Vice President, Portfolio Marketing at Hitachi, "The new Pentaho 8.3 release provides key capabilities for customers looking to begin their DataOps journey."

Beyond feature enhancements

Looking past the improvements and new features of the latest Pentaho release, it's a good product because of the support it offers its community of users. From forums to webinars to 24/7 support, it not only caters to huge volumes of data on a practical level, but it doesn't ignore the actual people using the product.

Author Bio

James Warner is a Business Intelligence Analyst with excellent knowledge of Hadoop/big data analysis at NexSoftSys.com.

New MapR Platform 6.0 powers DataOps
DevOps might be the key to your Big Data project success
Bridging the gap between data science and DevOps with DataOps

Hot Chips 31: IBM Power10, AMD’s AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more

Fatema Patrawala
23 Aug 2019
7 min read
Hot Chips 31, the premier event at which the biggest semiconductor vendors highlight their latest architectural developments, is held in August every year. This year the event took place at the Memorial Auditorium on the Stanford University campus in California, from August 18-20, 2019. Since its inception it has been co-sponsored by IEEE and ACM SIGARCH.

Hot Chips is amazing for the level of depth it provides on the latest technology and upcoming releases in the IoT, firmware, and hardware space. This year the list of presentations was almost overwhelming, with a wide range of technical disclosures on the latest chip logic innovations. Almost all the major chip vendors and IP licensees involved in semiconductor logic design took part: Intel, AMD, NVIDIA, Arm, Xilinx, and IBM were on the list, and companies like Google, Microsoft, Facebook, and Amazon also participated. There were notable absences, such as Apple, who despite being on the committee last presented at the conference in 1994.

Day 1 kicked off with tutorials and sponsor demos. On the cloud side, Amazon AWS covered the evolution of hypervisors and the AWS infrastructure. Microsoft described its acceleration strategy with FPGAs and ASICs, with details on Project Brainwave and Project Zipline. Google covered the architecture of Google Cloud with the TPU v3 chip. A three-part RISC-V tutorial rounded off the afternoon, so the day was well spent, with insights into the latest cloud infrastructure and processor architectures. The detailed talks were presented on Day 2 and Day 3; below are some of the important highlights of the event.

IBM's POWER10 processor expected by 2021

IBM creates families of processors to address different segments, with different models for tasks like scale-up, scale-out, and now NVLink deployments. The company is adding new custom models that use new acceleration and memory devices, and that was the focus of this year's talk at Hot Chips. IBM announced that POWER10, expected in 2021, will come with these new enhancements, and also disclosed POWER10 core counts and process technology. IBM additionally spoke about focusing on diverse memory and accelerator solutions to differentiate its product stack with heterogeneous systems.

IBM aims to reduce the number of PHYs on its chips, so it now has PCIe Gen 4 PHYs while the rest of the SERDES run with the company's own interfaces. This creates a flexible interface that can support many types of accelerators and protocols, like GPUs, ASICs, CAPI, NVLink, and OpenCAPI.

AMD wants to become a significant player in artificial intelligence

AMD does not have an artificial intelligence-focused chip. However, AMD CEO Lisa Su stated in a keynote address at Hot Chips 31 that the company is working toward becoming a more significant player in artificial intelligence. Su said the company has adopted a CPU/GPU/interconnect strategy to tap the artificial intelligence and HPC opportunity, and that AMD will use all of its technology in the Frontier supercomputer. The company plans to fully optimize its EPYC CPU and Radeon Instinct GPU for supercomputing. It will further enhance the system's performance with its Infinity Fabric and unlock performance with its ROCm (Radeon Open Compute) software tools. Unlike Intel and NVIDIA, AMD does not have a dedicated artificial intelligence chip or application-specific accelerators.
Despite this, Su noted, "We'll absolutely see AMD be a large player in AI." AMD is considering whether to build a dedicated AI chip; this decision will depend on how artificial intelligence evolves. Su explained that companies have been improving their CPU (central processing unit) performance by leveraging various elements: process technology, die size, TDP (thermal design power), power management, microarchitecture, and compilers. Process technology is the biggest contributor, as it boosts performance by 40%. Increasing die size also boosts performance in the double digits, but it is not cost-effective. AMD used microarchitecture to boost EPYC Rome server CPU IPC (instructions per cycle) by 15% in single-threaded and 23% in multi-threaded workloads, well above the industry-average IPC improvement of around 5%-8%.

Intel's Nervana NNP-T and Lakefield 3D Foveros hybrid processors

Intel revealed fine-grained details about its much-anticipated Spring Crest deep learning accelerators at Hot Chips 31. The Nervana Neural Network Processor for Training (NNP-T) comes with 24 processing cores and a new take on data movement that's powered by 32 GB of HBM2 memory. Its 27 billion transistors are spread across a spacious 688 mm² die. The NNP-T also incorporates leading-edge technology from Intel rival TSMC.

In another presentation, Intel talked about its Lakefield 3D Foveros hybrid processors, the first to come to market with Intel's new 3D chip-stacking technology. The current design consists of two dies. The lower die houses all of the typical southbridge features, like I/O connections, and is fabbed on the 22FFL process. The upper die is a 10nm CPU that features one large compute core and four smaller Atom-based 'efficiency' cores, similar to an Arm big.LITTLE processor. Intel calls this a "hybrid x86 architecture," and it could denote a fundamental shift in the company's strategy. Finally, the company stacks DRAM atop the 3D processor in a PoP (package-on-package) implementation.

Cerebras' largest chip ever, with 1.2 trillion transistors

California artificial intelligence startup Cerebras Systems introduced its Cerebras Wafer Scale Engine (WSE), the largest chip ever built for neural network processing. Sean Lie, co-founder and Chief Hardware Architect at Cerebras, presented the gigantic chip at Hot Chips 31. The 16nm WSE is a 46,225 mm² silicon chip, slightly larger than a 9.7-inch iPad. It features 1.2 trillion transistors, 400,000 AI-optimized cores, 18 gigabytes of on-chip memory, 9 petabytes/s of memory bandwidth, and 100 petabytes/s of fabric bandwidth. It is 56.7 times larger than the largest NVIDIA graphics processing unit, which accommodates 21.1 billion transistors on an 815 mm² silicon base.

NVIDIA's multi-chip solution for a deep neural network accelerator

NVIDIA, which announced it was designing a test multi-chip solution for DNN computations at a VLSI conference last year, explained the chip technology at Hot Chips 31 this year. It is currently a test chip for multi-chip DL inference. It is designed for CNNs and has a RISC-V chip controller. It has 36 small chips, with 8 vector MACs per PE, 12 PEs per chip, and 6x6 chips per package.

A few other notable talks at Hot Chips 31

Microsoft unveiled its new HoloLens 2.0 silicon, which has a holographic processor and custom silicon.
The application processor runs the app, and the HPU modifies the rendered image and sends it to the display.

Facebook presented details on Zion, its next-generation in-memory unified training platform. Zion, which is designed for Facebook's sparse workloads, has a unified BFLOAT16 format with CPUs and accelerators.

Huawei spoke about its Da Vinci architecture: a single Ascend 310 can deliver 16 TeraOPS of 8-bit integer performance, support real-time analytics across 16 channels of HD video, and consume less than 8 W of power.

Xilinx Versal AI engine: Xilinx, the manufacturer of FPGAs, announced its new Versal AI engine last year as a way of moving FPGAs into the AI domain. This year at Hot Chips the company expanded on the technology and more.

Ayar Labs, an optical chip-making startup, showcased results of its work with DARPA (the U.S. Department of Defense's Defense Advanced Research Projects Agency) and Intel on an FPGA chiplet integration platform.

The final talk on Day 3 was a presentation by Habana, which discussed an innovative approach to scaling AI training systems with its GAUDI AI processor.

AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google
Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications

7 crucial DevOps metrics that you need to track

Guest Contributor
20 Aug 2019
9 min read
DevOps has taken the IT world by storm and is increasingly becoming the de facto industry standard for software development. DevOps principles have the potential to create a competitive differentiation, allowing teams to deliver high-quality software at a faster rate while adequately meeting customer requirements. DevOps prevents the development and operations teams from functioning in two distinct silos and ensures seamless collaboration between all the stakeholders. Collection of feedback and its subsequent incorporation plays a critical role in DevOps implementation and in the formulation of a CI/CD pipeline.

Successful transition to DevOps is a journey, not a destination. Setting up benchmarks, measuring yourself against them, and tracking your progress is important for determining which stage of DevOps architecture you are in and for ensuring a smooth journey onward. Feedback loops are a critical enabler for delivery of the application, and metrics help transform qualitative feedback into quantitative form. Collecting feedback from the stakeholders is only half the work; gathering insights and communicating them through the DevOps team to keep the CI/CD pipeline on track is equally important. This is where metrics come in. DevOps metrics are the tools your team needs to ensure that feedback is collected and communicated to the right people in order to improve the existing processes and functions in a unit.

Here are 7 DevOps metrics that your team needs to track for a successful DevOps transformation:

1. Deployment frequency

Quick iteration and continuous delivery are key measurements of DevOps success: how long the software takes to deploy, and how often deployment takes place. Keeping track of the frequency with which new code is deployed helps keep track of the development process. The ultimate goal is to release smaller deployments of code as quickly as possible. Smaller deployments are easier to test and release, and they improve the discoverability of bugs in the code, allowing for faster and more timely resolution. Determining the frequency of deployments needs to be done separately for development, testing, staging, and production environments, and keeping track of the frequency of deployment to QA or pre-production environments is an important consideration in its own right; a sketch of such a per-environment count appears below.

A high deployment frequency is a tell-tale sign that things are going smoothly in the production cycle. Since smaller deployments are easier to test and release, higher deployment frequency corresponds directly with higher efficiency. No wonder tech giants such as Amazon and Netflix deploy code thousands of times a day. Amazon has built a deployment engine called Apollo that has performed more than 50 million deployments in 12 months, which is more than one deployment per second. This results in reduced outages and decreased downtimes.
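As a rough illustration of the metric, here is a sketch of counting deployments per environment over a time window; the deployment records and the seven-day window are hypothetical stand-ins for whatever your CI/CD system actually logs.

```python
# Hedged sketch: deployment frequency per environment over a window.
from collections import Counter
from datetime import date

# (environment, deployment date) records; hypothetical sample data.
deploys = [
    ("production", date(2019, 8, 1)), ("staging", date(2019, 8, 1)),
    ("production", date(2019, 8, 2)), ("qa", date(2019, 8, 2)),
    ("production", date(2019, 8, 5)), ("staging", date(2019, 8, 5)),
]

window_days = 7  # hypothetical reporting window
for env, count in Counter(env for env, _ in deploys).items():
    print(f"{env}: {count} deployments, {count / window_days:.2f} per day")
```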
2. Failed deployments

Any deployment that causes issues or outages for your users is a failed deployment. Tracking the percentage of deployments that result in negative feedback from the user's end is an important DevOps metric. DevOps teams are expected to build quality into the product right from the beginning of the project, and the responsibility for ensuring the quality of the software is disseminated through the entire team, not just centered on QA. While in an ideal scenario there would be no failed deployments, that's often not the case. Tracking the percentage of deployments that result in negative sentiment in the project helps you ascertain the ground-level realities and makes you better prepared for such occurrences in the future. Only if you know what is wrong can you formulate a plan to fix it.

While a failure rate of 0 is the magic number, less than 5% failed deployments is considered workable. If the metric consistently shows spikes of failed deployments above 10%, the existing process needs to be broken down into smaller segments with mini-deployments. Fixing 5 issues in 100 deployments is any day easier than fixing 50 in 1000 within the same time frame.

3. Code committed

Code committed is a DevOps metric that tracks the number of commits the team makes to the software before it can be deployed into production. This serves as an indicator of development velocity as well as code quality. The number of code commits that a team makes has to be within the optimum range defined by the DevOps team. Too many commits may be indicative of low quality or a lack of direction in development. Similarly, if the commits are too few, it may be an indicator that the team is overtaxed and unproductive. Uncovering the reason behind variation in code committed is important for maintaining productivity and project velocity while also ensuring optimal satisfaction among team members.

4. Lead time

The software development cycle is a continuous process where new code is constantly developed and deployed to production. Lead time for changes is the time taken to go from code committed to code successfully running in production. It is an important indicator of the efficiency of the existing process and helps identify possible areas of improvement. The lead time and mean time to change (MTTC) give the DevOps team a better hold on the project: by measuring the amount of time passing between a change's inception and its actual production deployment, the team's ability to adapt as project requirements evolve can be computed.

5. Error rate

Errors in any software application are inevitable. A few occasional errors aren't a red flag, but keeping track of error rates and being on the lookout for unusual spikes is important for the health of your application. A significant rise in error rate is an indicator of inherent quality problems and ongoing performance-related issues. The errors that you encounter can be of two types: bugs, which are exceptions in the code discovered after deployment, and production issues, which relate to things like database connections and query timeouts.

The error rate is calculated as the fraction of transactions that result in an error during a particular time window. For a specified duration, if 20 out of 1000 transactions have errors, the error rate is 20/1000, or 2 percent. A few intermittent errors throughout the application life cycle are a normal occurrence, but any unusual spikes need to be watched for. The process needs to be analysed for bugs and production issues, and the exceptions that occur need to be handled concurrently. The sketch below shows this calculation.
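The calculation above fits in a few lines; the counts and the spike threshold below are hypothetical, and a real system would pull them from its monitoring data.

```python
# Hedged sketch of the error-rate calculation for one time window.
transactions = 1000  # hypothetical transaction count in the window
errors = 20          # hypothetical count of failed transactions

error_rate = errors / transactions * 100
print(f"Error rate: {error_rate:.1f}%")  # 20/1000 -> 2.0%

# Flag an unusual spike against a baseline (threshold is an assumption).
baseline_rate = 2.0
if error_rate > 2 * baseline_rate:
    print("Unusual spike: check for bugs and production issues")
```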
6. Mean time to detection

Issues happen in every project, but how fast you discover them is what matters. Having robust application monitoring and optimal coverage will help you find out about any issues as quickly as possible. The mean time to detection (MTTD) metric is the amount of time that passes between the beginning of an issue and the moment the issue is detected and some remedial action is taken. The time to fix the issue is not covered under MTTD. Ideally, DevOps teams should strive to keep the MTTD as low as possible (ideally close to zero), i.e., teams should be able to detect any issue as soon as it occurs. A proper protocol needs to be established, and communication channels need to be in place, to help the team discover errors quickly and respond rapidly with a correction.

7. Mean time to recovery

Time to restore service, or mean time to recovery (MTTR), is a critical metric for any project. It is the average time taken by the team to repair a failure in the system, comprising the time from failure detection until the project starts operating normally again. Recovery and resilience are key components that determine the market readiness of a project. MTTR is an important DevOps metric because it allows for tracking of complex issues and failures while judging the team's capability to handle change and bounce back. The ideal recovery time should be as low as possible, minimizing overall system downtime.

System downtimes and outages, though undesirable, are unavoidable. This runs especially true in the current development scenario, where companies are moving to the cloud. Designing for failure is a concept that needs to be ingrained right from the start. Even major applications like Facebook and WhatsApp, Twitter, Cloudflare, and Slack are not free of outages. What matters is that downtime is kept minimal, and mean time to recovery tells the DevOps team how long it takes to bring the system back on track.

Closing words

DevOps isn't just about tracking metrics; it is primarily about culture. Organizations that make the transition to DevOps place immense emphasis on one goal: rapid delivery of stable, high-quality software through automation and continuous delivery. Simply having a bunch of numbers in the form of DevOps metrics isn't going to get you across the line. You need a long-term vision combined with the valuable insights that the metrics provide. It is only by monitoring these over a period of time, and tracking your team's progress toward the goals you have set, that you can hope to reap the true benefits that DevOps offers.

Author Bio

Vinati Kamani writes about emerging technologies and their applications across various industries for Arkenea, a custom software development company and DevOps consulting company. When she's not at her desk penning down articles or reading up on recent trends, she can be found traveling to remote places and soaking up different cultural experiences.

DevOps engineering and full-stack development – 2 sides of the same agile coin
Introducing kdevops, modern devops framework for Linux kernel development
Why do IT teams need to transition from DevOps to DevSecOps?

Understanding security features in the Google Cloud Platform (GCP)

Vincy Davis
27 Jul 2019
10 min read
Google's long experience and success in protecting itself against cyberattacks plays to our advantage as customers of the Google Cloud Platform (GCP). From years of warding off security threats, Google is well aware of the security implications of the cloud model. Thus, it provides a well-secured structure for its operational activities, data centers, customer data, organizational structure, hiring process, and user support. Google uses a global-scale infrastructure to provide security for commercial services, such as Gmail, Google Search, and Google Photos, and enterprise services, such as GCP and G Suite.

This article is an excerpt from the book Google Cloud Platform for Architects, written by Vitthal Srinivasan, Janani Ravi, et al. In this book, you will learn about the Google Cloud Platform (GCP) and how to manage robust, highly available, and dynamic solutions to drive business objectives. This article gives an insight into the security features in the Google Cloud Platform, the tools that GCP provides for users' benefit, as well as some best practices and design choices for security.

Security features at Google and on the GCP

Let's start by discussing what we get directly by virtue of using the GCP. These are security protections that we would not be able to engineer for ourselves. Let's go through some of the many layers of security provided by the GCP.

Datacenter physical security: Only a small fraction of Google employees ever get to visit a GCP data center. Those data centers, the zones that we have been talking so much about, would probably seem straight out of a Bond film to those that did: security lasers, biometric detectors, alarms, cameras, and all of that cloak-and-dagger stuff.

Custom hardware and trusted booting: A specific form of security attack named the privileged access attack is on the rise. These attacks involve malicious code running from the least likely spots: the OS image, hypervisor, or boot loader. The only way to really protect against these is to design and build every single element in-house, and Google has done that, including the hardware, a firmware stack, curated OS images, and a hardened hypervisor. Google data centers are populated with thousands of servers connected to a local network. Google selects and validates building components from vendors and designs custom secure server boards and networking devices for server machines. Google puts cryptographic signatures on all low-level components, such as the BIOS, bootloader, kernel, and base OS, to validate that the correct software stack is booting up.

Data disposal: The detritus of the persistent disks and other storage devices that we use is also cleaned thoroughly by Google. This data destruction process involves several steps: an authorized individual wipes the disk clean using a logical wipe; then a different authorized individual inspects the wiped disk. The results of the erasure are stored and logged, and the erased drive is released into inventory for reuse. If a disk is damaged and cannot be wiped clean, it is stored securely and not reused, and such devices are periodically destroyed. Each facility where data disposal takes place is audited once a week.

Data encryption: By default, GCP always encrypts all customer data at rest as well as in motion. This encryption is automatic and requires no action on the user's part. Persistent disks, for instance, are already encrypted using AES-256, and the keys themselves are encrypted with master keys.
All of this key management and rotation is managed by Google. In addition to this default encryption, a couple of other encryption options exist as well; more on those in the section on encryption options below.

Secure service deployment: Google's security documentation will often refer to secure service deployment, and it is important to understand that in this context the term service has a specific meaning: a service is the application binary that a developer writes and runs on the infrastructure. Secure service deployment is based on three attributes:

Identity: Each service running on Google infrastructure has an associated service account identity. A service has to submit cryptographic credentials provided to it to prove its identity when making or receiving remote procedure calls (RPCs) to other services. Clients use these identities to make sure that they are connecting to the intended server, and the server uses them to restrict access to data and methods to specific clients.

Integrity: Google uses a cryptographic authentication and authorization technique at the application layer to provide strong access control at the abstraction level for interservice communication. Google has ingress and egress filtering facilities at various points in its network to avoid IP spoofing. With this approach, Google is able to maximize its network's performance and availability.

Isolation: Google has an effective sandbox technique to isolate services running on the same machine. This includes Linux user separation, language- and kernel-based sandboxes, and hardware virtualization. Google also secures the operation of sensitive services, such as cluster orchestration in GKE, on exclusively dedicated machines.

Secure interservice communication: The term interservice communication refers to GCP's resources and services talking to each other. The owners of the services have individual whitelists of services which can access them, and the owner of a service can also allow specific IAM identities to connect to the services they manage. Apart from that, the Google engineers on the backend who are responsible for the smooth, downtime-free running of the services are also given special identities to access the services (to manage them, not to modify user-input data). Google encrypts interservice communication by encapsulating application layer protocols in RPC mechanisms, isolating the application layer and removing any dependency on network security.

Using Google Front End: Whenever we want to expose a service using GCP, the TLS certificate management, service registration, and DNS are managed by Google itself. This facility is called the Google Front End (GFE) service. For example, a simple file of Python code can be hosted as an application on App Engine, and that application will have its own IP, DNS name, and so on.

In-built DDoS protections: Distributed Denial-of-Service attacks are very well studied, and precautions against such attacks are already built into many GCP services, notably in networking and load balancing. Load balancers can actually be thought of as hardened bastion hosts that serve as lightning rods to attract attacks, and so are suitably hardened by Google to ensure that they can withstand those attacks. HTTP(S) and SSL proxy load balancers, in particular, can protect your backend instances from several threats, including SYN floods, port exhaustion, and IP fragment floods.
Insider risk and intrusion detection: Google constantly monitors the activities of all devices in its infrastructure for suspicious activity. To secure employees' accounts, Google has replaced phishable OTP second factors with U2F-compatible security keys. Google also monitors the devices its employees use to operate the infrastructure, and periodically checks that the OS images on those devices carry current security patches. Google additionally has a mechanism for granting access privileges, named application-level access management control, which exposes internal applications only to specific users coming from correctly managed devices and from expected network and geographic locations. Google has a very strict and secure way of managing its administrative access privileges, with a rigorous process for monitoring employee activities and predefined limits on administrative access for employees.

Google-provided tools and options for security

As we've just seen, the platform already does a lot for us, but we could still leave ourselves vulnerable to attack if we don't design our cloud infrastructure carefully. To begin with, let's understand a few facilities provided by the platform for our benefit.

Data encryption options: We have already discussed Google's default encryption; this encrypts pretty much everything and requires no user action. For instance, all persistent disks are encrypted with AES-256 keys that are automatically created, rotated, and themselves encrypted by Google. In addition to default encryption, a couple of other encryption options are available to users:

Customer-managed encryption keys (CMEK) using Cloud KMS: This option involves the user taking control of the keys that are used, while still storing those keys securely on the GCP using the Key Management Service. The user is now responsible for managing the keys, that is, for creating, rotating, and destroying them. The only GCP service that currently supports CMEK is BigQuery; support in Cloud Storage is in the beta stage.

Customer-supplied encryption keys (CSEK): Here, the user specifies which keys are to be used, but those keys never leave the user's premises. To be precise, the keys are sent to Google as part of API service calls, but Google only uses these keys in memory and never persists them on the cloud. CSEK is supported by two important GCP services: data in Cloud Storage buckets, as well as persistent disks on GCE VMs. There is an important caveat here, though: if you lose your key after having encrypted some GCP data with it, you are entirely out of luck. There will be no way for Google to recover that data.
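As a sketch of how CSEK looks in practice, the following assumes the google-cloud-storage Python client; the bucket and object names are hypothetical, and the key shown is generated on the spot purely for illustration.

```python
# Hedged CSEK sketch with the google-cloud-storage client.
import os
from google.cloud import storage

# A CSEK is a 32-byte AES-256 key that you generate and keep yourself;
# Google uses it in memory but never persists it (lose it, lose the data).
csek = os.urandom(32)

client = storage.Client()
bucket = client.bucket("my-example-bucket")  # hypothetical bucket name

# Supplying the key on the blob handle encrypts the upload with it.
blob = bucket.blob("secrets.txt", encryption_key=csek)
blob.upload_from_string("sensitive payload")

# Reading the object back requires presenting the same key.
data = bucket.blob("secrets.txt", encryption_key=csek).download_as_bytes()
print(data)
```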
Cloud security scanner: Cloud security scanner is a GCP-provided security scanner for common vulnerabilities. It has long been available for App Engine applications, but is now also available in alpha for Compute Engine VMs. This handy utility will automatically scan for and detect the following four common vulnerabilities:

Cross-site scripting (XSS)
Flash injection
Mixed content (HTTP in HTTPS)
The use of outdated/insecure libraries

Like most security scanners, it automatically crawls an application, follows links, and tries out as many different types of user input and event handlers as possible.

Some security best practices

Here is a list of design choices that you could exercise to cope with security threats such as DDoS attacks:

Use hardened bastion hosts such as load balancers (particularly HTTP(S) and SSL proxy load balancers).
Make good use of the firewall rules in your VPC network. Ensure that incoming traffic from unknown sources, or on unknown ports or protocols, is not allowed through.
Use managed services such as Dataflow and Cloud Functions wherever possible; these are serverless and so have smaller attack surfaces.
If your application lends itself to App Engine, it has several security benefits over GCE or GKE, and it can also autoscale quickly, damping the impact of a DDoS attack.
If you are using GCE VMs, consider the use of API rate limits to ensure that the number of requests to a given VM does not grow in an uncontrolled fashion.
Use NAT gateways and avoid public IPs wherever possible to ensure network isolation.
Use Google CDN as a way to offload incoming requests for static content. In the event of a storm of incoming user requests, the CDN servers will be on the edge of the network, and traffic into the core infrastructure will be reduced.

Summary

In this article, you learned that the GCP benefits from Google's long experience countering cyber-threats and security attacks targeted at its other services, such as Google Search, YouTube, and Gmail. There are several built-in security features that already protect users of the GCP from threats that might not even be recognized as existing in an on-premise world. In addition to these built-in protections, all GCP users have various tools at their disposal to scan for security threats and to protect their data. To learn more about the Google Cloud Platform (GCP), head over to the book Google Cloud Platform for Architects.

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Build Hadoop clusters using Google Cloud Platform [Tutorial]
Machine learning APIs for Google Cloud Platform

Why do IT teams need to transition from DevOps to DevSecOps?

Guest Contributor
13 Jul 2019
8 min read
Does your team perform security testing during development? If not, why not? Cybercrime is on the rise, and formjacking, ransomware, and IoT attacks have increased alarmingly in the last year. This makes security a priority at every stage of development. In this kind of ominous environment, development teams around the globe should take a more proactive approach to threat detection. This can be done in a number of ways. There are some basic techniques that development teams can use to protect their development environments. But ultimately, what is needed is the integration of threat identification and management into the development process itself. Integrated processes like this are referred to as DevSecOps, and in this guide, we'll take you through some of the advantages of transitioning to DevSecOps.

Protect Your Development Environment

First, though, let's look at some basic measures that can help to protect your development environment. For both individuals and enterprises, online privacy is perhaps the most valuable currency of all. Proxy servers, Tor, and virtual private networks (VPNs) have slowly crept into the lexicon of internet users as cost-effective privacy tools to consider if you want to avoid drawing the attention of hackers. But what about enterprises? Should they use the same tools? They would prefer to avoid hackers as well. The answer here is more complicated.

Encryption and authentication should be addressed early in the development process, especially given the common practice of using open source libraries for app coding. The advanced security protocols that power many popular consumer VPN services make them a good first step toward protecting code and any proprietary technology. Additional controls like using two-factor authentication and limiting who has access will further protect the development environment and procedures. Beyond these basic measures, though, it is also worth looking in detail at your entire development process and integrating security management at every stage. This is sometimes referred to as integrating DevOps and DevSecOps.

DevOps vs. DevSecOps: What's the Difference?

DevOps and DevSecOps are not separate entities, but different facets of the development process. Traditionally, DevOps teams work to integrate software development and implementation in order to facilitate the rapid delivery of new business applications. Since this process omits security testing and solutions, many security flaws and vulnerabilities aren't addressed early enough in the development process. The newer approach, DevSecOps, addresses this omission by automating security-related tasks and integrating controls and functions like composition analysis and configuration management into the development process.

Previously, DevSec focused only on automating security code testing, but it is gradually transitioning to incorporate an operations-centric approach. This helps in reconciling two environments that are opposite by nature: DevOps is forward-looking because it is oriented toward rapid deployment, while development security looks backward to analyze and predict future issues. By prioritizing security analysis and automation, teams can still improve delivery speed without the need to retroactively find and deal with threats.

Best Practices: How DevSecOps Should Work

The goal of current DevSecOps best practices is to implement a shift toward real-time threat detection rather than historical analysis.
This enables more efficient application development that recognizes and deals with issues as they happen, rather than waiting until there's a problem. This can be done by developing a more effective strategy while adopting DevSecOps practices. When all areas of concern are addressed, the results are:

Automatic code procurement: Automatic code procurement eliminates the problem of human error and of incorporating weak or flawed coding. This benefits developers by allowing vulnerabilities and flaws to be discovered and corrected earlier in the process.

Uninterrupted security deployment: Uninterrupted security deployment comes through the use of automation tools that work in real time, creating closed-loop testing and reporting and real-time threat resolution.

Leveraged security resources: Security resources are leveraged through automation. Automated DevSecOps typically addresses areas related to threat assessment, event monitoring, and code security. This frees your IT or security team to focus on other areas, like threat remediation and elimination.

There are five areas that need to be addressed in order for DevSecOps to be effective:

Code analysis: By delivering code in smaller modules, teams are able to identify and address vulnerabilities faster.

Management changes: Adapting the protocol for changes in management or admins allows users to improve on changes faster, while enabling security teams to analyze their impact in real time. This eliminates the problem of getting calls about problems with system access after the application is deployed.

Compliance: Addressing compliance with the Payment Card Industry Data Security Standard (PCI DSS) and the new General Data Protection Regulation (GDPR) earlier helps prevent audits and heavy fines. It also ensures that you have all of your reporting ready to go in the event of a compliance audit.

Automating threat and vulnerability detection: Threats evolve and proliferate fast, so security should be agile enough to deal with emerging threats each time coding is updated or altered. Automating threat detection earlier in the development process improves response times considerably.

Training programs: Comprehensive security response begins with proper IT security training. Developers should craft a training protocol that ensures all personnel who are responsible for security are up to date and on the same page. Organizations should bring security and IT staff into the process sooner. That means advising current team members of current procedures and ensuring that all new staff are thoroughly trained.

Finding the Right Tools for DevSecOps Success

Does a doctor operate with a chainsaw? Hopefully not. Likewise, all of the above points are nearly impossible to achieve without the right tools to get the job done with precision. What should your DevSec team keep in their toolbox?

Automation tools: Automation tools provide scripted remediation recommendations for detected security threats. One such tool is Automate DAST, which scans new or modified code against security vulnerabilities listed on the Open Web Application Security Project's (OWASP) list of the most common flaws, such as SQL injection errors. These are flaws you might have missed during static analysis of your application code.

Attack modeling tools: Attack modeling tools create models of possible attack matrices and map their implications.
There are plenty of attack modeling tools available, but a good one for identifying cloud vulnerabilities is Infection Monkey, which simulates attacks against the parts of your infrastructure that run on major public cloud hosts like Google Cloud, AWS, and Azure, as well as most cloud storage providers like Dropbox and pCloud.

Visualization tools: Visualization tools are used for evolving, identifying, and sharing findings with the operations team. An example of this type of tool is PortVis, developed by a team led by professor Kwan-Liu Ma at the University of California, Davis. PortVis is designed to display activity by host or port in three different modes: a grid visualization, in which all network activity is displayed on a single grid; a volume visualization, which extends the grid to a three-dimensional volume; and a port visualization, which allows devs to visualize the activity on specific ports over time. Using this tool, different types of attack can be easily distinguished from each other.

Alerting tools: Alerting tools prioritize threats and send alerts so that the most hazardous vulnerabilities can be addressed immediately. WhiteSource Bolt, for instance, is a useful tool of this type, designed to improve the security of open source components. It does this by checking these components against known security threats and providing security alerts to devs. These alerts also auto-generate issues within GitHub, where devs can see details such as references for the CVE, its CVSS rating, and a suggested fix; there is even an option to assign the vulnerability to another team member using the milestones feature.

The Bottom Line

Combining DevOps and DevSec is not a meshing of two separate disciplines, but rather the natural transition of development to a more comprehensive approach: one that takes security into account earlier in the process, and does it in a more meaningful way. This saves a lot of time and hassle by addressing enterprise security requirements before deployment rather than probing for flaws later. The sooner your team hops on board with DevSecOps, the better.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation, as well as an active GitHub contributor.

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Does it make sense to talk about DevOps engineers or DevOps tools?
How Visual Studio Code can help bridge the gap between full-stack development and DevOps

How do AWS developers manage Web apps?

Guest Contributor
04 Jul 2019
6 min read
When it comes to hosting and building a website in the cloud, Amazon Web Services (AWS) is one of the most preferred choices for developers. According to Canalys, AWS dominates the global public cloud market, holding around one-third of the total market share. AWS offers numerous services that can be used for compute power, content delivery, database storage, and more. Developers can use it to build a high-availability production website, whether it is a WordPress site, Node.js web app, LAMP stack web app, Drupal website, or a Python web app.

AWS developers need to set up, maintain, and evolve the cloud infrastructure of web apps. Aside from this, they are also responsible for applying best practices related to security and scalability. Having said that, let's take a deep dive into how AWS developers manage a web application.

Deploying a website or web app with Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) offers developers secure and scalable computing capacity in the cloud. For hosting a website or web app, developers use virtual app servers called instances. With Amazon EC2 instances, developers gain complete control over computing resources. They can scale capacity on the basis of requirements and pay only for the resources they actually use. There are tools like AWS Lambda, Elastic Beanstalk, and Lightsail that allow the isolation of web apps from common failure cases. Amazon EC2 supports a number of mainstream operating systems, including Amazon Linux, Windows Server 2012, CentOS 6.5, and Debian 7.4. Here is how developers get started with Amazon EC2 for deploying a website or web app:

Set up an AWS account and log into it.
Select "Launch Instance" from the Amazon EC2 Dashboard. This enables the creation of a VM.
Configure the instance by choosing an Amazon Machine Image (AMI), instance type, and security group.
Click on Launch.
Choose 'Create a new key pair' and name it. A key pair file is downloaded automatically and needs to be saved; it will be required for logging in to the instance.
Click on 'Launch Instances' to finish the setup process.

Once the instance is ready, it can be used to build high-availability websites or web apps.

Using Amazon S3 for cloud storage

Amazon Simple Storage Service, or Amazon S3, is a secure and highly scalable cloud storage solution that makes web-scale computing seamless for developers. It is used for the objects that are required to build a website, such as HTML pages, images, CSS files, videos, and JavaScript. S3 comes with a simple interface so that developers can store and fetch large amounts of data from anywhere on the internet, at any time. The storage infrastructure provided with Amazon S3 is known for scalability, reliability, and speed; Amazon itself uses it to host its own websites.

Within S3, developers create buckets for data storage. Each bucket can hold a large number of objects, and each object can contain up to 5 TB of data. Objects are stored and fetched from a bucket using a unique key. A bucket serves several purposes: it organizes the S3 namespace, identifies the account responsible for storage and data transfer, and acts as the unit of aggregation for usage reporting. The sketch below shows these basics in code.
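As a sketch of those basics, the following uses boto3, AWS's Python SDK; the bucket name, region, and file are hypothetical.

```python
# Hedged sketch: creating a bucket and storing/fetching an object by key.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # hypothetical region

# A bucket organizes the namespace; its name must be globally unique.
s3.create_bucket(Bucket="my-example-site-assets")

# Each object is stored and fetched using a unique key within the bucket.
s3.upload_file(
    Filename="index.html",               # hypothetical local file
    Bucket="my-example-site-assets",
    Key="index.html",
    ExtraArgs={"ContentType": "text/html"},
)

# Fetch the object back by its key.
obj = s3.get_object(Bucket="my-example-site-assets", Key="index.html")
print(obj["Body"].read()[:80])
```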
Elastic load balancing

Load balancing is a critical part of a website or web app, distributing and balancing the traffic load across multiple targets. AWS provides Elastic Load Balancing, which allows developers to distribute traffic across a number of targets, such as Amazon EC2 instances, IP addresses, Lambda functions and containers. With Elastic Load Balancing, developers can ensure that their projects run efficiently even under heavy traffic.

There are three kinds of load balancers available with AWS Elastic Load Balancing: the Application Load Balancer, the Network Load Balancer and the Classic Load Balancer. The Application Load Balancer is an ideal option for HTTP and HTTPS traffic; it provides advanced routing of requests for the delivery of microservices and containers. For balancing Transmission Control Protocol (TCP), Transport Layer Security (TLS) and User Datagram Protocol (UDP) traffic, developers opt for the Network Load Balancer. The Classic Load Balancer, meanwhile, is best suited for typical load distribution across EC2 instances, and works for both requests and connections.

Debugging and troubleshooting

A web app or website can include numerous features and components, and a few of them might face issues or not work as expected because of coding errors or other bugs. In such cases, AWS developers follow a number of processes and techniques and consult the resources that help them debug a recipe or troubleshoot the issues:

- See the service issue at Common Debugging and Troubleshooting Issues.
- Check the Debugging Recipes for issues related to recipes.
- Check the AWS OpsWorks Stack Forum, where other developers discuss their issues; the AWS team also monitors the forum and helps in finding solutions.
- Get in touch with the AWS OpsWorks Stacks support team to solve the issue.

Traffic monitoring and analysis

Analyzing and monitoring traffic and network logs helps in understanding how websites and web apps perform on the internet. AWS provides several tools for traffic monitoring, including Real-Time Web Analytics with Kinesis Data Analytics, Amazon Kinesis, Amazon Pinpoint, Amazon Athena, and more.

For tracking website metrics, developers use Real-Time Web Analytics with Kinesis Data Analytics. This tool provides insights into visitor counts, page views, time spent by visitors, actions taken by visitors, channels driving the traffic and more. Additionally, it comes with an optional dashboard for monitoring web servers, where developers can see custom server metrics covering performance, average network packet processing, errors, and so on.

Wrapping up

Managing a web application is a tedious task and requires quality tools and technologies. Amazon Web Services makes things easier for web developers, providing them with all the tools required to handle the app.

Author Bio

Vaibhav Shah is the CEO of Techuz, a mobile app and web development company in India and the USA. He is a technology maven, a visionary who likes to explore innovative technologies and has empowered 100+ businesses with sophisticated web solutions.

Vulnerabilities in the Application and Transport Layer of the TCP/IP stack

Melisha Dsouza
07 Feb 2019
15 min read
The Transport Layer is responsible for end-to-end data communication and acts as an interface for network applications to access the network. This layer also takes care of error checking, flow control, and verification in the TCP/IP protocol suite. The Application Layer handles the details of a particular application and performs three main tasks: formatting data, presenting data and transporting data. In this tutorial, we will explore the different types of vulnerabilities in the Application and Transport Layer.

This article is an excerpt from a book written by Glen D. Singh and Rishi Latchmepersad titled CompTIA Network+ Certification Guide. This book covers all CompTIA certification exam topics in an easy-to-understand manner, along with plenty of self-assessment scenarios for better preparation. It will not only prepare you conceptually but will also help you pass the N10-007 exam.

Vulnerabilities in the Application Layer

The following are some of the application layer protocols which we should pay close attention to in our network:

- File Transfer Protocol (FTP)
- Telnet
- Secure Shell (SSH)
- Simple Mail Transfer Protocol (SMTP)
- Domain Name System (DNS)
- Dynamic Host Configuration Protocol (DHCP)
- Hypertext Transfer Protocol (HTTP)

Each of these protocols was designed to provide the function it was built for, with a lesser focus on security. Malicious users and hackers are able to compromise both the applications that utilize these protocols and the network protocols themselves.

Cross-Site Scripting (XSS)

XSS focuses on exploiting a weakness in websites. In an XSS attack, the malicious user or hacker injects client-side scripts into a web page/site that a potential victim would trust. The scripts can be JavaScript, VBScript, ActiveX, HTML, or even Flash (ActiveX), and will be executed on the victim's system. These scripts are masked as legitimate requests between the web server and the client's browser. XSS focuses on the following:

- Redirecting a victim to a malicious website/server
- Using hidden iframes and pop-up messages on the victim's browser
- Data manipulation
- Data theft
- Session hijacking

Let's take a deeper look at what happens in an XSS attack (a code sketch follows these steps):

1. An attacker injects malicious code into a web page/site that a potential victim trusts. A trusted site can be a favorite shopping website, social media platform, or school or university web portal.
2. A potential victim visits the trusted site.
3. The malicious code interacts with the victim's web browser and executes. The web browser is usually unable to determine whether the scripts are malicious, and therefore still executes the commands.
4. The malicious scripts can be used to obtain cookie information, tokens, session information, and so on about other websites the browser has stored information about.
5. The acquired details (cookies, tokens, session IDs, and so on) are sent back to the hacker, who in turn uses them to log in to the sites that the victim's browser has visited.
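To make the injection step concrete, here is a minimal sketch of an injection point, written as a hypothetical Flask route (our illustration; the book excerpt itself contains no code). A query parameter pasted into the page source executes in the victim's browser, while the auto-escaped variant renders the same payload as inert text.

```python
from flask import Flask, request, render_template_string

app = Flask(__name__)

# Vulnerable: user input is pasted into the template source itself,
# so a submitted <script> payload is served back and executed by
# the victim's browser.
@app.route("/search")
def search():
    q = request.args.get("q", "")
    return render_template_string("<h1>Results for " + q + "</h1>")

# Safer: passing the value as a template variable lets Jinja2
# auto-escape it, so the payload renders as harmless text.
@app.route("/safe-search")
def safe_search():
    q = request.args.get("q", "")
    return render_template_string("<h1>Results for {{ q }}</h1>", q=q)
```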
There are two types of XSS attacks:

- Stored XSS (persistent)
- Reflected XSS (non-persistent)

Stored XSS (persistent): In this attack, the attacker injects a malicious script directly into the web application or website. The script is stored permanently on the page, so when a potential victim visits the compromised page, the victim's web browser parses all the code of the web page/application without complaint. Afterward, the script executes in the background without the victim's knowledge. At this point, the script is able to retrieve session cookies, passwords, and any other sensitive information stored in the user's web browser, and sends the loot back to the attacker in the background.

Reflected XSS (non-persistent): In this attack, the attacker usually sends an email with a malicious link to the victim. When the victim clicks the link, it is opened in the victim's web browser (reflected), and at this point the malicious script is invoked and begins to retrieve the loot (passwords, credit card numbers, and so on) stored in the victim's web browser.

SQL injection (SQLi)

SQLi attacks focus on passing SQL commands to an SQL database through an application that does not validate user input. The attacker attempts to gain unauthorized access to a database either by creating or retrieving information stored in the database application. Nowadays, attackers are not only interested in gaining access, but also in retrieving (stealing) information and selling it to others for financial gain. SQLi can be used to perform:

- Authentication bypass: allows the attacker to log in to a system without a valid user credential
- Information disclosure: retrieves confidential information from the database
- Compromised data integrity: the attacker is able to manipulate information stored in the database

Lightweight Directory Access Protocol (LDAP) injection

LDAP is designed to query and update directory services, such as a database like Microsoft Active Directory. LDAP uses both TCP and UDP port 389, while LDAP over SSL/TLS (LDAPS) uses port 636. In an LDAP injection attack, the attacker exploits vulnerabilities within a web application that constructs LDAP messages or statements based on user input. If the receiving application does not validate or sanitize the user input, the attacker's chances of manipulating LDAP messages increase.

Cross-Site Request Forgery (CSRF)

This attack is somewhat similar to the XSS attack mentioned previously. In a CSRF attack, the victim's machine/browser is forced to execute malicious actions against a website with which the victim has been authenticated (a website that trusts the actions of the user). To better understand how this attack works, let's visualize a potential victim, Bob. On a regular day, Bob visits some of his favorite websites, such as various blogs and social media platforms, where he usually logs in automatically to view the content. Once Bob logs in to a particular website, the website automatically trusts the transactions between itself and the authenticated user, Bob. One day, he receives an email from the attacker, but unfortunately Bob does not realize it is a phishing/spam message and clicks the link within the body of the message. His web browser opens the malicious URL in a new tab. The attack causes Bob's machine/web browser to invoke malicious actions on the trusted website; the website sees all the requests as originating from Bob, and the return traffic, such as the loot (passwords, credit card details, user account details, and so on), is returned to the attacker.

Session hijacking

When a user visits a website, a cookie is stored in the user's web browser. Cookies are used to track the user's preferences and manage the session while the user is on the site. While the user is on the website, a session ID is also set within the cookie, and this information may be persistent, which allows a user to close the web browser and then later revisit the same website and automatically log in. However, the web developer can set how long the information persists, whether it expires after an hour or a week, depending on the developer's preference. In a session hijacking attack, the attacker can attempt to obtain the session ID while it is being exchanged between the potential victim and the website. The attacker can then use the victim's session ID on the website, which gives the attacker access to the victim's session, and in turn to the victim's user account and so on.
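The excerpt stops at the attack itself; as one hedged mitigation sketch (our addition, using standard Flask configuration options), you can limit how a session cookie can be read or transmitted, shrinking the window in which a hijacked session ID stays useful:

```python
from datetime import timedelta
from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_HTTPONLY=True,   # JavaScript cannot read the session cookie (blunts XSS-based theft)
    SESSION_COOKIE_SECURE=True,     # cookie is only sent over HTTPS, so it cannot be sniffed in transit
    SESSION_COOKIE_SAMESITE="Lax",  # cookie is withheld from most cross-site requests (helps against CSRF)
    PERMANENT_SESSION_LIFETIME=timedelta(minutes=30),  # shorten how long a stolen session ID remains valid
)
```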
Cookie poisoning

A cookie stores information about a user's preferences while he/she is visiting a website. Cookie poisoning is when an attacker modifies a victim's cookie, which is then used to gain confidential information about the victim, such as his/her identity.

DNS Distributed Denial-of-Service (DDoS)

A DDoS attack can occur against a DNS server. Attackers sometimes target Internet Service Provider (ISP) networks, public and private Domain Name System (DNS) servers, and so on to prevent other legitimate users from accessing the service. If a DNS server is unable to handle the volume of requests coming into it, its performance will gradually degrade until it either stops responding or crashes. This results in a Denial-of-Service (DoS) attack.

Registrar hijacking

Whenever a person wants to purchase a domain, the person has to complete the registration process at a domain registrar. Attackers try to compromise users' accounts on various domain registrar websites in the hope of taking control of the victims' domain names. With a domain name, multiple DNS records can be created or modified to direct incoming requests to a specific device. If a hacker modifies the A record on a domain to redirect all traffic to a compromised or malicious server, anyone who visits the compromised domain will be redirected to the malicious website.

Cache poisoning

Whenever a user visits a website, the process of resolving a hostname to an IP address occurs in the background, and the resolved data is stored within the local system in a cache area. An attacker can compromise this temporary storage area and manipulate any further resolution done by the local system.

Typosquatting

McAfee describes typosquatting, also known as URL hijacking, as a type of cyber attack in which an attacker registers a domain name very close to a company's legitimate domain name, in the hope of tricking victims into visiting the fake website to either steal their personal information or deliver a malicious payload to their systems. Let's take a look at a simple example of this type of attack. In this scenario, we have a user, Bob, who frequently uses the Google search engine to find his way around the internet. Since Bob uses the www.google.com website often, he sets it as the home page on his web browser, so each time he opens the application or clicks the Home icon, www.google.com is loaded onto the screen. One day Bob decides to use another computer, and the first thing he does is set his favorite search engine URL as his home page. However, he types www.gooogle.com without realizing it. Whenever Bob visits this website, it looks like the real one, and since the domain resolves to a website, this is an example of how typosquatting works. It's always recommended to use a trusted search engine to find the URL of the website you want to visit. Trusted internet search engine companies focus on blacklisting malicious and fake URLs in their search results to help protect internet users such as yourself.
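Defenders sometimes hunt for lookalike registrations with simple string-similarity checks. The following toy sketch is our illustration, not from the book; the 0.8 threshold and the candidate list are arbitrary assumptions:

```python
from difflib import SequenceMatcher

LEGITIMATE = "google.com"

def similarity(domain: str) -> float:
    # Ratio of matching characters between the two strings (0.0 - 1.0).
    return SequenceMatcher(None, LEGITIMATE, domain).ratio()

# Candidate domains pulled from logs, new registrations, and so on.
for candidate in ["gooogle.com", "g00gle.com", "example.org"]:
    score = similarity(candidate)
    if candidate != LEGITIMATE and score >= 0.8:
        print(f"{candidate} looks suspiciously close to {LEGITIMATE} ({score:.2f})")
```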
Vulnerabilities at the Transport Layer

In this section, we are going to discuss various weaknesses that exist within the underlying protocols of the Transport Layer.

Fingerprinting

In the cybersecurity world, fingerprinting is used to discover open ports and the services running on them on a target system. From a hacker's point of view, fingerprinting is done before the exploitation phase: the more information a hacker can obtain about a target, the more the hacker can narrow the attack scope and use specific tools to increase the chances of successfully compromising the target machine. This technique is also used by system/network administrators, network security engineers, and cybersecurity professionals alike. Imagine you're a network administrator assigned to secure a server; apart from applying system hardening techniques such as patching and configuring access controls, you would also need to check for any open ports that are not being used.

Let's take a look at a more practical approach to fingerprinting in the computing world. We have a target machine, 10.10.10.100, on our network. As a hacker or a network security professional, we would like to know which TCP and UDP ports are open, the services that use the open ports, and the service daemons running on the target system. In the following screenshot, we've used nmap to help us discover the information we are seeking. The Nmap tool delivers specially crafted probes to a target machine.

Enumeration

In a cyber attack, the hacker uses enumeration techniques to extract information about the target system or network. This information helps the attacker identify system attack points. The following network services and ports stand out for a hacker:

- Port 53: DNS zone transfer and DNS enumeration
- Port 135: Microsoft RPC Endpoint Mapper
- Port 25: Simple Mail Transfer Protocol (SMTP)

DNS enumeration

DNS enumeration is where an attacker attempts to determine whether there are other servers or devices that carry the domain name of an organization. Let's take a look at how DNS enumeration works. Imagine we are trying to find all the publicly available servers Google has on the internet. Using the host utility in Linux and specifying a hostname, host www.google.com, we can see that the IP address 172.217.6.196 has been resolved successfully, which means there's an active device with the hostname www.google.com. Furthermore, if we attempt to resolve the hostname gmail.google.com, another IP address is presented, but when we attempt to resolve mx.google.com, no IP address is given. This is an indication that there isn't an active device with the mx.google.com hostname.

DNS zone transfer

DNS zone transfer allows the copying of the master file from one DNS server to another. There are times when administrators do not configure the security settings on their DNS server properly, which allows an attacker to retrieve the master file containing a list of the names and addresses of a corporate network.
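The host-based probing described above is easy to script. Here is a minimal sketch (our illustration, with an assumed wordlist) that uses Python's standard resolver to test which hostnames under a domain actually resolve:

```python
import socket

# Candidate hostnames to test under the target domain; in a real
# engagement this list would come from a wordlist.
CANDIDATES = ["www", "mail", "vpn", "gmail", "mx"]
DOMAIN = "google.com"

for name in CANDIDATES:
    fqdn = f"{name}.{DOMAIN}"
    try:
        # A successful lookup means a DNS record (and likely a host) exists.
        address = socket.gethostbyname(fqdn)
        print(f"{fqdn} resolves to {address}")
    except socket.gaierror:
        # No record: the name is probably not in use.
        print(f"{fqdn} does not resolve")
```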
Microsoft RPC Endpoint Mapper

Not too long ago, CVE-2015-2370 was recorded in the CVE database. This vulnerability took advantage of the authentication implementation of the Remote Procedure Call (RPC) protocol in various versions of the Microsoft Windows platform, on both desktop and server operating systems. A successful exploit would allow an attacker to gain local privileges on a vulnerable system.

SMTP

SMTP is used by mail servers, along with the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP). SMTP is used for sending mail, while POP and IMAP are used to retrieve mail from an email server. SMTP supports various commands, such as EXPN and VRFY. The EXPN command can be used to verify whether a particular mailbox exists on a local system, while the VRFY command can be used to validate a username on a mail server. An attacker can establish a connection between the attacker's machine and the mail server on port 25. Once a successful connection has been established, the server sends a banner back to the attacker's machine displaying the server name and the status of the port (open). Once this occurs, the attacker can use the VRFY command followed by a username to check for a valid user on the mail system, using the syntax VRFY bob.

SYN flooding

One of the protocols that exists at the Transport Layer is TCP, which is used to establish a connection-oriented session between two devices that want to communicate or exchange data. Let's recall how TCP works with two devices, Bob and Alice, that want to exchange messages. Bob sends a TCP Synchronization (SYN) packet to Alice, Alice responds with a TCP Synchronization/Acknowledgment (SYN/ACK) packet, and finally Bob replies with a TCP Acknowledgment (ACK) packet. This is the TCP three-way handshake mechanism.

For every TCP SYN packet received on a device, a TCP SYN/ACK packet must be sent back in response. One type of attack that takes advantage of this design in TCP is known as a SYN flood attack. In a SYN flood attack, the attacker sends a continuous stream of TCP SYN packets to a target system, causing the target machine to process each individual packet and respond accordingly. Eventually, with the high influx of TCP SYN packets, the target system becomes overwhelmed and stops responding to any requests.

TCP reassembly and sequencing

During a TCP transmission of datagrams between two devices, each packet is tagged with a sequence number by the sender. This sequence number is used to reassemble the packets back into data. During the transmission, each packet may take a different path to the destination, which may cause packets to arrive out of order rather than in the order the sender put them on the wire. An attacker can attempt to guess the sequence numbers of packets and inject malicious packets into the network destined for the target. When the target receives the packets, the receiver assumes they came from the real sender, as they contain the appropriate sequence numbers and a spoofed IP address.

Summary

In this article, we have explored the different types of vulnerabilities that exist at the Application and Transport Layer of the TCP/IP protocol suite. To understand other networking concepts like network architecture, security, network monitoring, and troubleshooting, and to ace the CompTIA certification exam, check out our book, CompTIA Network+ Certification Guide.

AWS announces more flexibility its Certification Exams, drops its exam prerequisites
Top 10 IT certifications for cloud and networking professionals in 2018
What matters on an engineering resume? Hacker Rank report says skills, not certifications

Cloud pricing comparison: AWS vs Azure

Guest Contributor
02 Feb 2019
11 min read
On average, businesses waste about 35% of their cloud spend due to inefficient use of their cloud resources. This amounts to more than $10 billion in wasted cloud spend across just the top three public cloud providers. Although the unmatched compute power, data storage options and efficient content delivery systems of the leading public cloud providers can support incredible business growth, this can cause some hubris. It's easy to lose control of costs when your cloud provider appears to be keeping things running smoothly. To stop this from happening, it's essential to adopt a new approach to how we manage - and optimize - cloud spend. It's not an easy thing to do, as pricing structures can be complicated. However, in this post, we'll look at how both AWS and Azure structure their pricing, and how you can best determine what's right for you.

Different types of cloud pricing schemes

Broadly, cloud pricing models range across a few approaches. The first is a pure subscription-based model, where services are charged based on a cloud catalog and users are billed per month, per mailbox, or per app license ordered. In this model, subscribers are billed for all the resources to which they are subscribed, irrespective of whether they are used or not.

The second option is pay-as-you-go, where subscribers begin with a billing amount set at zero, which then grows with the services and resources they use. Amazon uses the pay-as-you-go model, charging a predetermined price for every hour of virtual machine resources used. Such a model is also used by the other leading cloud service providers, including Microsoft Azure and Google Cloud Platform.

Another variant of cloud pricing is an enterprise billing service, based on the number of active users assigned to a particular cloud subscription. Microsoft Azure is a leading cloud provider that offers this kind of subscription for its customers.

Most cloud providers offer varying combinations of the above three models, with attractive discount options built in. These include the following.

What free tier services do AWS and Azure offer?

Both AWS and Azure offer a 'free tier' service for new and initial subscribers, letting potential long-term subscribers test out the service before committing for the long run. Amazon allows subscribers to try out most of AWS' services free for a year, including RDS, S3, EC2, Elastic Block Store (EBS), Elastic Load Balancing and other AWS services. For example, you can utilize EC2 and EBS on the free tier to host a website for a whole year; EBS pricing will be zero unless your usage exceeds the limit of 30 GB of storage. The free tier for EC2 includes 730 hours of a t2.micro instance.

Azure offers similar deals for new users. Azure services like App Service, Virtual Machines, Azure SQL Database, Blob Storage and Azure Kubernetes Service (AKS) are free for an initial period of 12 months. Additionally, Azure provides the 'Functions' compute service (for serverless) with 1 million free requests every month throughout the subscription, which is useful if you want to give serverless a try.

AWS and Azure's pay-as-you-go, on-demand pricing models

Under the pay-as-you-go model, AWS and Azure offer subscribers the option to simply settle their bills at the end of every month without any upfront investment. This is a good option if you want to avoid a long-term, binding contract. Most resources are available on demand and charged per hour, with costs calculated on the number of hours the resource was used.
For data storage and data transfer, rates are generally calculated per gigabyte. Subscribers are notified 30 days in advance of any changes in the pay-as-you-go rates, as well as when new services are periodically added to the platform.

Reserve-and-pay-less pricing model

In addition to the on-demand pricing model, Amazon AWS has an alternate scheme called Reserved Instances (RIs) that allows the subscriber to reserve capacity for specific products. RIs offer discounted hourly rates and capacity reservation for the EC2 and RDS services. A subscriber can reserve a resource and save up to 75% of total billing costs in the long run; these discounted rates are automatically applied to the subscriber's AWS bills. Subscribers have the option to reserve instances for either a one-year or a three-year term.

Microsoft Azure offers to help subscribers save up to 72% of their billing costs compared to its pay-as-you-go model when they sign up for one- to three-year terms for Windows and Linux virtual machines (VMs). Microsoft also allows for added flexibility: if your business needs change, you can cancel your Azure RI subscription at any time and return the remaining unused RI to Microsoft for an early termination fee.

Use-more-and-pay-less pricing model

In addition to the above payment options, AWS offers one more. For data transfer and data storage services, AWS gives discounts based on the subscriber's usage. These volume-based discounts help subscribers realize critical savings as their usage increases, benefiting from economies of scale so their businesses can grow while costs are kept relatively under control. AWS also lets subscribers sign up for services that suit a growing business. As an example, AWS' storage services offer subscribers opportunities to lower pricing based on how frequently data is accessed and the performance needed in the retrieval process. For EC2, you can get a discount of up to 10% if you reserve more. The image below demonstrates the pricing of the AWS S3 bucket based on usage.

Comparing cloud pricing on Azure and AWS

The major cloud service providers - Amazon Web Services, Azure, Google Cloud Platform and IBM - continually decrease the prices of cloud instances, provide new and innovative discount options, add instance types, and drop billing increments; in some cases, notably Microsoft Azure, per-second billing has also been introduced. However, as costs decrease, the complexity increases, and it is paramount for subscribers to understand and efficiently navigate this complexity. We take a crack at it here.

Reserved Instance pricing

Given the availability of Reserved Instances on Azure, AWS and GCP have also introduced publicly available discounts, some reaching up to 75%, in exchange for signing up to use the services of the particular cloud service provider for a one- to three-year period. We've briefly covered this in the section above. Before signing up, however, subscribers need to understand how much usage they are committing to and how much usage to leave as an 'on-demand' option. To do this, subscribers need to consider several different factors (a breakeven sketch follows this list):

- Historical usage - by region, instance type, and so on
- Steady-state vs. part-time usage
- An estimate of usage growth or decline
- The probability of switching cloud service providers
- The possibility of choosing alternative computing models like serverless, containers, etc.
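To see how these factors interact, here is a rough breakeven sketch. It is our illustration, not from the article: the hourly and annual figures are sample rates taken from the comparison tables below, so substitute current prices for your own region and instance type before drawing any conclusions.

```python
# Breakeven between on-demand and a 1-year reserved instance.
ON_DEMAND_HOURLY = 0.100   # assumed on-demand $/hour (sample rate)
RI_ANNUAL_UPFRONT = 508.0  # assumed 1-year reserved-instance cost (sample rate)

def annual_on_demand_cost(hours_per_week: float) -> float:
    return hours_per_week * 52 * ON_DEMAND_HOURLY

# Steady-state, always-on workloads (168 h/week) favor the reservation;
# part-time workloads stay cheaper on demand.
for hours in (20, 60, 100, 168):
    od = annual_on_demand_cost(hours)
    better = "reserved" if od > RI_ANNUAL_UPFRONT else "on-demand"
    print(f"{hours:3d} h/week: on-demand ${od:8.2f} vs RI ${RI_ANNUAL_UPFRONT:.2f} -> {better}")
```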
On-demand instance pricing

On-demand instances work best for applications with short-term, irregular workloads that are nevertheless critical enough that they cannot be interrupted. For instance, if you're running cron jobs on a periodic basis that last for a few hours, you can move them to on-demand instances. They are most useful during the testing or development phase of applications. On-demand instances are available in many varying levels of computing power, designed for different tasks executed within the cloud environment, and they carry no binding contractual commitments - they can be used as and when required. Generally, on-demand instances are among the most expensive purchasing options. Each on-demand instance is billed per instance hour, from the time it is launched until it is stopped or terminated; partial instance hours are rounded up to a full hour during billing.

The table below shows the on-demand price per hour for AWS and Azure cloud services and the hourly price for each GB of RAM:

VM Type                          AWS OD Hourly   Azure OD Hourly   AWS OD / GB RAM   Azure OD / GB RAM
Standard 2 vCPU w Local SSD      $0.133          $0.100            $0.018            $0.013
Standard 2 vCPU no local disk    $0.100          $0.100            $0.013            $0.013
Highmem 2 vCPU w Local SSD       $0.166          $0.133            $0.011            $0.008
Highmem 2 vCPU no local disk     $0.133          $0.133            $0.009            $0.008
Highcpu 2 vCPU w Local SSD       $0.105          $0.085            $0.028            $0.021
Highcpu 2 vCPU no local disk     $0.085          $0.085            $0.021            $0.021

The on-demand price of Azure instances is cheaper compared to AWS for certain VM types, and the price difference is most evident for instances with local SSDs.

Discounted cloud instance pricing

When it comes to discounted cloud pricing, it is important to remember that it comes with a lock-in period of one to three years. It therefore works best for organizations that are stable, have a good idea of their historical cloud usage, and can fairly accurately predict what cloud services they will require over the next 12-month period. In the table below, we compare the annual costs of AWS and Azure:

VM Type                          AWS 1Y RI Annual   Azure 1Y RI Annual   AWS 1Y RI / GB RAM   Azure 1Y RI / GB RAM
Standard 2 vCPU w Local SSD      $867               $508                 $116                 $64
Standard 2 vCPU no local disk    $622               $508                 $78                  $64
Highmem 2 vCPU w Local SSD       $946               $683                 $63                  $43
Highmem 2 vCPU no local disk     $850               $683                 $56                  $43
Highcpu 2 vCPU w Local SSD       $666               $543                 $178                 $136
Highcpu 2 vCPU no local disk     $543               $543                 $136                 $136

Azure's rates are clearly better than Amazon's, and by a good margin: Azure offers better discounted rates for Standard, Highmem and Highcpu compute instances.

Optimizing cloud pricing

Subscribers need to move beyond short-term, one-time fixes and make use of automation to continuously monitor their spend, raise alerts for over- or under-use of a service, and take automated action based on predetermined conditions. Here are some of the ways you can optimize your cloud spending.

Cloud pricing calculators

Cloud pricing tools enable you to list the different parameters of your AWS or Azure subscriptions and use them to calculate the approximate monthly cost that would likely be incurred; the AWS Simple Monthly Calculator is one example. You can try the official cloud pricing calculators from AWS and Azure, or a third-party pricing calculator. Calculators help you optimize your pricing based on your requirements.
For example, if you have a long-term requirement for running instances, and you're currently running them under on-demand pricing schemes, cloud calculators can offer insights into reserved-instance schemes and other ways to improve your cloud expenditure. For instance, this Azure calculator by NetApp offers more price-optimization options, including options to tier less frequently used data to storage objects like Azure Blob and to customize snapshot creation and storage efficiency. Zerto is another popular calculator for Azure and AWS with a simpler interface. Note, however, that the estimated cost is based on current pricing and is subject to change.

Price List API

Historically, narrowing down the final usage cost involved a considerable amount of manual rate checking: collecting price points, then checking and cross-referencing them by hand. In the case of AWS, the Price List API offers programmatic access, which is especially beneficial to designers, who can now query the AWS price list instead of searching manually through the web. To make matters easier, the queries can be constructed in simple code in any language. Azure offers a similar billing API for gaining insights into your Azure usage programmatically.

Summary

Understanding and optimizing cloud pricing is challenging with AWS and Azure, partly because they offer hundreds of features with different pricing options, and new features enter the pipeline every week. To untangle some of these complexities, we've covered some of the popular ways to tackle pricing in AWS and Azure. Here's a list of what we've covered:

- How cloud pricing works and the different pricing schemes in AWS and Azure
- A comparison of the different instance pricing options in AWS and Azure, including reserved instances, on-demand instances, and discounted instances
- Third-party tools, like calculators, for optimizing price
- The Price List APIs for AWS and Azure

If you have any thoughts to share, feel free to post them in the comments.

About the author

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint and Ixia. Gilad is a 3-time winner of international technical communication awards, including the STC Trans-European Merit Award and the STC Silicon Valley Award of Excellence. Over the past 7 years, Gilad has headed Agile SEO, which performs strategic search marketing for leading technology brands. Together with his team, Gilad has done market research, developer relations, and content strategy in 39 technology markets, lending him a broad perspective on trends, approaches, and ecosystems across the tech industry.

Cloud computing trends in 2019
The 10 best cloud and infrastructure conferences happening in 2019
Bo Weaver on Cloud security, skills gap, and software development in 2019

Key trends in software development in 2019: cloud native and the shrinking stack

Richard Gall
18 Dec 2018
8 min read
Bill Gates is quoted as saying that we tend to overestimate the pace of change over a period of two years, but underestimate change over a decade. It's an astute observation: much of what will matter in 2019 actually looks a lot like what we said would be important in development this year. But if you look back 10 years, the change in the types of applications and websites we build - as well as how we build them - is astonishing. The web as we understood it in 2008 is almost unrecognisable. Today, we are in the midst of the app and API economy. Notions of surfing the web sound almost as archaic as a dial-up tone. Similarly, the JavaScript framework boom now feels old hat - building for browsers just sounds weird...

So, as we move into 2019, progressive web apps, artificial intelligence, and native app development remain at the top of the development agenda. But this doesn't mean these changes are to be dismissed as empty hype. If anything, as adoption increases and new tools emerge, we will begin to see more radical shifts in ways of working. The cutting edge will need to sharpen itself elsewhere.

What will it mean to be a web developer in 2019?

These changes are forcing wider changes in the industry. Arguably, they're transforming what it means to be a web developer. As applications become increasingly lightweight (thanks to libraries and frameworks like React and Vue), and data becomes more intensive, thanks to the range of services upon which applications and websites depend, developers need to expand across the stack. You can see this in some of the latest Packt titles - in Modern JavaScript Web Development Cookbook, for example, you'll learn microservices and native app development - topics that have typically fallen outside the strict remit of web development.

The simplification of many aspects of development has, ironically, forced developers to look more closely at how these aspects fit together. As you move further into layers of abstraction, the way things interact and work alongside each other becomes vital. For the most part, it's no longer a case of writing the requisite code to make something run on the specific part of the application you're working on; it's about understanding how the various pieces - from the backend to the front end - fit together. This means that, in 2019, you need to dive deeper and get to know your software systems inside out. Get comfortable with the backend. Dive into cloud. Start playing with microservices. Rethink and revisit languages you thought you knew.

Get to know your infrastructure: tackling the challenges of API development

It might sound strange, but as the stack shrinks and the responsibilities of developers - web and otherwise - shift, understanding the architectural components within the software they're building is essential. You could blame some of this on DevOps - essentially, it has made developers responsible for how their code runs once it hits production. Because of this important change, the requisite skills and toolchain for the modern developer are also expanding.

There are a range of routes into software architecture, but exploring API design is a good place to begin. Hands-On RESTful API Design offers a practical way into the topic. While REST is the standard for API design, the diverse range of tools and approaches is making managing the client a potentially complex but interesting area.
GraphQL, a query language developed by Facebook, is said to have killed off REST (although we wouldn't be so hasty), while Redux and Relay, two libraries for managing data in React applications, have seen a lot of interest over the last 12 months as two key tools for working with APIs. Want to get started with GraphQL? Try Beginning GraphQL. Learn Redux with Learning Redux.

Microservices: take responsibility for your infrastructure

The reason we're seeing so many tools offering ways of managing APIs is that microservices are becoming the dominant architectural mode. This requires developer attention too. That's not to say that you need to implement microservices now (in fact, there are probably many reasons not to), but if you want to be building software in five years' time, getting to grips with the principles behind microservices and the tools that can help you use them will stand you in good stead.

Perhaps the central technology driving microservices is containers. You could run microservices in a virtual machine, but because VMs are harder to scale than containers, you probably wouldn't see the benefits you'd expect from a microservices architecture. This means getting to grips with core container technologies is vital. Docker is the obvious place to start. There are varying degrees to which developers need to understand it, but even if you don't think you'll be using it immediately, it gives you a solid real-world foundation in containers if you don't already have one. Watch and learn how to put Docker to work with the Hands-On Docker for Microservices video.

But beyond Docker, Kubernetes is the go-to tool that allows you to scale and orchestrate containers. This gives you control over how you scale application services in a way you probably couldn't have imagined a decade ago. Get a grounding in Kubernetes with Getting Started with Kubernetes - Third Edition, or follow a 7-day learning plan with Kubernetes in 7 Days. If you want to learn how Docker and Kubernetes come together as part of a fully integrated approach to development, check out Hands-On Microservices with Node.js.

It's time for developers to embrace cloud

It should come as no surprise that, if the general trend is towards full stack, where everything is everyone's problem, developers simply can't afford to ignore cloud. And why would you want to? The levels of abstraction it offers, and the various services and integrations that come with the leading cloud platforms, can make many elements of the development process much easier. Issues surrounding scale, hardware, setup and maintenance almost disappear when you use cloud. That's not to say that cloud platforms don't bring their own set of challenges, but they do allow you to focus on more interesting problems.

More importantly, they open up new opportunities. Serverless becomes a possibility - allowing you to scale incredibly quickly by running everything on your cloud provider - but there are other advantages too. Want to get started with serverless? Check out some of these titles:

- JavaScript Cloud Native Development Cookbook
- Hands-On Serverless Architecture with AWS Lambda [Video]
- Serverless Computing with Azure [Video]

For example, when you use cloud you can bring advanced features like artificial intelligence into your applications. AWS has a whole suite of machine learning tools - AWS Lex can help you build conversational interfaces, while AWS Polly turns text into speech.
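As a taste of how little code such a service needs, here is a hedged sketch of calling Amazon Polly through boto3. The voice, region, and file handling are illustrative assumptions, not recommendations from the article:

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Ask Polly to turn a short string into MP3 audio.
response = polly.synthesize_speech(
    Text="Your build has finished.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The audio arrives as a streaming body; save it to disk.
with open("notification.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```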
Similarly, Azure Cognitive Services has a diverse range of features for vision, speech, language, and search. What cloud brings you, as a developer, is a way of increasing the complexity of applications and processes while maintaining agility. Adding features and optimizations might previously have felt sluggish - maybe even impossible. But by leveraging AWS and Azure (among others), you can do much more than you previously realised.

Back to basics: new languages and fresh approaches

With all this ostensible complexity in contemporary software development, you'd be forgiven for thinking that languages simply don't matter. That's obviously nonsense. There's an argument that gaining a deeper understanding of how languages work, what they offer, and where they may be weak can make you a much more accomplished developer. Be prepared is sage advice for a world where everything is unpredictable - both in the real world and inside our software systems. So, you have two options - and both are smart: either go back to a language you know and explore a new paradigm, or learn a new language from scratch.

Learn a new language:

- Kotlin Quick Start Guide
- Hands-On Go Programming
- Mastering Go
- Learning TypeScript 2.x - Second Edition

Explore a new programming paradigm:

- Functional Programming in Go [Video]
- Mastering Functional Programming
- Hands-On Functional Programming in RUST
- Hands-On Object-Oriented Programming with Kotlin

2019: the same, but different, basically...

It's not what you should be saying if you work for a tech publisher, but I'll be honest: software development in 2019 will look a lot like it did in 2018. But that doesn't mean you have time to be complacent. In just a matter of years, much of what feels new or 'emerging' today will be the norm. You don't have to look hard to see the set of skills many full-stack developer job postings are asking for - the demands are so diverse that adaptability is clearly immensely valuable, both for your immediate projects and your future career prospects. So, as 2019 begins, commit to developing yourself and sharpening your skill set.

Is middleware dead? Cloud is the prime suspect!

Prasad Ramesh
17 Nov 2018
4 min read
The cloud is now a ubiquitous term, covering everything from storing photos to remotely using machines for complex AI tasks. But has it killed on-premises middleware setups and changed the way businesses manage the services used by their employees? Is middleware dead?

Middleware is the bridge that connects an operating system to the different applications in a distributed system. Essentially, it is a transition layer of software that enables communication between the OS and applications, acting as a pipe for data to flow from one application to another. If the communication between applications in a network is taken care of by this software, developers can focus on the applications themselves; that is why middleware came into the picture. Middleware is used in enterprise networks.

Is middleware still relevant?

Middleware was a necessity for an IT business before cloud was a thing. But as cloud adoption has become mainstream, offering scalability and elasticity, middleware has become less important in modern software infrastructures. Middleware in on-premises setups was used for tasks such as remote calls, communication with other devices in the network, transaction management and database interactions. All of this is now handled behind the scenes by the cloud service provider. Middleware is largely in decline - with cloud being a key reason. Specifically, some of the reasons middleware has lost favor include:

- Middleware maintenance can be expensive and can quickly deplete resources, especially if you're using middleware on a large scale.
- Middleware can't scale as fast as cloud. If you need to scale, you'll need new hardware - this makes elasticity difficult, with sunk costs in your hardware resources.
- Sustaining large applications on middleware can become challenging over time.

How cloud solves middleware challenges

The reason cloud is killing off middleware is that it can simply do things better than traditional middleware. In just about every regard, from availability to flexibility to monitoring, using a cloud service makes life much easier. It makes life easier for developers and engineers, while potentially saving organizations time in resource management. If you're making decisions about software infrastructure, it probably doesn't feel like a tough decision. Even institutions like banks, which have traditionally resisted software innovation, are embracing cloud, with more than 80% of the world's largest banks and more than 85% of global banks opting for it, according to this Information Age article.

When is middleware the right option?

There might still be some life left in middleware yet. For smaller organizations, where an on-premises server setup will be used for a significant period of time - with cloud merely a possibility on the horizon - middleware still makes sense. Of course, no organization wants to think of itself as 'small' - even if you're just starting out, you probably have plans to scale, in which case cloud will give you the flexibility that middleware inhibits. While you shouldn't invest in cloud solutions if you don't need them, it's hard to think of a scenario where they wouldn't provide an advantage over middleware. From tiny startups that need accessible and efficient hosting services, to huge organizations whose scale is simply too big to handle alone, cloud is the best option in a massive range of use cases.

Is middleware really dead?

So yes, middleware is dead for most practical use cases.
Most companies go with the cloud, given its advantages and flexibility. And with emerging options like multi-cloud, which lets you use different cloud services for different areas, there is even more flexibility in the cloud.

Think Silicon open sources GLOVE: An OpenGL ES over Vulkan middleware
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code

What is distributed computing and what's driving its adoption?

Melisha Dsouza
07 Nov 2018
8 min read
Distributed computing is having a real impact on the way companies look at the cloud. The "Most Promising Jobs 2018" report published by LinkedIn pointed out that distributed and cloud computing rank among the top 10 most in-demand skills.

What are the problems with centralized computing systems?

Distributed computing solves many of the challenges that centralized computing systems pose today. These centralized systems - like IBM mainframes - have been around for decades, but they're beginning to lose favor. This is because centralized computing is ineffective and expensive in the context of increasing data and workloads. When you have a single central computer controlling a massive number of computations at the same time, it puts a massive strain on the system - even one that's particularly powerful. Centralized systems simply aren't capable of processing huge volumes of transactional data and supporting tons of concurrent online users. There's also a big issue with reliability: if your centralized server fails, all data could be permanently lost if you have no disaster recovery strategy. Fortunately, distributed computing offers solutions to many of these issues.

How does distributed computing work?

Distributed computing comprises a group of systems located at different places, all connected over a network, that work on a single problem or a common goal. Each of these systems is autonomous, programmable, asynchronous and failure-prone. These systems provide a better price/performance ratio compared to a centralized system, because it's more economical to add microprocessors than mainframes to your network, and together they offer more computational power than a centralized (mainframe) computing system.

Distributed computing and agility

Another major plus of distributed computing systems is that they provide much greater agility than centralized systems. Without centralization, organizations can add and change software and computational power according to the demands and needs of the business. With the reduction in the price of computing power and storage, thanks to the rise of public cloud services like AWS, organizations all over the world have begun using distributed systems and service-oriented architectures, like microservices.

Distributed computing in action: Google search

A perfect example of distributed computing in action is Google search. When a user submits a query, Google uses data from a number of different servers to deliver results, based on things like location, past searches, semantic keywords - and much, much more. These servers are located all around the world and are able to provide the search result in seconds, or at times milliseconds.

How cloud is driving the adoption of distributed computing

Central to this adoption is the cloud. Today, cloud is mainstream and opens up the possibility of distributed systems to organizations in a number of different ways. Arguably, you're not really seeing the full potential of cloud until you've moved to a distributed system. Let's take a look at the different ways cloud services are helping companies feel confident enough to successfully leverage distributed computing.

Infrastructure as a Service (IaaS)

IaaS makes distributed systems accessible for many organizations by allowing them to host their infrastructure on either a private or public cloud.
Essentially, IaaS gives an organization control over the operating system and platform that form the foundation of its software infrastructure, while giving an external cloud provider control over the servers and virtualization technologies that make it possible to deploy that infrastructure. In the context of a distributed system, this means organizations have less to worry about. As you can imagine, without IaaS, the process of developing and deploying a distributed system becomes much more complex and costly.

Platform as a Service: custom software on another platform

If IaaS effectively splits responsibilities between the organization and the cloud provider (the 'service'), Platform as a Service (PaaS) 'outsources' even more to the cloud provider. Essentially, an organization simply has to handle its applications and data, leaving every other aspect of the infrastructure to the platform. This brings many benefits and, in theory, should allow even relatively small engineering teams to take advantage of a distributed system. The underlying complexity and heavy lifting that a distributed system brings rest with the cloud provider, allowing an organization's engineers to focus on what matters most - shipping code. If you're thinking about speed and innovation, a PaaS opens that right up, provided you're happy to let your cloud provider manage the bulk of your infrastructure.

Software as a Service

SaaS solutions are perhaps the clearest example of a distributed system. Arguably, given the way we use SaaS today, it's easy to forget that it can be a part of a distributed system. The concept is simple: it's a complete software solution delivered to the end user. If you're trying to accomplish something particularly complex, something you simply do not have the resources to do yourself, a SaaS solution can be effective. Users don't need to worry about installing and maintaining software; they can simply access it via the internet.

The biggest advantages of adopting a distributed computing system

#1 Complete control of the system architecture

Distributed computing opens up your options when it comes to system architecture. Although you might rely on an external cloud service for some resources (like compute or storage), the architectural decisions are ultimately yours. This means you can make decisions based on exactly what your organization needs and how it works. In a sense, this is why distributed computing can bring you agility - not just agility in the strict sense, but in a broader sense of the word: it allows you to prioritize according to your own needs and demands.

#2 Improved "absolute performance" of the computing system

Tasks can be partitioned into subcomputations that run concurrently, which in turn speeds up total task completion. What's more, if a particular site is currently overloaded with jobs, some of them can be moved to lightly loaded sites. This technique of 'load sharing' can boost the performance of your system. Essentially, distributed systems minimize latency and response time while increasing throughput.
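The partitioning idea is easy to demonstrate on a single machine. The sketch below is our analogy, not a real distributed deployment: Python's process pool plays the role that separate nodes would play in a genuine distributed system.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(slice_of_data: range) -> int:
    # Stand-in for a real sub-computation on one partition of the data.
    return sum(n * n for n in slice_of_data)

if __name__ == "__main__":
    # Partition one large job into independent slices...
    slices = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    # ...and run the slices concurrently; a distributed system would
    # schedule them across separate nodes rather than local processes.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(analyze, slices))
    print(total)
```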
#3 A better price-to-performance ratio for the system

Distributed networks offer a better price/performance ratio compared to centralized mainframe computers. This is because decentralized and modular applications can share expensive peripherals, such as high-capacity file servers and high-resolution printers. Similarly, multiple components can be run on nodes with specialized processing, which further reduces the cost of maintaining multiple specialized processing systems.

#4 Disaster recovery

Distributed systems involve services communicating across different machines, which is where message integrity, confidentiality and authentication come into play. In such a case, distributed computing gives organizations the flexibility to deploy a four-part mechanism to keep operations secure:

- Encryption
- Authentication
- Authorization
- Auditing

Another aspect of disaster recovery is reliability. If computation and the associated data are effectively tied to a single machine, and that machine goes down, the entire service goes with it. With a distributed system, specific services might go down, but the whole thing should, in theory at least, stay standing.

#5 Resilience through replication

So, if specific services can go down within a distributed system, you still need to do something to increase resilience. You do this by replicating services across multiple nodes, minimizing potential points of failure. This is what's known as fault tolerance - it improves system reliability without affecting the system as a whole. It's also worth pointing out that the hardware on which a distributed system is built is replaceable - better than depending on centralized hardware which, if it fails, takes everything with it...

Another distributed computing example: SETI

A good example of a distributed system is SETI. SETI collects massive amounts of data from observatories around the world on activity in the sky, in a bid to identify possible signs of extraterrestrial life. This information is sliced into smaller pieces of data for easy analysis through distributed computing applications running as a screensaver on individual user PCs all around the world. While a PC is idle, the screensaver downloads a data slice from SETI and runs the analytics application; when the analysis is complete, the analyzed slice is uploaded back to SETI. Data analytics on this scale is possible only because of distributed computing.

So, although distributed computing has become a bit of a buzzword, the technology is gaining traction in the minds of customers and service providers. Beyond the hype and debate, these services will ultimately help companies to be more responsive to market conditions while restraining IT costs.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Oath's distributed network telemetry collector - 'Panoptes' - is now open source!
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
So, although distributed computing has become a bit of a buzzword, the technology is gaining traction in the minds of customers and service providers. Beyond the hype and debate, these services will ultimately help companies be more responsive to market conditions while restraining IT costs.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites

Oath's distributed network telemetry collector - 'Panoptes' is now Open source!

Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018

AIOps - Trick or Treat?

Bhagyashree R
31 Oct 2018
2 min read
AIOps, as the term suggests, is Artificial Intelligence for IT operations; the term was first introduced by Gartner last year. AIOps systems are used to enhance and automate a broad range of processes and tasks in IT operations with the help of big data analytics, machine learning, and other AI technologies.

Read also: What is AIOps and why is it going to be important?

In its report, Gartner estimated that, by 2020, approximately 50% of enterprises will be actively using AIOps platforms to provide insight into both business execution and IT operations. AIOps has seen fairly rapid growth since its introduction, with many big companies showing interest in AIOps systems. For instance, last month Atlassian acquired Opsgenie, an incident management platform that, along with planning and solving IT issues, helps you gain the insight needed to improve your operational efficiency. Companies are adopting AIOps because it eliminates tedious routine tasks, minimizes costly downtime, and helps them gain insights from data that's trapped in silos.

Where can AIOps go wrong?

AIOps alerts us about incidents beforehand, but in some situations it can also go wrong. If an event is unusual, the system will be less likely to predict it; events that have never occurred before are entirely beyond machine learning's ability to predict or analyze. Additionally, it can produce false negatives and false positives. False negatives can happen when the tests are not sensitive enough to detect possible issues; false positives can be the result of incorrect configuration. This essentially means there will always be a need for human operators to review these alerts and warnings. The sketch below shows, in miniature, why detection thresholds produce such errors.
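As a toy illustration of this trade-off - not how any particular AIOps product works - consider a simple threshold-based detector over a metric stream: set the threshold too low and routine jitter raises false positives; set it too high and real incidents slip through as false negatives. The metric values below are invented.

```python
# A toy threshold-based anomaly detector illustrating the false
# positive / false negative trade-off. The values are invented.
from statistics import mean, stdev

def detect_anomalies(values, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations away
    from the mean of the preceding `window` observations."""
    alerts = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            alerts.append((i, values[i]))
    return alerts

latency_ms = [20, 21, 19, 22, 20, 21, 95, 20, 22, 21]
print(detect_anomalies(latency_ms))  # flags the spike at index 6
# Lowering `threshold` catches subtler incidents but also flags
# routine jitter; raising it misses genuinely unusual events.
```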
Is AIOps a trick or treat?

AIOps is bringing new opportunities to the IT workforce, such as the AIOps Data Scientist, who will focus on solutions that correlate, consolidate, alert on, analyze, and provide awareness of events. Dell defines its Data Scientist role as someone who will "contribute to delivering transformative AIOps solutions on their SaaS platform". With AIOps, the IT workforce won't just disappear - it will evolve. AIOps is definitely a treat because it reduces manual work and provides an intuitive way of responding to incidents.

What is AIOps and why is it going to be important?

8 ways Artificial Intelligence can improve DevOps

Tech hype cycles: do they deserve your attention?

Edge computing - Trick or Treat?

Melisha Dsouza
31 Oct 2018
4 min read
According to IDC's Digital Universe update, the number of connected devices is projected to grow to 30 billion by 2020 and 80 billion by 2025. IDC also estimates that the amount of data created and copied annually will reach 180 zettabytes (180 trillion gigabytes) in 2025, up from less than 10 zettabytes in 2015.

Thomas Bittman, vice president and distinguished analyst at Gartner Research, predicted in a session on edge computing at the recent Gartner IT Infrastructure, Operations Management and Data Center Conference: "In the next few years, you will have edge strategies - you'll have to." This prediction was consistent with a real-time poll conducted at the conference, in which 25% of the audience said they already use edge computing technology and more than 50% planned to implement it within two years.

How does Edge computing work?

2018 marked the era of edge computing, driven by the growing number of smart devices and the massive amounts of data they generate. Edge computing allows data produced by internet of things (IoT) devices to be processed near the edge of a user's network. Instead of relying on the shared resources of large data centers in a cloud-based environment, edge computing places more demands on endpoint devices and intermediary devices like gateways, edge servers and other new computing elements to form a complete edge computing environment.

Some use cases of Edge computing

The complex architecture of today's devices demands a more comprehensive computing model to support its infrastructure. Edge computing caters to this need, reducing the latency, overhead and cost issues associated with centralized computing options like the cloud.

A good example of this is the launch of the world's first digital drilling vessel, the Noble Globetrotter I, by London-based offshore drilling company Noble Drilling. The vessel uses data to create virtual versions of some of the key equipment on board. If the drawworks on this digitized rig begin to fail prematurely, information based on a 'digital twin' of that asset will notify a team of experts onshore. The 'digital twin' is a virtual model of the device that lives inside the edge processor and can point out tiny performance discrepancies human operators might easily miss. Keeping watch on all pertinent data on a dashboard, the onshore team can collaborate with the rig's crew to plan repairs before a failure occurs. Noble believes this move towards edge computing will lead to more efficient, cost-effective offshore drilling: by predicting potential failures in advance, Noble can avert breakdowns and spare the expense of replacing or repairing equipment. A minimal sketch of this process-locally, alert-on-anomaly pattern follows below.
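Here is a minimal, hypothetical sketch of that pattern: readings are processed on the edge device itself, and only significant deviations from the digital twin's expected behaviour are sent onshore. All names, values and thresholds below are invented for illustration.

```python
# A minimal sketch of edge-side processing: compare live sensor
# readings against a 'digital twin' baseline locally, and send only
# anomalies upstream. All names and thresholds here are invented.

EXPECTED_VIBRATION_HZ = 12.0   # the twin's modeled healthy behaviour
TOLERANCE = 0.15               # fraction of deviation considered normal

def read_sensor():
    # Placeholder: a real gateway would poll the drawworks sensor.
    return 14.2

def send_alert_onshore(reading, deviation):
    # Placeholder: a real system would publish to an onshore dashboard.
    print(f"ALERT: vibration {reading} Hz deviates {deviation:.0%}")

def process_on_edge():
    reading = read_sensor()
    deviation = abs(reading - EXPECTED_VIBRATION_HZ) / EXPECTED_VIBRATION_HZ
    if deviation > TOLERANCE:
        # Only the anomaly crosses the network; routine readings never
        # leave the rig - the bandwidth and latency win of the edge.
        send_alert_onshore(reading, deviation)

process_on_edge()
```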
Another piece of news that caught our attention was Microsoft's $5 billion investment in IoT to empower the intelligent cloud and the intelligent edge. Azure Sphere is one of Microsoft's intelligent edge solutions to power and protect connected microcontroller unit (MCU)-powered devices. MCU-powered devices power everything from household stoves and refrigerators to industrial equipment, and considering that 9 billion MCU-powered devices ship every year, we need all the help we can get on the security front! That's the intelligent edge on the consumer end of the application spectrum.

2018 also saw progress in the development of edge computing tools and solutions across the spectrum, from hardware to software. Take, for instance, OpenStack Rocky, one of the most widely deployed open source cloud infrastructure software projects. It is designed to accommodate edge computing requirements by deploying containers directly on bare metal, and OpenStack Ironic brings improved management and automation capabilities to bare metal infrastructure: users can manage physical infrastructure just like they manage VMs, especially with the new Ironic features introduced in Rocky. Intel's OpenVINO computer vision toolkit is yet another example, helping developers streamline deep learning inference and deploy high-performance computer vision solutions across a wide range of use cases. Baidu, Inc. released the Kunlun AI chip, built to handle AI models both for edge computing on devices and in the cloud via data centers.

Edge computing - Trick or Treat?

However, edge computing does come with disadvantages, like the steep cost of deploying and managing an edge network, security concerns, and the operational complexity of coordinating many distributed nodes. The final verdict: edge computing is definitely a treat when complemented by embedded AI, enhancing networks to promote efficient analysis and improve the security of business systems.

Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018

Ubuntu 18.10 'Cosmic Cuttlefish' releases with a focus on AI development, multi-cloud and edge deployments, and much more!

Is your Enterprise Measuring the Right DevOps Metrics?

Guest Contributor
17 Sep 2018
6 min read
As of 2018, 17% of companies worldwide have fully adopted DevOps, while 14% are still in the consideration stage. Amazon, Netflix and Target are a few of the companies that have attained success with DevOps. Amazon's move to Amazon Web Services gave it the ability to scale server capacity up or down as needed, allowing its engineers to deploy their own code to the server whenever they wanted to. This resulted in continuous deployment, reducing both the duration and the number of outages experienced by companies using AWS. Netflix used DevOps to improve its cloud infrastructure and to ensure smooth streaming of videos online.

When you say "we have adopted DevOps in our enterprise", what do you really mean? It means you have adopted a software philosophy that integrates software development and operations, reducing the time to market for your end product. The questions which come next are:

How do you measure the true success of DevOps in your organization?

Have you been working on the right metrics all along?

Let's first talk about how DevOps is typically measured in organizations. It is all about uptime, transactions per second, bugs fixed, commits and other operational and productivity metrics. This is what most organizations tend to look at as metrics when you talk about DevOps. But are these the right DevOps metrics?

For a while, companies have been working with the set of metrics discussed above to determine the success of DevOps. However, these are not the right metrics and should not be considered in isolation. A metric is an indicator of the performance of DevOps, and no single indicator determines success. Your metrics might differ based on the data you collect. You will end up collecting large volumes of data; however, not every piece of data available can be converted into a metric. Here's how you can determine the metrics for your DevOps.

Avoid using too many metrics

You should use 10 metrics at the most - we suggest using fewer, in fact. The fewer the metrics used, the better your judgment will be. You should broaden your perspective when choosing the metrics. It is important to choose metrics that account for overall organizational health, not just operational and development data.

Metrics that connect with your organization

What is the ultimate aim of your organization? How would you determine that your organization is successful? The answers to these questions will help you determine the metrics. Most organizations determine their success based on customer experience and overall operational efficiency. You will need to choose metrics that help you measure these two values.

Tie the metrics to your goals

As a businessperson, you are more concerned with customer attrition, bad feedback and non-returning customers than with the lines of code that go into creating a successful software product. You will need to tie your DevOps success metrics to these goals. While you are concerned about the failure of your website or its downtime, the true concern is customers abandoning your website.

Causes that affect DevOps

While business metrics will help you measure success to a certain extent, certain things specifically affect the operations and development teams. You will need to examine these causes and get to the root of how they affect the DevOps teams, and what needs to be done to create a balance between the development and operational teams.

Next, let's talk about the actual DevOps metrics that you should take into consideration when deriving value for your organization and measuring success.

The Velocity

With most enterprise elements being automated, velocity is one of the most important metrics that will determine the success of your DevOps. The idea is to get updates out to users as quickly as possible without compromising on security or reliability: you stay competitive, offer new features and boost customer retention. The two variables that help measure this tangible metric are deployment frequency and deployment lead time. The former measures the frequency of releases, and the latter measures the speed at which the team commits code and pushes out the update. A rough sketch of computing both variables from a deployment log follows below.
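As a minimal, hypothetical illustration - not a prescribed tooling choice - here is how these two variables might be computed from a simple record of commit and deploy timestamps. The records below are invented.

```python
# A minimal sketch of computing the two velocity variables from a
# deployment log. The timestamps are invented for illustration.
from datetime import datetime, timedelta

# Each record: when the change was committed and when it was deployed.
deployments = [
    {"committed": datetime(2018, 9, 3, 10, 0), "deployed": datetime(2018, 9, 3, 16, 0)},
    {"committed": datetime(2018, 9, 5, 9, 30), "deployed": datetime(2018, 9, 6, 11, 0)},
    {"committed": datetime(2018, 9, 10, 14, 0), "deployed": datetime(2018, 9, 11, 9, 0)},
]

# Deployment frequency: releases per week over the observed period.
period = deployments[-1]["deployed"] - deployments[0]["deployed"]
weeks = max(period / timedelta(weeks=1), 1)
frequency = len(deployments) / weeks

# Deployment lead time: average commit-to-deploy delay.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {frequency:.1f} per week")
print(f"Average lead time: {avg_lead}")
```

The same log could also feed the service-quality variables discussed next, for example by tagging each deployment with whether it caused a failure.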
Service Quality

Service quality directly impacts the goals set forth by the organization, and is intangible. The idea is to maintain service quality throughout the releases and changes made to the application. The variables that determine this metric are the change failure rate, the number of support tickets and MTTR (mean time to recovery). The change failure rate measures how often a release introduces an error or fault in the application. The number of support tickets captures the bugs or performance issues in your releases that users report. MTTR measures the number of issues resolved and the time taken to resolve them; the idea is to be more responsive to the problems faced by customers.

User Experience

This is the final metric that impacts the success of your DevOps. You need to check whether all the features and updates you have insisted upon are in sync with user needs. The variables concerned with measuring this aspect are feature usage and business impact. You will need to check how many people from the target audience are using a new feature you have released, and determine their personas. You can check the number of sessions, completed transactions and session duration to quantify the number of people, and check their profiles to derive their personas.

Planning your DevOps strategy

It is not easy to roll out DevOps in your organization and expect agility immediately. You need a well-planned strategy, aligned to your business goals, and effective DevOps metrics to determine the success of your rollout. Planning is of the essence for a thorough rollout of DevOps.

It is important to consider all available data when you have DevOps in your organization. Make sure you store and analyze all of it, and use the data that suits the DevOps metrics you have chosen for success. It is important that the DevOps metrics are aligned to your business goals and the objectives you have defined.

About the author: Vishal Virani is the Founder and CEO of Coruscate Solutions, a mobile app development company. He enjoys writing about technology, mobile apps, custom web development and the latest industry trends.