How-To Tutorials - Servers

95 Articles

Learning Essential Linux Commands for Navigating the Shell Effectively 

Expert Network
16 Aug 2021
9 min read
Once we have learned how to deploy an Ubuntu server, how to manage users, and how to manage software packages, we should take a moment to learn some important concepts and commands that will build more of the foundational knowledge that will serve us well as we move on to advanced concepts and tread the path toward expertise. These foundational concepts include core Linux commands for navigating the shell.

This article is an excerpt from the book Mastering Ubuntu Server, Third Edition by Jeremy “Jay” La Croix – a hands-on book that will teach you how to deploy, maintain and troubleshoot Ubuntu Server.

Learning essential Linux commands

Building a solid competency on the command line is essential and effectively gives any system administrator or engineer superpowers. Our new abilities won’t allow us to leap tall buildings in a single bound, but they will definitely enable us to execute terminal commands as if we’re ninjas. While we won’t master the art of using the command line in this section (that only comes with years of experience), we will definitely become more confident.

First, let’s talk about moving from one place to another within the Linux filesystem. Specifically, by “Linux filesystem” I’m referring to the default structure of the various folders (also referred to as “directories”) contained within your Ubuntu installation. The Linux filesystem contains many important directories, each with its own designated purpose, which we’ll talk about in more detail in the book. Before we can explore that further, we’ll need to learn how to navigate from one directory to another. The first command we’ll cover relative to navigating the filesystem clarifies which directory you’re currently working from. For that, we have the pwd command.

The pwd command

pwd stands for print working directory, and it shows you where you currently are in the filesystem. If you run it, you may see output such as this:

Figure 4.1: Viewing the current working directory

In this example, when I ran pwd, the output informed me that my current working directory is /home/jay. This is known as your home directory and, by default, every user has one. This is where all the files for your user account will reside by default. Sure, you can create files anywhere you’d like, even outside your home directory if you have permission to do so or you use sudo. But just because you can doesn’t mean you should. As you’ll learn in this article, the Linux filesystem has a designated place for just about everything. But your home directory, located at /home/<username>, is yours. You own it, you control it—it’s your home on the server. In the early 2000s, Linux installations with a graphical user interface even depicted your home directory with an icon of a house.

Typically, files that you create in your home directory will have a permission string similar to this:

-rw-rw-r-- 1 jay  jay      0 Jul  5 14:10 testfile.txt

As you can see, by default, files you create in your home directory are owned by your user and your group, and are readable by all three categories (user, group, and other).

The cd command

To change our current directory and navigate to another, we can use the cd command along with a path we’d like to move to:

cd /etc

Now, I haven’t gone over the file and directory layout yet, so I just randomly picked the etc directory. The forward slash at the beginning designates the beginning of the filesystem. More on that later.
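Since the original screenshots are not reproduced here, the following minimal terminal transcript shows roughly what pwd and cd look like in practice; the prompt format, hostname, and the jay user are purely illustrative:

jay@ubuntu:~$ pwd
/home/jay
jay@ubuntu:~$ cd /etc
jay@ubuntu:/etc$ pwd
/etc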
Now, we’re in the /etc directory, and our command prompt has changed as well:

Figure 4.2: Command prompt and pwd command after changing a directory

As you could probably guess, the cd command stands for change directory, and it’s how you move your working directory from one place to another while navigating around. You can use the following command, for example, to return to the home directory:

cd /home/<user>

In fact, there are several ways to return home, a few of which are demonstrated in the following screenshot:

Figure 4.3: Other ways of navigating to the home directory

The first command, cd -, doesn’t actually have anything to do with your home directory specifically. It’s a neat trick that returns you to whatever directory you were in most recently. For me, the cd - command took me to the previous directory I was just in, which just so happened to be /home/jay. The second command, cd /home/jay, took me directly to my home directory since I called out the entire path. The last command, cd ~, also took me to my home directory. This is because ~ is shorthand for the full path to your home directory, so you don’t ever really have to type out the entire path to /home/<user>. You can just refer to that path simply as ~.

The ls command

Another essential command is ls. The ls command lists the contents of the current working directory. We probably don’t have any contents in our home directory yet. But if we navigate to /etc by running cd /etc, as we did earlier, and then execute ls, we’ll see that the /etc directory has a number of files in it. Go ahead and try it yourself and see:

cd /etc
ls

We didn’t actually have to change our working directory to /etc just to list the contents. We could’ve just executed the following command:

ls /etc

Even better, we can run:

ls -l /etc

This gives us the contents in a long list, which I think is much easier to understand. It will show each directory or file entry on its own line, along with the permission string. But you’re probably already familiar with ls as well as ls -l, so I won’t go into too much more detail here.

The -l portion of the ls command in that example is known as an argument. I’m not referring to an argument such as the ever-ensuing debate in the Linux community over which command-line text editor is the best between vim and emacs (it’s clearly vim). Instead, I’m referring to the concept of an argument in shell commands that allows you to override the defaults, or feed options to the command in some way, such as in this example, where we format the output of ls to be in a long list.

The rm command

The rm command is another one that we touched on earlier, when we were discussing manually removing the home directory of a user that was removed from the system. So, at this point, you’re probably well aware of that command and what it does (it removes files and directories). It’s a potentially dangerous command, as you could use it to accidentally remove something that you shouldn’t have. We used the following command to remove the home directory of user dscully:

rm -r /home/dscully

As you can see, we’re using the -r argument to alter the behavior of the rm command, which, by default, removes only files and not directories. The -r argument instructs rm to remove everything recursively, even if it’s a directory. It will remove subdirectories of the path as well, so you’ll definitely want to be careful with this command.
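A cautious habit worth mentioning (this is a suggested workflow rather than something from the book) is to preview what a recursive delete would touch before running it, or to ask rm to confirm each removal:

# Preview everything that lives under the directory you are about to delete
ls -lR /home/dscully

# Remove interactively, confirming each file and directory as you go
rm -ri /home/dscully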
As I’ve mentioned earlier in the book, if you use sudo with rm, you can hypothetically delete your entire Ubuntu installation!

Another option offered by rm is the -f argument, which is short for force; it tells rm not to prompt before removing things. This argument won’t be needed as often, and use cases for it are outside the scope of this article. But keep in mind that it exists, should you need it.

The touch command

Another foundational command that’s good to know is touch, which actually serves two purposes. First, assuming you have permission to do so in your current working directory, the touch command will create an empty file if it doesn’t already exist. Second, the touch command will update the modification time of a file or directory if it does already exist:

Figure 4.4: Experimenting with the touch command

To illustrate this, in the related screenshot, I ran several commands. First, I ran the following command to create an empty file:

touch testfile.txt

That file didn’t exist before, so when I ran ls -l afterward, it showed the newly created file with a size of 0 bytes. Next, I ran the touch testfile.txt command again a minute later, and you can see in the screenshot that the modification time went from 15:12 to 15:13.

When it comes to viewing the contents of a file, we’ll get to that later on in the book, Mastering Ubuntu Server, Third Edition. And there are definitely more commands that we’ll need to learn to build the basis of our foundation. But for now, let’s take a break from the foundational concepts to understand the Linux filesystem layout better.

Summary

There are more Linux commands than you will ever be able to memorize. Most of us just memorize our favorite commands and variations of commands. You’ll develop your own menu of these commands as you learn and expand your knowledge. In this article, we covered many of the foundational commands that are, for the most part, essential. Commands such as pwd, cd, ls, rm, and touch were explored this time around.

About the author

Jeremy “Jay” La Croix is a technologist and open-source enthusiast, specializing in Linux. Jay is currently the director of Cloud Services at Adaptavist. He has 20 years of field experience across different firms as a Solutions Architect and holds a master’s degree in Information Systems Technology Management from Capella University.

In addition, Jay has an active Linux-focused YouTube channel with over 186K followers and 15.9M views, available at LearnLinux.tv, where he posts instructional tutorial videos and other Linux-related content.


New for 2020 in operations and infrastructure engineering

Richard Gall
19 Dec 2019
5 min read
It’s an exciting time if you work in operations and software infrastructure. Indeed, you could even say that as the pace of change and innovation increases, your role only becomes more important. Operations and systems engineers, solution architects, everyone - your jobs are all about bringing stability, order and control into what can sometimes feel like chaos. As anyone that’s been working in the industry knows, managing change, from a personal perspective, requires a lot of effort. To keep on top of what’s happening in the industry - what tools are being released and updated, what approaches are gaining traction - you need to have one eye on the future and the wider industry. To help you with that challenge and get you ready for 2020, we’ve put together a list of what’s new for 2020 - and what you should start learning.

Learn how to make Kubernetes work for you

It goes without saying that Kubernetes was huge in 2019. But there are plenty of murmurs and grumblings that it’s too complicated and adds an additional burden for engineering and operations teams. To a certain extent there’s some truth in this - and arguably now would be a good time to accept that just because it seems like everyone is using Kubernetes, it doesn’t mean it’s the right solution for you. However, having said that, 2020 will be all about understanding how to make Kubernetes relevant to you. This doesn’t mean you should just drop the way you work and start using Kubernetes, but it does mean that spending some time with the platform and getting a better sense of how it could be used in the future is a useful way to spend your learning time in 2020.

Explore Packt's extensive range of Kubernetes eBooks and videos on the Packt store.

Learn how to architect

If software has eaten the world, then by the same token perhaps complexity has well and truly eaten software as we know it. Indeed, Kubernetes is arguably just one of the symptoms and causes of this complexity. Another is the growing demand for architects in engineering and IT teams. There are a number of different ‘architecture’ job roles circulating across the industry, from solutions architect to application architect. While they each have their own subtle differences, and will even vary from company to company, they’re all roles that are about organizing and managing different pieces into something that is both stable and value-driving. Cloud has been particularly instrumental in making architect roles more prominent in the industry. As organizations look to resist the pitfalls of lock-in and better manage resources (financial and otherwise), it will be down to architects to balance business and technology concerns carefully.

Learn how to architect cloud native applications. Read Architecting Cloud Computing Solutions. Get to grips with everything you need to know to be a software architect. Pick up Software Architect's Handbook.

Artificial intelligence

It’s strange that the hype around AI doesn’t seem to have reached the world of ops. Perhaps this is because the area is more resistant to the spin that comes with AI, preferring instead to focus more on the technical capabilities of tools and platforms. Whatever the case, it’s nevertheless true that AI will play an important part in how we manage and secure infrastructure. From monitoring system health, to automating infrastructure deployments and configuration, and even identifying security threats, artificial intelligence is already an important component for operations engineers and others.
Indeed, artificial intelligence is being embedded inside products and platforms that ops teams are using - this means the need to ‘learn’ artificial intelligence is somewhat reduced. But it would be wrong to think it’s something that can just be managed from a dashboard. In 2020 it will be essential to better understand where and how artificial intelligence can fit into your operations and architectural toolchain.

Find artificial intelligence eBooks and videos in Packt's collection of curated data science bundles.

Observability, monitoring, tracing, and logging

One of the challenges of software complexity is understanding exactly what’s going on under the hood. Yes, the network might be unreliable, as the saying goes, but what makes things even worse is that we’re not even sure why. This is where observability and the next generation of monitoring, logging and tracing all come into play. Having detailed insights into how applications and infrastructures are performing, how resources are being managed, and what things are actually causing problems is vitally important from a team perspective. Without the ability to understand these things, it can put pressure on teams as knowledge becomes siloed inside the brains of specific engineers. It makes you vulnerable to failure as you start to have points of failure at a personnel level. There are, of course, a wide range of tools and products available that can make monitoring and tracing easy (or easier, at least). But understanding which ones are right for your needs still requires some time learning and exploring the options out there. Make sure you do exactly that in 2020.

Learn how to monitor distributed systems with Learn Centralized Logging and Monitoring with Kubernetes.

Making serverless a reality

We’ve talked about serverless a lot this year. But as a concept there’s still considerable confusion about what role it should play in modern DevOps processes. Indeed, even the nomenclature is a little confusing. Platforms using their own terminology, such as ‘lambdas’ and ‘functions’, only add to the sense that serverless is something amorphous and hard to pin down. So, in 2020, we need to work out how to make serverless work for us. Just as we need to consider how Kubernetes might be relevant to our needs, we need to consider in what ways serverless represents both a technical and business opportunity.

Search Packt's library for the latest serverless eBooks and videos. Explore more technology eBooks and videos on the Packt store.


Chaos engineering company Gremlin launches Scenarios, making it easier to tackle downtime issues

Richard Gall
26 Sep 2019
2 min read
At the second ChaosConf in San Francisco, Gremlin CEO Kolton Andrus revealed the company's latest step in its war against downtime: 'Scenarios.' Scenarios makes it easy for engineering teams to simulate common issues that lead to downtime. It's a natural and necessary progression for Gremlin, which is seeing even the most forward-thinking teams struggle to figure out how to implement chaos engineering in a way that's meaningful to their specific use case. "Since we released Gremlin Free back in February thousands of customers have signed up to get started with chaos engineering," said Andrus. "But many organisations are still struggling to decide which experiments to run in order to avoid downtime and outages." Scenarios, then, is a useful way into chaos engineering for teams that are reticent about taking their first steps. As Andrus notes, it makes it possible to inject failure "with a couple of clicks."

What failure scenarios does Scenarios let engineering teams simulate?

Scenarios lets Gremlin users simulate common issues that can cause outages. These include:

Traffic spikes (think Black Friday site failures)
Network failures
Region evacuation

This provides a great starting point for anyone that wants to stress test their software. Indeed, it's inevitable that these issues will arise at some point, so taking advance steps to understand what the consequences could be will minimise their impact - and their likelihood.

Why chaos engineering?

Over the last couple of years plenty of people have been attempting to answer the question: why chaos engineering? But in truth the reasons are clear: software - indeed, the internet as we know it - is becoming increasingly complex, a mesh of interdependent services and platforms. At the same time, the software being developed today is more critical than ever. For eCommerce sites downtime means money, but for those in the IoT and embedded systems world (self-driving cars, for example), it's sometimes a matter of life and death. This makes Gremlin's Scenarios an incredibly exciting and important prospect - it should end the speculation and debate about whether we should be doing chaos engineering, and instead help the world to simply start doing it. At ChaosConf Andrus said that Gremlin's mission is to build a more reliable internet. We should all hope they can deliver.


Announcing Linux 5.0!

Melisha Dsouza
04 Mar 2019
2 min read
Yesterday, Linus Torvalds announced the stable release of Linux 5.0. This release comes with AMDGPU FreeSync support, Raspberry Pi touch screen support and much more. According to Torvalds, “I'd like to point out (yet again) that we don't do feature-based releases, and that ‘5.0’ doesn't mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes.”

Features of Linux 5.0

AMDGPU FreeSync support, which will improve the display of fast-moving images and will prove advantageous especially for gamers. According to CRN, this will also make Linux a better platform for dense data visualizations and support “a dynamic refresh rate, aimed at providing a low monitor latency and a smooth, virtually stutter-free viewing experience.”
Support for the Raspberry Pi’s official touch screen. All information is copied into a memory-mapped area by the RPi's firmware, instead of using a conventional bus.
An energy-aware scheduling feature that lets the task scheduler take scheduling decisions resulting in lower power usage on asymmetric SMP platforms. This feature will use Arm's big.LITTLE CPUs and help achieve better power management in phones.
Adiantum file system encryption for low-power devices.
Btrfs support for swap files, although the swap file must be fully allocated as "nocow" with no compression on one device (a minimal setup sketch follows at the end of this piece).
Support for binderfs, a binder filesystem that will help run multiple instances of Android and is backward compatible.
Improvements that reduce fragmentation by over 90%, resulting in better transparent hugepage (THP) usage.
Support for the Speculation Barrier (SB) instruction, introduced as part of the fallout from Spectre and Meltdown.

The merge window for 5.1 is now open. Read Linux’s official documentation for the detailed list of upgraded features in Linux 5.0.

Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases
Undetected Linux Backdoor ‘SpeakUp’ infects Linux, MacOS with cryptominers
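Following up on the Btrfs swap-file item above, here is a minimal sketch of what creating such a swap file typically looks like. The /swapfile path and 4G size are arbitrary choices for illustration, and your distribution's documentation should take precedence:

# Create an empty file first, then disable copy-on-write (must be set while the file is empty)
truncate -s 0 /swapfile
chattr +C /swapfile

# Fully allocate the space, lock down permissions, and format it as swap
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile

# Enable it
swapon /swapfile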


Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Richard Gall
17 Dec 2018
10 min read
Software infrastructure has, over the last decade or so, become a key concern for developers of all stripes. Long gone are narrowly defined job roles; thanks to DevOps, accountability for code is now shared between teams on both the development and deployment sides. For anyone that’s ever been involved in the messy frustration of internal code wars, this has been a welcome change. But as developers who have traditionally sat higher up the software stack dive deeper into the mechanics of deploying and maintaining software, for those of us working in system administration, DevOps, SRE, and security (the list is endless, apologies if I’ve forgotten you), the rise of distributed systems only brings further challenges. Increased complexity not only opens up new points of failure and potential vulnerability, but at a really basic level it also makes understanding what’s actually going on difficult.

And, essentially, this is what it will mean to work in software delivery and maintenance in 2019. Understanding what’s happening, minimizing downtime, taking steps to mitigate security threats - it’s a cliche, but finding strategies to become more responsive rather than reactive will be vital. Indeed, many responses to these kinds of questions have emerged this year. Chaos engineering and observability, for example, have both been gaining traction within the SRE world, and are slowly beginning to make an impact beyond that particular job role. But let’s take a deeper look at what is really going to matter in the world of software infrastructure and architecture in 2019.

Observability and the rise of the service mesh

Before we decide what to actually do, it’s essential to know what’s actually going on. That seems obvious, but with increasing architectural complexity, that’s getting harder. Observability is a term that’s being widely thrown around as a response to this - but it has been met with some cynicism. For some developers, observability is just a sexed up way of talking about good old fashioned monitoring. But although the two concepts have a lot in common, observability is more of an approach, a design pattern maybe, rather than a specific activity.

This post from The New Stack explains the difference between monitoring and observability incredibly well. Observability is “a measure of how well internal states of a system can be inferred from knowledge of its external outputs”, which means observability is more a property of a system than an activity. There are a range of tools available to help you move towards better observability. Application management and logging tools like Splunk, Datadog, New Relic and Honeycomb can all be put to good use and are a good first step towards developing a more observable system.

Want to learn how to put monitoring tools to work? Check out some of these titles:
AWS Application Architecture and Management [Video]
Hands on Microservices Monitoring and Testing
Software Architecture with Spring 5.0

As well as those tools, if you’re working with containers, Kubernetes has some really useful features that can help you more effectively monitor your container deployments. In May, Google announced StackDriver Kubernetes Monitoring, which has seen much popularity across the community. Master monitoring with Kubernetes.
Explore these titles:
Google Cloud Platform Administration
Mastering Kubernetes
Kubernetes in 7 Days [Video]

But there’s something else emerging alongside observability which only appears to confirm its importance: the notion of a service mesh. The service mesh is essentially a tool that allows you to monitor all the various facets of your software infrastructure, helping you to manage everything from performance to security to reliability. There are a number of different options out there when it comes to service meshes - Istio, Linkerd, Conduit and Tetrate being the 4 definitive tools out there at the moment.

Learn more about service meshes inside these titles:
Microservices Development Cookbook
The Ultimate Openshift Bootcamp [Video]
Cloud Native Application Development with Java EE [Video]

Why is observability important?

Observability is important because it sets the foundations for many aspects of software management and design in various domains. Whether you’re an SRE or security engineer, having visibility on the way in which your software is working will be essential in 2019.

Chaos engineering

Observability lays the groundwork for many interesting new developments, chaos engineering being one of them. Based on the principle that modern, distributed software is inherently unreliable, chaos engineering ‘stress tests’ software systems. Using something called chaos experiments - adding something unexpected into your system, or pulling a piece of it out like a game of Jenga - chaos engineering helps you to better understand the way it will act in various situations. In turn, this allows you to make the necessary changes that can help ensure resiliency.

Chaos engineering is particularly important today simply because so many people, indeed, so many things, depend on software to actually work. From an eCommerce site to a self-driving car, if something isn’t working properly there could be terrible consequences. It’s not hard to see how chaos engineering fits alongside something like observability. To a certain extent, it’s really another way of achieving observability. By running chaos experiments, you can draw out issues that may not be visible in usual scenarios.

However, the caveat is that chaos engineering isn’t an easy thing to do. It requires a lot of confidence and engineering intelligence. Running experiments shouldn’t be done carelessly - in many ways, the word ‘chaos’ is a bit of a misnomer. All testing and experimentation on your software should follow a rigorous and almost scientific structure. While chaos engineering isn’t straightforward, there are tools and platforms available to make it more manageable. Gremlin is perhaps the best example, offering what they describe as ‘resiliency-as-a-service’. But if you’re not ready to go in for a fully fledged platform, it’s worth looking at open source tools like Chaos Monkey and ChaosToolkit.

Want to learn how to put the principles of chaos engineering into practice? Check out this title: Microservice Patterns and Best Practices

Learn the principles behind resiliency with these SRE titles:
Real-World SRE
Practical Site Reliability Engineering

Better integrated security and code testing

Both chaos engineering and observability point towards more testing.
And this shouldn’t be surprising: testing is to be expected in a world where people are accountable for unpredictable systems. But what’s particularly important is how testing is integrated. Whether it’s for security or simply performance, we’re gradually moving towards a world where testing is part of the build and deploy process, not completely isolated from it. There are a diverse range of tools that all hint at this move. Archery, for example, is a tool designed for both developers and security testers to better identify and assess security vulnerabilities at various stages of the development lifecycle. With a useful dashboard, it neatly ties into the wider trend of observability. ArchUnit (sounds similar but completely unrelated) is a Java testing library that allows you to test a variety of different architectural components.

Similarly on the testing front, headless browsers continue to dominate. We’ve seen some of the major browsers bringing out headless modes, which will no doubt delight many developers. Headless browsers allow developers to run front end tests on their code as if it were live and running in the browser. If this sounds a lot like PhantomJS, that’s because it is actually quite a bit like PhantomJS. However, headless browsers do make the testing process much faster.

Smarter software purchasing and the move to hybrid cloud

The key trends we’ve seen in software architecture are about better understanding your software. But this level of insight and understanding doesn’t matter if there’s no alignment between key decision makers and purchasers. This can manifest itself in various ways. Essentially, it’s a symptom of decision makers being disconnected from engineers buried deep in their software. This is by no means a new problem, but with cloud coming to define just about every aspect of software, it’s now much easier for confusion to take hold. The best thing about cloud is also the worst thing - the huge scope of opportunities it opens up. It makes decision making a minefield - which provider should we use? What parts of it do we need? What’s going to be most cost effective?

Of course, with hybrid cloud, there's a clear way of meeting those issues. But it's by no means a silver bullet. Whatever cloud architecture you have, strong leadership and stakeholder management are essential. This is something that ThoughtWorks references in its most recent edition of Radar (November 2018). Identifying two trends it calls ‘bounded buy’ and ‘risk commensurate vendor strategy’, ThoughtWorks highlights how organizations can find their SaaS of choice shaping their strategy in its own image (bounded buy) or look to outsource business critical applications, functions or services. ThoughtWorks explains: “This trade-off has become apparent as the major cloud providers have expanded their range of service offerings. For example, using AWS Secret Management Service can speed up initial development and has the benefit of ecosystem integration, but it will also add more inertia if you ever need to migrate to a different cloud provider than it would if you had implemented, for example, Vault”.

Relatedly, ThoughtWorks also identifies a problem with how organizations manage cost. In the report they discuss what they call ‘run cost as architecture fitness function’, which is really an elaborate way of saying: make sure you look at how much things cost. So, for example, don’t use serverless blindly.
While it might look like a cheap option for smaller projects, your costs could quickly spiral and leave you spending more than you would if you ran it on a typical cloud server.

Get to grips with hybrid cloud:
Hybrid Cloud for Architects
Building Hybrid Clouds with Azure Stack

Become an effective software and solutions architect in 2019:
AWS Certified Solutions Architect - Associate Guide
Architecting Cloud Computing Solutions
Hands-On Cloud Solutions with Azure

Software complexity needs are best communicated in a simple language: money

In practice, this takes us all the way back to the beginning - it’s simply the financial underbelly of observability. Performance, visibility, resilience - these matter because they directly impact the bottom line. That might sound obvious, but if you’re trying to make the case, say, for implementing chaos engineering, or using any other particular facet of a SaaS offering, communicating to other stakeholders in financial terms can give you buy-in and help to guarantee alignment. If 2019 should be about anything, it’s getting closer to this fantasy of alignment. In the end, it will keep everyone happy - engineers and businesses.


It's Black Friday: But what's the business (and developer) cost of downtime?

Richard Gall
23 Nov 2018
4 min read
Black Friday is back, and, as you've probably already noticed, with a considerable vengeance. According to Adobe Analytics data, online spending is predicted to hit $3.7 billion over this holiday season in the U.S., up from $2.9 billion in 2017. But while consumers clamour for deals and businesses reap the rewards, it's important to remember there's a largely hidden plane of software engineering labour. Without this army of developers, consumers will most likely be hitting their devices in frustration, while business leaders will be missing tough revenue targets - so, as we enter into Black Friday, let's pour one out for all those engineers on call and trying their best to keep eCommerce sites on their feet.

Here's to the software engineers keeping things running on Black Friday

Of course, the pain that hits on days like Black Friday and Cyber Monday can be minimised with smart planning and effective decision making long before those sales begin. However, for engineering teams that are under-resourced and lacking the right tools, that is simply impossible. This means that software engineers are left in a position where they're treading water, knowing that they're going to be sinking once those big days come around. It doesn't have to be like this. With smarter leadership and, indeed, more respect for the intensive work engineers put in to make websites and apps actually work, revenue-driving platforms can become more secure, resilient and stable.

Chaos engineering platform Gremlin publishes the 'true cost of downtime'

This is the central argument of chaos engineering platform Gremlin, who we've covered a number of times this year. To coincide with Black Friday the team has put together what they believe is the 'true cost of downtime'. On the one hand this is a good marketing hook for their chaos engineering platform, but, cynicism aside, it's also a good explanation of why the principles of chaos engineering can be so valuable from both a business and developer perspective. Estimating the annual revenue of some of the biggest companies in the world, Gremlin has then created an interactive table to demonstrate what the cost of downtime for each of those businesses would be, for the length of time you are on the page. For 20 minutes of downtime, Amazon.com would have lost a staggering $4.4 million. For Walgreens it's more than $80,000.

Gremlin provide some context to all this, saying: "Enterprise commerce businesses typically rely on a complex microservices architecture, from fulfillment, to website security, ability to scale with holiday traffic, and payment processing - there is a lot that can go wrong and impact revenue, damage customer trust, and consume engineering time. If an ecommerce site isn’t 100% online and performant, it’s losing revenue."

"The holiday season is especially demanding for SREs working in ecommerce. Even the most skilled engineering teams can struggle to keep up with the demands of peak holiday traffic (i.e. Black Friday and Cyber Monday). Just going down for a few seconds can mean thousands in lost revenue, but for some sites, downtime can be exponentially more expensive."

For Gremlin, chaos engineering is clearly the answer to many of the problems days like Black Friday pose. While it might not work for every single organization, it's nevertheless true that failing to pay attention to the value of your applications and websites at an hour by hour level could be incredibly damaging.
With outages on Facebook, WhatsApp, and Instagram happening earlier this week, these problems aren't hidden away - they're in full view of the public. What does remain hidden, however, is the work and stress that goes into tackling these issues and ensuring things are working as they should be. Perhaps it's time to start learning the lessons of Black Friday - business revenues will be that little bit healthier, but engineers will also be that little bit happier.

Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Natasha Mathur
22 Oct 2018
4 min read
It was last month when Linus Torvalds took a break from kernel development. During his break, he had assigned Greg Kroah-Hartman as Linux's temporary leader, who went ahead and released Linux 4.19 today at the ongoing Linux Foundation Open Source Summit in Edinburgh, after eight release candidates. The new release includes features such as a new AIO-based polling interface, L1TF vulnerability mitigations, the block I/O latency controller, time-based packet transmission, and the CAKE queuing discipline, among other minor changes.

The Linux 4.19 kernel release announcement is slightly different and longer than usual as, apart from mentioning major changes, it also talks about welcoming newcomers by helping them learn things with ease. “By providing a document in the kernel source tree that shows that all people, developers, and maintainers alike, will be treated with respect and dignity while working together, we help to create a more welcome community to those newcomers, which our very future depends on if we all wish to see this project succeed at its goals”, mentions Kroah-Hartman. He also welcomed Linus back into the game as he wrote, “And with that, Linus, I'm handing the kernel tree back to you. You can have the joy of dealing with the merge window”. Let’s discuss the features in the Linux 4.19 kernel.

AIO-based polling interface

A new polling API based on the asynchronous I/O (AIO) mechanism was posted by Christoph Hellwig earlier this year. AIO enables submission of I/O operations without waiting for their completion. Polling is a natural addition to AIO, since the point of polling is to avoid waiting for operations to complete. The Linux 4.19 kernel release comes with AIO poll operations that operate in "one-shot" mode: once a poll notification is generated, a new IOCB_CMD_POLL IOCB must be submitted for that file descriptor if further notifications are needed.

To provide support for AIO-based polling, the poll() method in struct file_operations, which supports the polling system calls in previous kernels:

int (*poll) (struct file *file, struct poll_table_struct *table);

is split into two separate file_operations methods, adding these two new entries to that structure:

struct wait_queue_head *(*get_poll_head)(struct file *file, int mask);
int (*poll_mask) (struct file *file, int mask);

L1 terminal fault vulnerability mitigations

The Meltdown CPU vulnerability, first disclosed earlier this year, allowed unprivileged attackers to easily read arbitrary memory in systems. Then the "L1 terminal fault" (L1TF) vulnerability (also going by the name Foreshadow) was disclosed, which brought back these threats, namely easy attacks against host memory from inside a guest. Mitigations are available in the Linux 4.19 kernel and have been merged into the mainline kernel. However, they can be expensive for some users.

The block I/O latency controller

Large data centers make use of control groups that help them balance the use of the available computing resources among competing users. Block I/O bandwidth can be considered one of the most important resources for specific types of workloads. However, the kernel's I/O controller was not a complete solution to the problem. This is where the block I/O latency controller comes into the picture. The Linux 4.19 kernel now has a block I/O latency controller. It regulates latency (instead of bandwidth) at a relatively low level in the block layer. When in use, each control group directory comprises an io.latency file that sets the parameters for that group.
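As a minimal sketch of driving this interface from a shell on a system with the cgroup v2 hierarchy mounted (the /sys/fs/cgroup/unified path, the myworkload group name, and the 8:16 device numbers below are illustrative assumptions):

# Enable the io controller for child groups, then create a group for the workload
echo "+io" > /sys/fs/cgroup/unified/cgroup.subtree_control
mkdir /sys/fs/cgroup/unified/myworkload

# Ask for a 10 ms latency target on block device 8:16
echo "8:16 target=10" > /sys/fs/cgroup/unified/myworkload/io.latency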
A line is written to that file following this pattern:

major:minor target=target-time

Here, major and minor are used to identify the specific block device of interest, and target-time is the maximum latency that this group should experience (in milliseconds).

Time-based packet transmission

Time-based packet transmission comes with a new socket option and a new qdisc, which is designed to buffer packets until a configurable time before their deadline (their tx times). Packets intended for timed transmission should be sent with sendmsg(), with a control-message header (of type SCM_TXTIME) which indicates the transmission deadline as a 64-bit nanoseconds value.

CAKE queuing discipline

The “Common Applications Kept Enhanced” (CAKE) queuing discipline in Linux 4.19 sits between the higher-level protocol code and the network interface. It decides which packets need to be dispatched at any given time. It comprises four different components that are designed to make things work on home links. It prevents the overfilling of buffers and improves various aspects of networking performance, such as bufferbloat reduction and queue management. For more information, check out the official announcement.

The kernel community attempting to make Linux more secure
KUnit: A new unit testing framework for Linux Kernel
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux


Scripting with Windows Powershell Desired State Configuration [Video]

Fatema Patrawala
16 Jul 2018
1 min read
https://www.youtube.com/watch?v=H3jqgto5Rk8&list=PLTgRMOcmRb3OpgM9tsUjuI3MgLCHDJ3oM&index=4

What is Desired State Configuration?

PowerShell Desired State Configuration (DSC) is a really powerful way of scripting. It is a declarative model of scripting: instead of defining exactly each and every step to get from point A to point B, you only need to describe what point B is, and PowerShell takes care of the rest. The biggest benefit is that we get to define our configuration, our infrastructure, our servers as code.

Desired State Configuration in PowerShell can really be achieved through 3 simple steps:

Create the Configuration
Compile the Configuration into a MOF file
Deploy the Configuration

What will you need to run PowerShell DSC?

Thankfully we do not need a whole lot; PowerShell comes with it built in. So, for managing Windows systems with DSC you are going to need a modern version of PowerShell, that is:

Windows PowerShell 4.0, 5.0, or 5.1
PowerShell DSC for Linux is available
Currently limited support for PowerShell Core

Exploring Windows PowerShell 5.0
Introducing PowerShell Remoting
Managing Nano Server with Windows PowerShell and Windows PowerShell DSC


How to use XmlHttpRequests to Send POST to Server

Antonio Cucciniello
03 Apr 2017
5 min read
So, you need to send some bits of information from your browser to the server in order to complete some processing. Maybe you need the information to search for something in a database, or just to update something on your server. Today I am going to show you how to send some data to your server from the client through a POST request using XmlHttpRequest. First, we need to set up our environment!

Set up

The first thing to make sure you have is Node and NPM installed. Create a new directory for your project; here we will call it xhr-post:

$ mkdir xhr-post
$ cd xhr-post

Then we would like to install express.js and body-parser:

$ npm install express
$ npm install body-parser

Express makes it easy for us to handle HTTP requests, and body-parser allows us to parse incoming request bodies. Let's create two files: one for our server called server.js and one for our front end code called index.html. Then initialize your repo with a package.json file by doing:

$ npm init

Client

Now it’s time to start with some front end work. Open and edit your index.html file with:

<!doctype html>
<html>
<h1> XHR POST to Server </h1>
<body>
  <input type='text' id='num' />
  <script>
    function send () {
      var number = {
        value: document.getElementById('num').value
      }
      var xhr = new window.XMLHttpRequest()
      xhr.open('POST', '/num', true)
      xhr.setRequestHeader('Content-Type', 'application/json;charset=UTF-8')
      xhr.send(JSON.stringify(number))
    }
  </script>
  <button type='button' value='Send' name='Send' onclick='send()' > Send </button>
</body>
</html>

This file simply has an input field to allow users to enter some information, and a button to then send the information entered to the server. What we should focus on here is the button's onclick method send(). This is the function that is called once the button is clicked. We create a JSON object to hold the value from the text field. Then we create a new instance of an XMLHttpRequest with xhr. We call xhr.open() to initialize our request by giving it a request method (POST), the url we would like to open the request with ('/num'), and whether it should be asynchronous or not (set to true for asynchronous). We then call xhr.setRequestHeader(). This sets the content type of the HTTP request to JSON and UTF-8. As a last step, we send the request with xhr.send(). We pass the value of the text box and stringify it to send the data as raw text to our server, where it can be manipulated.

Server

Here our server is supposed to handle the POST request, and we are simply going to log the request received from the client.

const express = require('express')
const app = express()
const path = require('path')
var bodyParser = require('body-parser')
var port = 3000

app.listen(port, function () {
  console.log('We are listening on port ' + port)
})

app.use(bodyParser.urlencoded({extended: false}))
app.use(bodyParser.json())

app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, '/index.html'))
})

app.post('/num', function (req, res) {
  var num = req.body.value
  console.log(num)
  return res.end('done')
})

At the top, we declare our variables, obtaining an instance of express, path and body-parser. Then we set our server to listen on port 3000. Next, we use the bodyParser object to decide what kind of information we would like to parse; we set it to JSON because we sent a JSON object from our client, if you recall the last section.
This is done with:

app.use(bodyParser.json())

Then we serve our html file in order to see the front end created in the last section with:

app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, '/index.html'))
})

The last part of server.js is where we handle the POST request from the client. We access the value sent over by checking for the corresponding property on the body object, which is part of the request object. Then, as a last step for us to verify we have the correct information, we will log the data received to the console and send a response to the client.

Test

Let's test what we have done. In the project directory, we can run:

$ node server.js

Open your web browser and go to the url localhost:3000. This is what your web page should look like:

This is what your output to the console should look like if you enter a 5 in the input field:

Conclusion

You are all done! You now have a web page that sends some JSON data to your server using XmlHttpRequest! Here is a summary of what we went over:

Created a front end with an input field and button
Created a function for our button to send an XmlHttpRequest
Created our server to listen on port 3000
Served our html file
Handled our POST request at route '/num'
Logged the value to our console

If you enjoyed this post, share it on twitter. Check out the code for this tutorial on GitHub.

Possible Resources

Check out my GitHub
View my personal blog
Information on XmlHttpRequest
GitHub pages for: express, body-parser

About the author

Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js). He is from New Jersey, USA. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. To contact Antonio, e-mail him at Antonio.cucciniello16@gmail.com, follow him on twitter at @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.


Hyper-V Architecture and Components

Packt
04 Jan 2017
15 min read
In this article by Charbel Nemnom and Patrick Lownds, the authors of the book Windows Server 2016 Hyper-V Cookbook, Second Edition, we will look at the Hyper-V architecture along with the most important components in Hyper-V, and also the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware.

Virtualization is not a new feature or technology that everyone decided to have in their environment overnight. Actually, it's quite old. There were a couple of computers in the mid-60s that were using virtualization already, such as the IBM M44/44X, where you could run multiple VMs using hardware and software abstraction. It is known as the first virtualization system and the origin of the term virtual machine.

Although Hyper-V is in its fifth version, Microsoft virtualization technology is very mature. Everything started in 1988 with a company named Connectix. It had innovative products such as Connectix Virtual PC and Virtual Server, an x86 software emulation for Mac, Windows, and OS/2. In 2003, Microsoft acquired Connectix and a year later released Microsoft Virtual PC and Microsoft Virtual Server 2005. After lots of improvements in the architecture during the project Viridian, Microsoft released Hyper-V in 2008, the second version in 2009 (Windows Server 2008 R2), the third version in 2012 (Windows Server 2012), the fourth version a year later in 2013 (Windows Server 2012 R2), and the current and fifth version in 2016 (Windows Server 2016). In the past years, Microsoft has proven that Hyper-V is a strong and competitive solution for server virtualization and provides scalability, flexible infrastructure, high availability, and resiliency.

To better understand the different virtualization models, and how the VMs are created and managed by Hyper-V, it is very important to know its core, architecture, and components. By doing so, you will understand how it works, be able to compare it with other solutions, and troubleshoot problems easily. Microsoft has long told customers that Azure datacenters are powered by Microsoft Hyper-V, and the forthcoming Azure Stack will actually allow us to run Azure in our own datacenters on top of Windows Server 2016 Hyper-V as well. For more information about Azure Stack, please refer to the following link: https://azure.microsoft.com/en-us/overview/azure-stack/

Microsoft Hyper-V has proven over the years that it's a very scalable platform to virtualize any and every workload without exception. This article includes well-explained topics covering the most important Hyper-V architecture components, compared with other versions. (For more resources related to this topic, see here.)

Understanding Hypervisors

The Virtual Machine Manager (VMM), also known as the Hypervisor, is the software application responsible for running multiple VMs in a single system. It is also responsible for the creation, preservation, division, system access, and management of the VMs running on the Hypervisor layer. These are the types of Hypervisors:

VMM Type 2
VMM Hybrid
VMM Type 1

VMM Type 2

This type runs the Hypervisor on top of an OS. As shown in the following diagram, we have the hardware at the bottom, the OS, and then the Hypervisor running on top. Microsoft Virtual PC and VMware Workstation are examples of software that uses VMM Type 2. VMs pass hardware requests to the Hypervisor, then to the host OS, finally reaching the hardware. That leads to performance and management limitations imposed by the host OS.
Type 2 is common for test environments—VMs with hardware restrictions—running on software applications that are installed in the host OS.

VMM Hybrid

When using the VMM Hybrid type, the Hypervisor runs at the same level as the OS, as shown in the following diagram. As both the Hypervisor and the OS share the same access to the hardware with the same priority, it is not as fast and safe as it could be. This is the type used by the Hyper-V predecessor named Microsoft Virtual Server 2005.

VMM Type 1

VMM Type 1 is a type that has the Hypervisor running in a tiny software layer between the hardware and the partitions, managing and orchestrating the hardware access. The host OS, known as the Parent Partition, runs on the same level as the Child Partitions, known as VMs, as shown in the next diagram. Due to the privileged access that the Hypervisor has to the hardware, it provides more security, performance, and control over the partitions. This is the type used by Hyper-V since its first release.

Hyper-V architecture

Knowing how Hyper-V works and how its architecture is constructed will make it easier to understand its concepts and operations. The following sections will explore the most important components in Hyper-V.

Windows before Hyper-V

Before we dive into the Hyper-V architecture details, it will be easier to understand what happens after Hyper-V is installed by first looking at Windows without Hyper-V, as shown in the following diagram. In a normal Windows installation, instruction access is divided into four privilege levels in the processor, called Rings. The most privileged level is Ring 0, with direct access to the hardware, and it is where the Windows Kernel sits. Ring 3 is responsible for hosting the user level, where most common applications run with the least privileged access.

Windows after Hyper-V

When Hyper-V is installed, it needs a higher privilege than Ring 0. Also, it must have dedicated access to the hardware. This is possible due to the capabilities of newer processors created by Intel and AMD, called Intel VT and AMD-V respectively, which allow the creation of a fifth ring called Ring -1. Hyper-V uses this ring to add its Hypervisor, which has a higher privilege, runs under Ring 0, and controls all the access to the physical components, as shown in the following diagram.

The OS architecture undergoes several changes after the Hyper-V installation. Right after the first boot, the Operating System Boot Loader file (winload.exe) checks the processor that is being used and loads the Hypervisor image on Ring -1 (using the files Hvix64.exe for Intel processors and Hvax64.exe for AMD processors). Then, Windows Server is initiated, running on top of the Hypervisor, alongside every VM that runs beside it. After the Hyper-V installation, Windows Server has the same privilege level as a VM and is responsible for managing VMs using several components.

Differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware

There are four different versions of Hyper-V—the role that is installed on Windows Server 2016 (Core or Full Server), the role that can be installed on a Nano Server, its free version called Hyper-V Server, and the Hyper-V that comes in Windows 10 called Hyper-V Client. The following sections will explain the differences between all the versions, along with a comparison between Hyper-V and its competitor, VMware.

Windows Server 2016 Hyper-V

Hyper-V is one of the most fascinating and improved roles in Windows Server 2016.
Its fifth version goes beyond virtualization and helps us deliver the correct infrastructure to host a cloud environment. Hyper-V can be installed as a role in both the Windows Server Standard and Datacenter editions. In Windows Server 2012 and 2012 R2, the only difference was that the Standard edition licensed two Windows Server guest OSes, whereas the Datacenter edition included unlimited licenses. However, in Windows Server 2016 there are significant changes between the two editions. The following table shows the differences between the Windows Server 2016 Standard and Datacenter editions (values listed as Datacenter edition | Standard edition):
Core functionality of Windows Server: Yes | Yes
OSes/Hyper-V Containers: Unlimited | 2
Windows Server Containers: Unlimited | Unlimited
Nano Server: Yes | Yes
Storage features for the software-defined datacenter, including Storage Spaces Direct and Storage Replica: Yes | N/A
Shielded VMs: Yes | N/A
Networking stack for the software-defined datacenter: Yes | N/A
Licensing model: Core + CAL | Core + CAL
As you can see in the preceding table, the Datacenter edition is designed for highly virtualized private and hybrid cloud environments, and the Standard edition is for low-density or non-virtualized (physical) environments. In Windows Server 2016, Microsoft is also changing the licensing model from per-processor to per-core licensing for the Standard and Datacenter editions. The following points will guide you in licensing Windows Server 2016 Standard and Datacenter editions: All physical cores in the server must be licensed. In other words, servers are licensed based on the number of processor cores in the physical server. You need a minimum of 16 core licenses for each server. You need a minimum of 8 core licenses for each physical processor. The core licenses will be sold in packs of two, and eight 2-core packs will be the minimum required to license each physical server. The 2-core pack for each edition is one-eighth the price of a 2-processor license for the corresponding Windows Server 2012 R2 edition. The Standard edition provides rights for up to two OSEs or Hyper-V containers when all physical cores in the server are licensed; for every two additional VMs, all the cores in the server have to be licensed again. The price of the 16-core license of the Windows Server 2016 Datacenter and Standard editions will be the same as the price of the 2-processor license of the corresponding Windows Server 2012 R2 editions. Existing customers' servers under a Software Assurance agreement will receive core grants as required, with documentation. The new licensing model, based on the number of 2-core pack licenses, is illustrated in a table in which gray cells represent licensing costs and white cells indicate where additional licensing is required; the Windows Server 2016 Standard edition may need additional licensing. Nano Server Nano Server is a new headless, 64-bit-only installation option that installs "just enough OS", resulting in a dramatically smaller footprint, more uptime, and a smaller attack surface. Users can choose to add server roles as needed, including the Hyper-V, Scale-Out File Server, DNS Server, and IIS server roles. Users can also choose to install features, including Container support, Defender, Clustering, Desired State Configuration (DSC), and Shielded VM support.
Nano Server is available in Windows Server 2016 for physical machines, virtual machines, Hyper-V Containers, and Windows Server Containers. It supports the following inbox optional roles and features: Hyper-V, including container and shielded VM support; Datacenter Bridging; Defender; DNS Server; Desired State Configuration; Clustering; IIS; Network Performance Diagnostics Service (NPDS); System Center Virtual Machine Manager and System Center Operations Manager; Secure Startup; and Scale-Out File Server, including Storage Replica, MPIO, iSCSI initiator, and Data Deduplication. The Windows Server 2016 Hyper-V role can be installed on a Nano Server; this is a key Nano Server role, shrinking the OS footprint and minimizing the reboots required when Hyper-V is used to run virtualization hosts. Nano Server can be clustered, including in Hyper-V failover clusters. Hyper-V works the same on Nano Server, including all features, as it does in Windows Server 2016, aside from a few caveats: all management must be performed remotely, using another Windows Server 2016 computer. Remote management consoles such as Hyper-V Manager, Failover Cluster Manager, and PowerShell remoting, and management tools like System Center Virtual Machine Manager, as well as the new Azure web-based Server Management Tool (SMT), can all be used to manage a Nano Server environment. RemoteFX is not available. Microsoft Hyper-V Server 2016 Hyper-V Server 2016, the free virtualization solution from Microsoft, has all the features included in Windows Server 2016 Hyper-V. The only difference is that Microsoft Hyper-V Server does not include VM licenses or a graphical interface. Management can be done remotely using PowerShell or Hyper-V Manager from another Windows Server 2016 machine or Windows 10. All the other Hyper-V features and limits in Windows Server 2016, including Failover Clustering, Shared Nothing Live Migration, RemoteFX, Discrete Device Assignment, and Hyper-V Replica, are included in the free Hyper-V version. Hyper-V Client In Windows 8, Microsoft introduced the first Hyper-V Client version; it is now in its third version with Windows 10. Users can have the same experience as Windows Server 2016 Hyper-V on their desktops or tablets, making their test and development virtualization scenarios much easier. Hyper-V Client in Windows 10 goes beyond virtualization and helps Windows developers use containers by bringing Hyper-V Containers natively into Windows 10. This will further empower developers to build cloud applications benefiting from native container capabilities right in Windows. Since Hyper-V Containers utilize their own instance of the Windows kernel, the container is truly a server container all the way down to the kernel. Plus, with the flexibility of Windows container runtimes (Windows Server Containers or Hyper-V Containers), containers built on Windows 10 can be run on Windows Server 2016 as either Windows Server Containers or Hyper-V Containers. Because Windows 10 only supports Hyper-V Containers, the Hyper-V feature must also be enabled. Hyper-V Client is present only in the Windows 10 Pro and Enterprise versions and requires the same CPU feature as Windows Server 2016, called Second Level Address Translation (SLAT). Although Hyper-V Client is very similar to the server version, there are some components that are only present in Windows Server 2016 Hyper-V.
Here is a list of components you will find only on the server version: Hyper-V Replica, RemoteFX capability to virtualize GPUs, Discrete Device Assignment (DDA), Live Migration and Shared Nothing Live Migration, ReFS Accelerated VHDX Operations, SR-IOV networks, Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET), Virtual Fibre Channel, Network Virtualization, Failover Clustering, Shielded VMs, and VM Monitoring. Even with these limitations, Hyper-V Client has very interesting features such as Storage Migration, VHDX, VMs running on SMB 3.1 file shares, PowerShell integration, Hyper-V Manager, the Hyper-V Extensible Switch, Quality of Service, Production Checkpoints, the same VM hardware limits as Windows Server 2016 Hyper-V, Dynamic Memory, Runtime Memory Resize, Nested Virtualization, DHCP Guard, Port Mirroring, NIC Device Naming, and much more. Windows Server 2016 Hyper-V versus VMware vSphere 6.0 VMware is the existing competitor of Hyper-V, and the current version, 6.0, offers VMware vSphere as a free standalone Hypervisor, as well as the vSphere Standard, Enterprise, and Enterprise Plus editions.
The following table compares the features of Windows Server 2012 R2 Hyper-V and Windows Server 2016 Hyper-V with VMware vSphere 6.0 and vSphere 6.0 Enterprise Plus (values listed as Windows Server 2012 R2 | Windows Server 2016 | VMware vSphere 6.0 | VMware vSphere 6.0 Enterprise Plus):
Logical Processors: 320 | 512 | 480 | 480
Physical Memory: 4TB | 24TB | 6TB | 6TB/12TB
Virtual CPUs per Host: 2,048 | 2,048 | 4,096 | 4,096
Virtual CPUs per VM: 64 | 240 | 8 | 128
Memory per VM: 1TB | 12TB | 4TB | 4TB
Active VMs per Host: 1,024 | 1,024 | 1,024 | 1,024
Guest NUMA: Yes | Yes | Yes | Yes
Maximum Nodes: 64 | 64 | N/A | 64
Maximum VMs per Cluster: 8,000 | 8,000 | N/A | 8,000
VM Live Migration: Yes | Yes | No | Yes
VM Live Migration with Compression: Yes | Yes | N/A | No
VM Live Migration using RDMA: Yes | Yes | N/A | No
1GB Simultaneous Live Migrations: Unlimited | Unlimited | N/A | 4
10GB Simultaneous Live Migrations: Unlimited | Unlimited | N/A | 8
Live Storage Migration: Yes | Yes | No | Yes
Shared Nothing Live Migration: Yes | Yes | No | Yes
Cluster Rolling Upgrades: Yes | Yes | N/A | Yes
VM Replica Hot/Add Virtual Disk: Yes | Yes | Yes | Yes
Native 4-KB Disk Support: Yes | Yes | No | No
Maximum Virtual Disk Size: 64TB | 64TB | 2TB | 62TB
Maximum Pass-Through Disk Size: 256TB or more | 256TB or more | 64TB | 64TB
Extensible Network Switch: Yes | Yes | No | Third-party vendors
Network Virtualization: Yes | Yes | No | Requires vCloud networking and security
IPsec Task Offload: Yes | Yes | No | No
SR-IOV: Yes | Yes | N/A | Yes
Virtual NICs per VM: 12 | 12 | 10 | 10
VM NIC Device Naming: No | Yes | N/A | No
Guest OS Application Monitoring: Yes | Yes | No | No
Guest Clustering with Live Migration: Yes | Yes | N/A | No
Guest Clustering with Dynamic Memory: Yes | Yes | N/A | No
Shielded VMs: No | Yes | N/A | No
Summary In this article, we have covered the Hyper-V architecture along with the most important components in Hyper-V, and also the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware. Resources for Article: Further resources on this subject: Storage Practices and Migration to Hyper-V 2016 [article] Proxmox VE Fundamentals [article] Designing and Building a vRealize Automation 6.2 Infrastructure [article]
article-image-managing-users-and-groups
Packt
10 Nov 2016
7 min read
Save for later

Managing Users and Groups

Packt
10 Nov 2016
7 min read
In this article, we will cover the following recipes: Creating user account, Creating user accounts in batch mode, and Creating a group. Introduction In this article by Uday Sawant, the author of the book Ubuntu Server Cookbook, you will see how to add new users to the Ubuntu server and update existing users. You will get to know the default settings for new users and how to change them. (For more resources related to this topic, see here.) Creating user account While installing Ubuntu, we add a primary user account on the server; if you are using the cloud image, it comes preinstalled with a default user. This single user is enough to get all tasks done in Ubuntu. There are times when you need to create more restrictive user accounts. This recipe shows how to add a new user to the Ubuntu server. Getting ready You will need super user or root privileges to add a new user to the Ubuntu server. How to do it… Follow these steps to create the new user account: To add a new user in Ubuntu, enter the following command in your shell: $ sudo adduser bob Enter your password to complete the command with sudo privileges: Now enter a password for the new user: Confirm the password for the new user: Enter the full name and other information about the new user; you can skip this part by pressing the Enter key. Enter Y to confirm that the information is correct: This should have added the new user to the system. You can confirm this by viewing the file /etc/passwd: How it works… In Linux systems, the adduser command is a higher-level command to quickly add a new user to the system. Since adduser requires root privileges, we need to use sudo along with the command. adduser completes the following operations: it adds a new user, adds a new default group with the same name as the user, chooses a UID (user ID) and GID (group ID) conforming to the Debian policy, creates a home directory with a skeletal configuration (template) from /etc/skel, creates a password for the new user, and runs the user script, if any. If you want to skip the password prompt and finger information while adding the new user, use the following command: $ sudo adduser --disabled-password --gecos "" username Alternatively, you can use the useradd command as follows: $ sudo useradd -s <SHELL> -m -d <HomeDir> -g <Group> UserName Where -s specifies the default login shell for the user, -d sets the home directory for the user, -m creates a home directory if one does not already exist, and -g specifies the default group name for the user. Creating a user with the useradd command does not set a password for the user account. You can set or change the user password with the following command: $ sudo passwd bob This will change the password for the user account bob. Note that if you skip the username part of the preceding command, you will end up changing the password of the root account. There's more… With adduser, you can do five different tasks: add a normal user; add a system user with the --system option; add a user group with the --group option and without the --system option; add a system group when called with the --system option; and add an existing user to an existing group when called with two non-option arguments. Check out the manual page man adduser to get more details. You can also configure various default settings for the adduser command. A configuration file, /etc/adduser.conf, can be used to set the default values to be used by the adduser, addgroup, and deluser commands.
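To make this more concrete, here is a minimal sketch of the kind of key-value defaults you might find in /etc/adduser.conf. The directive names below are common ones on Debian and Ubuntu systems, but the values shown are only examples, so check your own file before relying on them:
DSHELL=/bin/bash     # default login shell given to new users
DHOME=/home          # parent directory under which home directories are created
SKEL=/etc/skel       # skeleton directory copied into each new home directory
USERGROUPS=yes       # create a private group with the same name as each new user
Changing a value here and saving the file is enough; the next adduser run will pick up the new defaults.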
Key-value pairs in this configuration file can set various default values, including the home directory location, the skel directory structure to be used, default groups for new users, and so on. Check the manual page for more details on adduser.conf with the following command: $ man adduser.conf See also Check out the command useradd, a low-level command to add a new user to the system Check out the command usermod, a command to modify a user account See why every user has his own group at: http://unix.stackexchange.com/questions/153390/why-does-every-user-have-his-own-group Creating user accounts in batch mode In this recipe, we will see how to create multiple user accounts in batch mode without using any external tool. Getting ready You will need a user account with super user or root privileges. How to do it... Follow these steps to create user accounts in batch mode: Create a new text file users.txt with the following command: $ touch users.txt Change the file permissions with the following command: $ chmod 600 users.txt Open users.txt with GNU nano and add the user account details: $ nano users.txt Press Ctrl + O to save the changes. Press Ctrl + X to exit GNU nano. Enter $ sudo newusers users.txt to import all users listed in the users.txt file. Check /etc/passwd to confirm that the users are created: How it works… We created a database of user details listed in the same format as the passwd file. The default format for each row is as follows: username:passwd:uid:gid:full name:home_dir:shell Where: username: This is the login name of the user. If the user exists, the information for that user will be changed; otherwise, a new user will be created. passwd: This is the password of the user. uid: This is the uid of the user. If empty, a new uid will be assigned to this user. gid: This is the gid for the default group of the user. If empty, a new group will be created with the same name as the username. full name: This information will be copied to the gecos field. home_dir: This defines the home directory of the user. If empty, a new home directory will be created with ownership set to the new or existing user. shell: This is the default login shell for the user. The newusers command reads each row and updates the user information if the user already exists, or it creates a new user. We made the users.txt file accessible to its owner only. This protects the file, as it contains the users' login names and passwords in unencrypted format. Creating a group A group is a way to organize and administer user accounts in Linux. Groups are used to collectively assign rights and permissions to multiple user accounts. Getting ready You will need super user or root privileges to add a group to the Ubuntu server. How to do it... Enter the following command to add a new group: $ sudo addgroup guest Enter your password to complete addgroup with root privileges. How it works… Here, we are simply adding a new group guest to the server. As addgroup needs root privileges, we need to use sudo along with the command. After creating a new group, addgroup displays the GID of the new group.
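As a quick, hedged follow-up (reusing the example names bob and guest from earlier in this article), you could verify the new group and add an existing user to it like this:
$ getent group guest        # confirm the group exists and see its GID and members
$ sudo adduser bob guest    # add the existing user bob to the guest group
$ groups bob                # list bob's group memberships to verify
Here, adduser is called with two non-option arguments, which is one of the usages described next.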
There's more… Similar to adduser, you can use addgroup in different modes: add a normal group when used without any options; add a system group with the --system option; or add an existing user to an existing group when called with two non-option arguments. See also Check out groupadd, a low-level utility to add a new group to the server Summary In this article, we have discussed how to create a user account, how to create user accounts in batch mode, and how to create a group. Resources for Article: Further resources on this subject: Directory Services [article] Getting Started with Ansible [article] Lync 2013 Hybrid and Lync Online [article]

article-image-setting-upa-network-backup-server-bacula
Packt
19 Sep 2016
12 min read
Save for later

Setting Up a Network Backup Server with Bacula

Packt
19 Sep 2016
12 min read
In this article by Timothy Boronczyk, the author of the book CentOS 7 Server Management Cookbook, we'll discuss how to set up a network backup server with Bacula. The fact of the matter is that we are living in a world that is becoming increasingly dependent on data. And from accidental deletion to a catastrophic hard drive failure, there are many threats to the safety of your data. The more important your data is and the more difficult it is to recreate if it were lost, the more important it is to have backups. So, this article shows you how you can set up a backup server using Bacula and how to configure other systems on your network to back up their data to it. (For more resources related to this topic, see here.) Getting ready This article requires at least two CentOS systems with working network connections. The first system is the local system, which we'll assume has the hostname benito and the IP address 192.168.56.41. The second system is the backup server. You'll need administrative access on both systems, either by logging in with the root account or through the use of sudo. How to do it… Perform the following steps on your local system to install and configure the Bacula file daemon: Install the bacula-client package. yum install bacula-client Open the file daemon's configuration file with your text editor. vi /etc/bacula/bacula-fd.conf In the FileDaemon resource, update the value of the Name directive to reflect the system's hostname with the suffix -fd. FileDaemon {   Name = benito-fd ... } Save the changes and close the file. Start the file daemon and enable it to start when the system reboots. systemctl start bacula-fd.service systemctl enable bacula-fd.service Open the firewall to allow TCP traffic through to port 9102. firewall-cmd --zone=public --permanent --add-port=9102/tcp firewall-cmd --reload Repeat steps 1-6 on each system that will be backed up. Then perform the following steps on the backup server: Install the bacula-console, bacula-director, bacula-storage, and bacula-client packages. yum install bacula-console bacula-director bacula-storage bacula-client Re-link the catalog library to use SQLite database storage. alternatives --config libbaccats.so Type 2 when asked to provide the selection number. Create the SQLite database file and import the table schema. /usr/libexec/bacula/create_sqlite3_database /usr/libexec/bacula/make_sqlite3_tables Open the director's configuration file with your text editor. vi /etc/bacula/bacula-dir.conf In the Job resource where Name has the value BackupClient1, change the value of the Name directive to reflect one of the local systems. Then add a Client directive with a value that matches that system's FileDaemon Name. Job {   Name = "BackupBenito"   Client = benito-fd   JobDefs = "DefaultJob" } Duplicate the Job resource and update its directive values as necessary so that there is a Job resource defined for each system to be backed up. For each system that will be backed up, duplicate the Client resource where the Name directive is set to bacula-fd. In the copied resource, update the Name and Address directives to identify that system. Client {   Name = bacula-fd   Address = localhost   ... } Client {   Name = benito-fd   Address = 192.168.56.41   ... } Client {   Name = javier-fd   Address = 192.168.56.42   ... } Save your changes and close the file. Open the storage daemon's configuration file. vi /etc/bacula/bacula-sd.conf In the Device resource where Name has the value FileStorage, change the value of the Archive Device directive to /bacula.
Device {   Name = FileStorage   Media Type = File   Archive Device = /bacula ... Save the update and close the file. Create the /bacula directory and assign it the proper ownership. mkdir /bacula chown bacula:bacula /bacula If you have SELinux enabled, reset the security context on the new directory. restorecon -Rv /bacula Start the director and storage daemons and enable them to start when the system reboots. systemctl start bacula-dir.service bacula-sd.service bacula-fd.service systemctl enable bacula-dir.service bacula-sd.service bacula-fd.service Open the firewall to allow TCP traffic through to ports 9101-9103. firewall-cmd --zone=public --permanent --add-port=9101-9103/tcp firewall-cmd --reload Launch Bacula's console interface. bconsole Enter label to create a destination for the backup. When prompted for the volume name, use Volume0001 or a similar value. When prompted for the pool, select the File pool. label Enter quit to leave the console interface. How it works… The suite's distributed architecture and the amount of flexibility it offers us can make configuring Bacula a daunting task. However, once you have everything up and running, you'll be able to rest easy knowing that your data is safe from disasters and accidents. Bacula is broken up into several components. In this article, our efforts centered on the following three daemons: the director, the file daemon, and the storage daemon. The file daemon is installed on each local system to be backed up and listens for connections from the director. The director connects to each file daemon as scheduled and tells it which files to back up and where to copy them to (the storage daemon). This allows us to perform all scheduling at a central location. The storage daemon then receives the data and writes it to the backup medium, for example, a disk or tape drive. On the local system, we installed the file daemon with the bacula-client package and edited the file daemon's configuration file at /etc/bacula/bacula-fd.conf to specify the name of the process. The convention is to add the suffix -fd to the system's hostname. FileDaemon {   Name = benito-fd   FDPort = 9102   WorkingDirectory = /var/spool/bacula   Pid Directory = /var/run   Maximum Concurrent Jobs = 20 } On the backup server, we installed the bacula-console, bacula-director, bacula-storage, and bacula-client packages. This gives us the director and storage daemon and another file daemon. This file daemon's purpose is to back up Bacula's catalog. Bacula maintains a database of metadata about previous backup jobs called the catalog, which can be managed by MySQL, PostgreSQL, or SQLite. To support multiple databases, Bacula is written so that all of its database access routines are contained in shared libraries, with a different library for each database. When Bacula wants to interact with a database, it does so through libbaccats.so, a fake library that is nothing more than a symbolic link pointing to one of the specific database libraries. This lets Bacula support different databases without requiring us to recompile its source code. To create the symbolic link, we used alternatives and selected the real library that we want to use. I chose SQLite since it's an embedded database library and doesn't require additional services. Next, we needed to initialize the database schema using scripts that come with Bacula. If you want to use MySQL, you'll need to create a dedicated MySQL user for Bacula to use and then initialize the schema with the following scripts instead.
You'll also need to review Bacula's configuration files to provide Bacula with the required MySQL credentials. /usr/libexec/bacula/grant_mysql_privileges /usr/libexec/bacula/create_mysql_database /usr/libexec/bacula/make_mysql_tables Different resources are defined in the director's configuration file at /etc/bacula/bacula-dir.conf, many of which consist not only of their own values but also reference other resources. For example, the FileSet resource specifies which files are included or excluded in backups and restores, while a Schedule resource specifies when backups should be made. A JobDef resource can contain various configuration directives that are common to multiple backup jobs and also reference particular FileSet and Schedule resources. Client resources identify the names and addresses of systems running file daemons, and a Job resource pulls together a JobDef and Client resource to define the backup or restore task for a particular system. Some resources define things at a more granular level and are used as building blocks to define other resources. This allows us to create complex definitions in a flexible manner. The default resource definitions outline basic backup and restore jobs that are sufficient for this article (you'll want to study the configuration and see how the different resources fit together so that you can tweak them to better suit your needs). We customized the existing backup Job resource by changing its name and client. Then, we customized the Client resource by changing its name and address to point to a specific system running a file daemon. A pair of Job and Client resources can be duplicated for each additional system you want to back up. However, notice that I left the default Client resource that defines bacula-fd for the localhost. This is for the file daemon that's local to the backup server and will be the target for things such as restore jobs and catalog backups. Job {   Name = "BackupBenito"   Client = benito-fd   JobDefs = "DefaultJob" }   Job {   Name = "BackupJavier"   Client = javier-fd   JobDefs = "DefaultJob" }   Client {   Name = bacula-fd   Address = localhost   ... }   Client {   Name = benito-fd   Address = 192.168.56.41   ... }   Client {   Name = javier-fd   Address = 192.168.56.42   ... } To complete the setup, we labeled a backup volume. This task, as with most others, is performed through bconsole, a console interface to the Bacula director. We used the label command to specify a label for the backup volume and, when prompted for the pool, we assigned the labeled volume to the File pool. In a way very similar to how LVM works, an individual device or storage unit is allocated as a volume and the volumes are grouped into storage pools. If a pool contains two volumes backed by tape drives, for example, and one of the drives is full, the storage daemon will write the data to the tape that has space available. Even though in our configuration we're storing the backup to disk, we still need to create a volume as the destination for data to be written to. There's more... At this point, you should consider which backup strategy works best for you. A full backup is a complete copy of your data, a differential backup captures only the files that have changed since the last full backup, and an incremental backup copies the files that have changed since the last backup (regardless of the type of backup).
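These levels are what the Schedule resource mentioned earlier controls. As a hedged illustration, the sketch below shows what a schedule mixing the three levels can look like in bacula-dir.conf; the default configuration ships with a similar WeeklyCycle schedule, but the name and run times here are examples, so adjust them to your environment:
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}
A Job or JobDefs resource that references this schedule by name would then run a full backup on the first Sunday of each month, differential backups on the remaining Sundays, and incremental backups on the other days.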
Commonly, administrators employ a combination of the preceding, perhaps making a full backup at the start of the week and then differential or incremental backups each day thereafter. This saves storage space, because the differential and incremental backups are smaller, and it is also convenient when the need to restore a file arises, because only a limited number of backups need to be searched for the file. Another consideration is the expected size of each backup and how long it will take for the backup to run to completion. Full backups obviously take longer to run, and in an office with 9-5 working hours, Monday through Friday, it may not be possible to run a full backup during the evenings. Performing a full backup on Fridays gives the backup time over the weekend to run. Smaller, incremental backups can be performed on the other days, when less time is available. Yet another important point in your backup strategy is how long the backups will be kept and where they will be kept. A year's worth of backups is of no use if your office burns down and they were sitting in the office's IT closet. At one employer, we kept the last full backup and the last day's incremental on site; they were then duplicated to tape and stored off site. Regardless of the strategy you choose to implement, your backups are only as good as your ability to restore data from them. You should periodically test your backups to make sure you can restore your files. To run a backup job on demand, enter run in bconsole. You'll be prompted with a menu to select one of the currently configured jobs. You'll then be presented with the job's options, such as what level of backup will be performed (full, incremental, or differential), its priority, and when it will run. You can type yes or no to accept or cancel it, or mod to modify a parameter. Once accepted, the job will be queued and assigned a job ID. To restore files from a backup, use the restore command. You'll be presented with a list of options allowing you to specify which backup the desired files will be retrieved from. Depending on your selection, the prompts will be different. Bacula's prompts are rather clear, so read them carefully and they will guide you through the process. Apart from the run and restore commands, another useful command is status. It allows you to see the current status of the Bacula components, whether there are any jobs currently running, and which jobs have completed. A full list of commands can be retrieved by typing help in bconsole. See also For more information on working with Bacula, refer to the following resources: Bacula documentation (http://blog.bacula.org/documentation/) How to use Bacula on CentOS 7 (http://www.digitalocean.com/community/tutorial_series/how-to-use-bacula-on-centos-7) Bacula Web (a web-based reporting and monitoring tool for Bacula) (http://www.bacula-web.org/) Summary In this article, we discussed how we can set up a backup server using Bacula and how to configure other systems on our network to back up our data to it. Resources for Article: Further resources on this subject: Jenkins 2.0: The impetus for DevOps Movement [article] Gearing Up for Bootstrap 4 [article] Introducing Penetration Testing [article]

article-image-virtualizing-hosts-and-applications
Packt
20 May 2016
19 min read
Save for later

Virtualizing Hosts and Applications

Packt
20 May 2016
19 min read
In this article by Jay LaCroix, the author of the book Mastering Ubuntu Server, you will see that there have been a great number of advancements in the IT space in the last several decades, and that a few technologies have come along that have truly revolutionized the technology industry. The author is sure few would argue that the Internet itself is by far the most revolutionary technology to come around, but another technology that has created a paradigm shift in IT is virtualization. It changed the way we maintain our data centers, allowing us to segregate workloads into many smaller machines run from a single server or hypervisor. Since Ubuntu features the latest advancements of the Linux kernel, virtualization is actually built right in. After installing just a few packages, we can create virtual machines on our Ubuntu Server installation without the need for a pricey license agreement or support contract. In this article, Jay will walk you through creating, running, and managing Docker containers. (For more resources related to this topic, see here.) Creating, running, and managing Docker containers Docker is a technology that seemed to come from nowhere and took the IT world by storm just a few years ago. The concept of containerization is not new, but Docker took this concept and made it very popular. The idea behind a container is that you can segregate an application you'd like to run from the rest of your system, keeping it sandboxed from the host operating system, while still being able to use the host's CPU and memory resources. Unlike a virtual machine, a container doesn't have a virtual CPU and memory of its own, as it shares resources with the host. This means that you will likely be able to run more containers on a server than virtual machines, since the resource utilization would be lower. In addition, you can store a container on a server and allow others within your organization to download a copy of it and run it locally. This is very useful for developers who are developing a new solution and would like others to test or run it. Since the Docker container contains everything the application needs to run, it's very unlikely that a systematic difference between one machine and another will cause the application to behave differently. The Docker server, also known as Hub, can be used remotely or locally. Normally, you'd pull down a container from the central Docker Hub instance, which makes various containers available, usually based on a Linux distribution or operating system. When you download it locally, you'll be able to install packages within the container or make changes to its files, just as if it were a virtual machine. When you finish setting up your application within the container, you can upload it back to Docker Hub for others to benefit from, or to your own local Hub instance for your local staff members to use. In some cases, developers even opt to make their software available to others in the form of containers rather than creating distribution-specific packages. Perhaps they find it easier to develop a container that can be used on every distribution than to build separate packages for individual distributions. Let's go ahead and get started. To set up your server to run or manage Docker containers, simply install the docker.io package: # apt-get install docker.io Yes, that's all there is to it. Installing Docker has definitely been the easiest thing we've done during this entire article.
Ubuntu includes Docker in its default repositories, so it's only a matter of installing this one package. You'll now have a new service running on your machine, simply titled docker. You can inspect it with the systemctl command, as you would any other: # systemctl status docker Now that Docker is installed and running, let's take it for a test drive. Having Docker installed gives us the docker command, which has various subcommands to perform different functions. Let's try out docker search: # docker search ubuntu What we're doing with this command is searching Docker Hub for available containers based on Ubuntu. You could search for containers based on other distributions, such as Fedora or CentOS, if you wanted. The command will return a list of Docker images available that meet your search criteria. The search command was run as root. This is required, unless you make your own user account a member of the docker group. I recommend you do that and then log out and log in again. That way, you won't need to use root anymore. From this point on, I won't suggest using root for the remaining Docker examples. It's up to you whether you want to set up your user account with the docker group or continue to run docker commands as root. To pull down a docker image for our use, we can use the docker pull command, along with one of the image names we saw in the output of our search command: docker pull ubuntu With this command, we're pulling down the latest Ubuntu container image available on Docker Hub. The image will now be stored locally, and we'll be able to create new containers from it. To create a new container from our downloaded image, this command will do the trick: docker run -it ubuntu:latest /bin/bash Once you run this command, you'll notice that your shell prompt immediately changes. You're now within a shell prompt from your container. From here, you can run commands you would normally run within a real Ubuntu machine, such as installing new packages, changing configuration files, and so on. Go ahead and play around with the container, and then we'll continue on with a bit more theory on how it actually works. There are some potentially confusing aspects of Docker we should get out of the way first before we continue with additional examples. The most likely thing to confuse newcomers to Docker is how containers are created and destroyed. When you execute the docker run command against an image you've downloaded, you're actually creating a container. Each time you use the docker run command, you're not resuming the last container, but creating a new one. To see this in action, run a container with the docker run command provided earlier, and then type exit. Run it again, and then type exit again. You'll notice that the prompt is different each time you run the command. After the root@ portion of the bash prompt within the container is a portion of a container ID. It'll be different each time you execute the docker run command, since you're creating a new container with a new ID each time. To see the number of containers on your server, execute the docker info command. The first line of the output will tell you how many containers you have on your system, which should be the number of times you've run the docker run command. 
To see a list of all of these containers, execute the docker ps -a command: docker ps -a The output will give you the container ID of each container, the image it was created from, the command being run, when the container was created, its status, and any ports you may have forwarded. The output will also display a randomly generated name for each container, and these names are usually quite wacky. As I was going through the process of creating containers while writing this section, the codenames for my containers were tender_cori, serene_mcnulty, and high_goldwasser. This is just one of the many quirks of Docker, and some of these can be quite hilarious. The important output of the docker ps -a command is the container ID, the command, and the status. The ID allows you to reference a specific container. The command lets you know what command was run. In our example, we executed /bin/bash when we started our containers. Using the ID, we can resume a container. Simply execute the docker start command with the container ID right after. Your command will end up looking similar to the following: docker start 353c6fe0be4d The output will simply return the ID of the container and then drop you back to your shell prompt. Not the shell prompt of your container, but that of your server. You might be wondering at this point, then, how you get back to the shell prompt for the container. We can use docker attach for that: docker attach 353c6fe0be4d You should now be within a shell prompt inside your container. If you remember from earlier, when you type exit to disconnect from your container, the container stops. If you'd like to exit the container without stopping it, press CTRL + P and then CTRL + Q on your keyboard. You'll return to your main shell prompt, but the container will still be running. You can see this for yourself by checking the status of your containers with the docker ps -a command. However, while these keyboard shortcuts work to get you out of the container, it's important to understand what a container is and what it isn't. A container is not a service running in the background, at least not inherently. A container is a collection of namespaces, such as a namespace for its filesystem or users. When you disconnect without a process running within the container, there's no reason for it to run, since its namespace is empty. Thus, it stops. If you'd like to run a container in a way that is similar to a service (it keeps running in the background), you would want to run the container in detached mode. Basically, this is a way of telling your container, "run this process, and don't stop running it until I tell you to." Here's an example of creating a container and running it in detached mode: docker run -dit ubuntu /bin/bash Normally, we use the -it options to create a container. This is what we used a few pages back. The -i option triggers interactive mode, while the -t option gives us a pseudo-TTY. At the end of the command, we tell the container to run the Bash shell. The -d option runs the container in the background. It may seem relatively useless to have another Bash shell running in the background that isn't actually performing a task. But these are just simple examples to help you get the hang of Docker. A more common use case may be to run a specific application. In fact, you can even run a website from a Docker container by installing and configuring Apache within the container, including a virtual host.
The question then becomes this: how do you access the container's instance of Apache within a web browser? The answer is port redirection, which Docker also supports. Let's give this a try. First, let's create a new container in detached mode. Let's also redirect port 80 within the container to port 8080 on the host: docker run -dit -p 8080:80 ubuntu /bin/bash The command will output a container ID. This ID will be much longer than you're accustomed to seeing, because when we run docker ps -a, it only shows shortened container IDs. You don't need to use the entire container ID when you attach; you can simply use part of it, so long as it's long enough to be different from other IDs—like this: docker attach dfb3e Here, I've attached to a container with an ID that begins with dfb3e. I'm now attached to a Bash shell within the container. Let's install Apache. We've done this before, but to keep it simple, just install the apache2 package within your container; we don't need to worry about configuring the default sample web page or making it look nice. We just want to verify that it works. Apache should now be installed within the container. In my tests, the apache2 daemon wasn't automatically started as it would've been on a real server instance. Since the latest container available on Docker Hub for Ubuntu hasn't yet been upgraded to 16.04 at the time of writing this (it's currently 14.04), the systemctl command won't work, so we'll need to use the legacy start command for Apache: # /etc/init.d/apache2 start We can similarly check the status to make sure it's running: # /etc/init.d/apache2 status Apache should be running within the container. Now, press CTRL + P and then CTRL + Q to exit the container, but allow it to keep running in the background. You should be able to visit the sample Apache web page for the container by navigating to localhost:8080 in your web browser. You should see the default "It works!" page that comes with Apache. Congratulations, you're officially running an application within a container! Before we continue, think for a moment of all the use cases you can use Docker for. It may seem like a very simple concept (and it is), but it allows you to do some very powerful things. I'll give you a personal example. At a previous job, I worked with some embedded Linux software engineers, who each had their preferred Linux distribution to run on their workstation computers. Some preferred Ubuntu, others preferred Debian, and a few even ran Gentoo. For developers, this poses a problem—the build tools are different in each distribution, because they all ship different versions of all development packages. The application they developed was only known to compile in Debian, and newer versions of the GNU Compiler Collection (GCC) compiler posed a problem for the application. My solution was to provide each developer with a Docker container based on Debian, with all the build tools baked in that they needed to perform their job. At this point, it no longer mattered which distribution they ran on their workstations. The container was the same no matter what they were running. I'm sure there are some clever use cases you can come up with. Anyway, back to our Apache container: it's now running happily in the background, responding to HTTP requests over port 8080 on the host. But what should we do with it at this point? One thing we can do is create our own image from it. Before we do, we should configure Apache to automatically start when the container is started.
We'll do this a bit differently inside the container than we would on an actual Ubuntu server. Attach to the container, and open the /etc/bash.bashrc file in a text editor within the container. Add the following to the very end of the file: /etc/init.d/apache2 start Save the file, and exit your editor. Exit the container with the CTRL + P and CTRL + Q key combinations. We can now create a new image of the container with the docker commit command: docker commit <Container ID> ubuntu:apache-server This command will return to us the ID of our new image. To view all the Docker images available on our machine, we can run the docker images command to have Docker return a list. You should see the original Ubuntu image we downloaded, along with the one we just created. We'll first see a column for the repository the image came from. In our case, it's Ubuntu. Next, we can see the tag. Our original Ubuntu image (the one we used the docker pull command to download) has a tag of latest. We didn't specify that when we first downloaded it; it just defaulted to latest. In addition, we see an image ID for both, as well as the size. To create a new container from our new image, we just need to use docker run but specify the tag and name of our new image. Note that we may already have a container listening on port 8080, so this command may fail if that container hasn't been stopped: docker run -dit -p 8080:80 ubuntu:apache-server /bin/bash Speaking of stopping a container, I should probably show you how to do that as well. As you could probably guess, the command is docker stop followed by a container ID. This will send the SIGTERM signal to the container, followed by SIGKILL if it doesn't stop on its own after a delay: docker stop <Container ID> To remove a container, issue the docker rm command followed by a container ID. Normally, this will not remove a running container, but it will if you add the -f option. You can remove more than one docker container at a time by adding additional container IDs to the command, with a space separating each. Keep in mind that you'll lose any unsaved changes within your container if you haven't committed the container to an image yet: docker rm <Container ID> The docker rm command will not remove images. If you want to remove a docker image, use the docker rmi command followed by an image ID. You can run the docker images command to view the images stored on your server, so you can easily fetch the ID of the image you want to remove. You can also use the repository and tag name, such as ubuntu:apache-server, instead of the image ID. If the image is in use, you can force its removal with the -f option: docker rmi <Image ID> Before we conclude our look into Docker, there's another related concept you'll definitely want to check out: Dockerfiles. A Dockerfile is a neat way of automating the building of docker images, by creating a text file with a set of instructions for their creation. The easiest way to set up a Dockerfile is to create a directory, preferably with a descriptive name for the image you'd like to create (you can name it whatever you wish, though) and inside it create a file named Dockerfile.
Following is a sample—copy this text into your Dockerfile and we'll look at how it works:
FROM ubuntu
MAINTAINER Jay <jay@somewhere.net>
# Update the container's packages
RUN apt-get update; apt-get dist-upgrade -y
# Install apache2 and vim
RUN apt-get install -y apache2 vim
# Make Apache automatically start up
RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc
Let's go through this Dockerfile line by line to get a better understanding of what it's doing: FROM ubuntu We need an image to base our new image on, so we're using Ubuntu as a base. This will cause Docker to download the ubuntu:latest image from Docker Hub if we don't already have it downloaded: MAINTAINER Jay <jay@somewhere.net> Here, we're setting the maintainer of the image. Basically, we're declaring its author: # Update the container's packages Lines beginning with a hash symbol (#) are ignored, so we are able to create comments within the Dockerfile. This is recommended to give others a good idea of what your Dockerfile does: RUN apt-get update; apt-get dist-upgrade -y With the RUN command, we're telling Docker to run a specific command while the image is being created. In this case, we're updating the image's repository index and performing a full package update to ensure the resulting image is as fresh as can be. The -y option is provided to suppress any requests for confirmation while the command runs: RUN apt-get install -y apache2 vim Next, we're installing both apache2 and vim. The vim package isn't required, but I personally like to make sure all of my servers and containers have it installed. I mainly included it here to show you that you can install multiple packages in one line: RUN echo "/etc/init.d/apache2 start" >> /etc/bash.bashrc Earlier, we copied the startup command for the apache2 daemon into the /etc/bash.bashrc file. We're including that here so that we won't have to do this ourselves when containers are created from the image. To build the image, we can use the docker build command, which can be executed from within the directory that contains the Dockerfile. What follows is an example of using the docker build command to create an image tagged packt:apache-server (the trailing dot tells Docker to use the current directory as the build context): docker build -t packt:apache-server . Once you run this command, you'll see Docker create the image for you, running each of the commands you asked it to. The image will be set up just the way you like. Basically, we just automated the entire creation of the Apache container we used as an example in this section. Once this is complete, we can create a container from our new image: docker run -dit -p 8080:80 packt:apache-server /bin/bash Almost immediately after running the container, the sample Apache site will be available on the host. With a Dockerfile, you'll be able to automate the creation of your Docker images. There's much more you can do with Dockerfiles though; feel free to peruse Docker's official documentation to learn more. Summary In this article, we took a look at virtualization as well as containerization. We began by walking through the installation of KVM as well as all the configuration required to get our virtualization server up and running. We also took a look at Docker, which is a great way of virtualizing individual applications rather than entire servers. We installed Docker on our server, and we walked through managing containers by pulling down an image from Docker Hub, customizing our own images, and creating Dockerfiles to automate the deployment of Docker images.
We also went over many of the popular Docker commands to manage our containers. Resources for Article: Further resources on this subject: Configuring and Administering the Ubuntu Server [article] Network Based Ubuntu Installations [article] Making the most of Ubuntu through Windows Proxies [article]
article-image-understanding-drivers
Packt
04 May 2016
7 min read
Save for later

Understanding Drivers

Packt
04 May 2016
7 min read
In this article by Jeff Stokes and Manuel Singer, authors of the book Mastering the Microsoft Deployment Toolkit 2013, we will discuss how to utilize the Microsoft Deployment Toolkit (MDT) to turn the complex world of device drivers into a much more manageable experience. We will focus on how drivers get installed via MDT, how to specifically control the drivers that get installed, and general best practices around proper driver management. We will cover the following topics in this article: Understanding offline servicing, and The MDT method of driver detection and injection (For more resources related to this topic, see here.) Understanding offline servicing Those of us who created images for the deployment of Windows XP were often met with an enormous challenge in dealing with drivers for many different models of hardware. We were already forced to create separate images for different hardware abstraction layer (HAL) families. Additionally, in order to deal with different hardware models within the same HAL family, the standard practice was usually to have a folder called C:\Drivers, which contained a copy of every possible driver that could be required by this image for all of the different hardware models it would be installed on. There was an OemPnPDriversPath entry in the registry that individually listed each of the driver paths (subfolders under the C:\Drivers directory) for the Windows Plug and Play process to locate and install the driver. As you can imagine, this was not a very efficient way to manage drivers. One reason is that every driver for every machine was staged in the image, causing the image size to grow. Another reason is that we were relying on Plug and Play to figure out the right driver to install, which gave us less control over the driver that actually got installed, based on a driver-ranking process. Fast forward to Windows Vista and current versions of Windows, and we can now utilize the magic of offline servicing to inject drivers into our Windows Imaging Format (WIM) image as it is getting deployed. With this in mind, consider the concept of having your customized Windows image created through your reference image build process, but containing no drivers. Now, when we deploy this image, we can utilize a process to detect all the hardware in the target machine, and then grab only the correct drivers that we need for this particular machine. Then, we can utilize Deployment Image Servicing and Management (DISM) to inject them into our WIM before the WIM actually gets installed, therefore making the drivers available to be installed as Windows is installed on this machine. MDT is doing just that. The MDT method of driver detection and injection When we boot a target machine via our Lite Touch media, one of the initial task sequence steps will enumerate (via PnpEnum) all the PNP IDs for every device in the machine. Then, as part of the inject drivers task sequence step, we will search all of our Out-of-Box driver INF files to find the matching drivers, and MDT will utilize DISM to inject these drivers offline into the WIM. Note that, by default, we will be searching our entire Out-of-Box repository and letting PnP figure things out. We can force MDT to only choose from drivers that we specify, therefore gaining strict control over which drivers actually get installed. The preceding scenario indicates that this whole process hinges on the fact that we are searching through driver INF files to find matching PNP IDs in order to correctly detect and install the correct driver.
This brings up a concern: what if the driver does not ship with an INF file, but instead has to be installed via an EXE program? In that scenario, we cannot use the driver injection process. Instead, we treat that driver as an application in MDT: we add a new application using the EXE as the source files, specify the command-line syntax to launch the driver installer silently, and then add this application as a task sequence step. I will later demonstrate how to use conditional statements in your task sequence so that the driver program is only installed on the model it applies to, keeping the task sequence flexible enough to install correctly on any hardware.

Populating the Out-of-Box Drivers node of MDT

The first step is to visit the OEM manufacturer's website and download the device drivers for each model of machine that we will be deploying to. Note that many OEMs now offer a deployment-specific download, or CAB file, that packs all the drivers for a particular model into one single archive. This saves you the hassle of downloading and extracting each individual device driver separately (NIC, video, audio, and so on). Once you have downloaded the necessary drivers, store them in a folder for each specific model and extract them there, as they must be extracted before you import them into MDT.

Next, we want to create a folder structure under the Out-of-Box Drivers node in MDT to organize our drivers. This not only makes the drivers easier to manage as new versions are released by the OEM; if we name the folders to match the model names exactly, we can later introduce logic that limits the Plug and Play search to the exact folder containing the correct drivers for our particular hardware model. As we will have different drivers for x86 and x64, as well as for different operating systems, a general best practice is to make the operating system and architecture the first level of the folder hierarchy. Perform the following steps to populate the node in MDT:

1. To create the folder structure, simply click on the Out-of-Box Drivers node and choose New Folder.
2. Next, create a folder for each model that we will be deploying to.
3. To ensure that you are using the correct model name, query WMI to see what the hardware actually returns as the model name (a sample command is shown after these steps).
4. Once you have your folder structure created, you are ready to import the drivers. Right-click on the model folder and choose Import Drivers, then point the driver source directory to the folder where you have downloaded and extracted the OEM drivers.

There is a checkbox stating Import drivers even if they are duplicates of an existing driver. It exists because MDT uses single-instance storage for drivers in the deployment share: if you import multiple copies of a driver into different folders, MDT stores only one copy of the file in the actual filesystem by default, and the folder structure you see within the MDT Workbench points the duplicates to the same file so that space is not wasted.

As new drivers are released by the OEM, you can simply replace them by going to the folder for that model, removing the old drivers, and importing the new ones. The next time you install your WIM on this model, the new drivers will be used, and you won't have to make any modifications or updates to the WIM itself.
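As a side note for step 3, one common way to see the exact model string that the hardware reports is to query WMI on a reference machine, for example:

wmic computersystem get model

or, in PowerShell:

Get-CimInstance Win32_ComputerSystem | Select-Object Model

The string returned here is the same Model value that MDT's gather step and any later selection logic will see, so it is worth copying it verbatim into your Out-of-Box Drivers folder names.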
Summary

In this article, we looked at offline servicing, the MDT method of driver detection and injection, and how to populate the Out-of-Box Drivers node of MDT. For more information related to MDT, refer to the following book by Packt Publishing: Mastering the Microsoft Deployment Toolkit 2013: https://www.packtpub.com/hardware-and-creative/mastering-microsoft-deployment-toolkit-2013

Configuring Extra Features

Packt
27 Jan 2016
10 min read
In this article by Piotr J Kula, the author of the book Raspberry Pi 2 Server Essentials, you will learn how to keep the Pi up-to-date and use the extra features of the GPU. There are some extra features on the Broadcom chip that can be used out of the box or activated using extra licenses that can be purchased. Many of these features are undocumented and were found by developers or hobbyists working on various projects for the Pi.

Updating the Raspberry Pi

The Pi essentially has three software layers: the closed source GPU boot process, the boot loader (also known as the firmware), and the operating system. As of writing this book, we cannot update the GPU code, but maybe one day Broadcom or hardware hackers will tell us how to do this. This leaves us with the firmware and the operating system packages. Broadcom releases regular firmware updates as precompiled binaries to the Raspberry Pi Foundation, which then releases them to the public. The Foundation and other community members work on Raspbian and release updates via the aptitude repository; this is where we get all our wonderful applications from. It is essential to keep both the firmware and the packages up-to-date so that you can benefit from bug fixes and new or improved functionality from the Broadcom chip.

The Raspberry Pi 2 uses ARMv7, as opposed to the Pi 1, which uses ARMv6. It is recommended to use the latest Raspbian release to benefit from the speed increase. Thanks to the ARMv7 upgrade, it now supports standard Debian hard-float packages and other ARMv7 operating systems, such as Windows IoT Core.

Updating firmware

Updating the firmware used to be quite an involved process, but thanks to a GitHub user who goes by the alias of Hexxeh, there is now code that does this for us automatically. You don't need to run it as often as apt-get update, but you may need to run it if you constantly upgrade the operating system, when advised, or when you are experiencing problems with new features or instability. rpi-update is now included as standard in the Raspbian image, and we can simply run the following:

sudo rpi-update

After the process is complete, you will need to restart the Pi in order to load the new firmware.

Updating packages

Keeping the Raspbian packages up-to-date is also very important, as many changes work together with fixes published in the firmware. First, we update the source list, which downloads a list of packages and their versions to the aptitude cache. Then, we run the upgrade command, which compares the installed packages and their dependencies, and then downloads and updates them accordingly:

sudo apt-get update
sudo apt-get upgrade

If there are major changes in the libraries, updating some packages might break your existing custom code or applications. If you need to change anything in your code before updating, you should always check the release notes.

Updating the distribution

We may find that running the firmware update process and package updates does not always solve a particular problem. If you use a release such as debian-armhf, you can use the following commands without the need to set everything up again:

sudo apt-get dist-upgrade
sudo apt-get install raspberrypi-ui-mods
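Although not strictly required, it is common practice to note which kernel and release you are running before and after these upgrades, so you can tell exactly what changed; for example:

uname -r
cat /etc/os-release

Comparing the output from before and after makes it obvious whether the dist-upgrade actually moved you to a newer release.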
Outcomes

If you have a long-term or production project that will be running independently, it is not a good idea to log in from time to time just to update the packages. With Linux, it is acceptable to configure your system and let it run for long periods of time without any software maintenance. You should, however, be aware of critical updates and evaluate whether you need to install them. For example, consider the Heartbleed vulnerability in OpenSSL: if you had a Pi directly connected to the public internet, that would require instant action.

Windows users are conditioned to update frequently, and it is very rare that something goes wrong. On Linux, however, running updates will update your software and operating system components, which could cause incompatibilities with your own custom software. For example, say you used an open source CMS web application to host some of your articles. It was specifically designed for PHP version x, but upgrading to version y also requires the entire CMS system to be upgraded. Less popular open source projects may take several months before the code gets refactored to work with the latest PHP version, so unknowingly upgrading to the latest PHP may completely or partially break your CMS. One way to work around this is to clone your SD card and perform the updates on one card; if any issues are encountered, you can easily go back and use the other SD card.

A distribution called CentOS tries to deal with this problem by releasing updates only once a year. This is deliberate, to make sure that everybody has enough time to test their software before doing a full update with minimal or even no breaking changes. Unfortunately, CentOS has no ARM support, but you can follow the same guideline by updating packages only when you need them.

Hardware watchdog

A hardware watchdog is a digital clock that needs to be regularly restarted before it reaches a certain time. Just as in the TV series LOST, there is a dead man's switch hidden on the island that needs to be pressed at regular intervals; otherwise, an unknown event will begin. In the case of the Broadcom GPU, if the switch is not pressed, it means that the system has stopped responding, and the reaction event is used to restart the Raspberry Pi and reload the operating system, with the expectation that this will, at least temporarily, resolve the issue.

Raspbian includes a kernel module, disabled by default, that deals with the watchdog hardware. A configurable daemon runs on the software layer and sends regular events (like pressing that button), referred to as a heartbeat, to the watchdog via the kernel module.

Enabling the watchdog and daemon

To get everything up and running, we need to do a few things, as follows:

1. Load the kernel module and open the modules file from the console:

sudo modprobe bcm2708_wdog
sudo vi /etc/modules

2. Add the line bcm2708_wdog to the file, then save and exit by pressing ESC and typing :wq.
3. Next, we need to install the daemon that will send the heartbeat signals every 10 seconds. We use chkconfig to add it to the startup process and then enable it, as follows:

sudo apt-get install watchdog chkconfig
sudo chkconfig --add watchdog
sudo chkconfig watchdog on

4. We can now configure the daemon to do simple checks. Edit the following file:

sudo vi /etc/watchdog.conf

5. Uncomment the max-load-1 = 24 and watchdog-device lines by removing the hash (#) character (the resulting lines are shown after these steps). A max load of 24 means that there is so much queued work that it would take 24 Pis to complete it in 1 minute. In normal usage this will never happen, and it really only occurs when the Pi is hung.

You can now start the watchdog with that configuration.
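For reference, once uncommented, the two relevant lines in /etc/watchdog.conf should look roughly like this (the device path shown is the usual default; check your own file):

max-load-1 = 24
watchdog-device = /dev/watchdog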
Each time you change something, you need to restart the watchdog:

sudo /etc/init.d/watchdog start

There are some other examples in the configuration file that you may find of interest.

Testing the watchdog

In Linux, you can easily run a command in the background as a new process by using the & character on the command line. By exploiting this feature together with some anonymous functions, we can issue a very crude but effective system halt. This is a quick way to test whether the watchdog daemon is working correctly; it should not be used as a normal way to halt the Pi. It is known as a fork bomb, and many operating systems are susceptible to it. The random-looking series of characters is actually an anonymous function that keeps spawning new copies of itself in an endless and uncontrollable loop. It most likely gets its name because once it starts, it cannot be stopped: even if you kill the original process, it has already created several new processes that would also need to be killed. Eventually, it bombs the system into a critical state by exhausting the process table and other system resources. Type these characters into the command line and press Enter:

:(){ :|:& };:

After you press Enter, the Pi will restart after about 30 seconds, but it might take up to a minute.

Enabling extra decoders

The Broadcom chip actually has extra hardware for encoding and decoding a few other well-known formats. The Raspberry Pi Foundation did not include these licenses because they wanted to keep costs down to a minimum, but they did include the H.264 license. This allows you to watch HD media on your TV, use the webcam module, or transcode media files. For those who would like to use the extra encoders/decoders, the Foundation provides a way to buy separate licenses. At the time of writing this book, the only project to use these hardware codecs was the OMXPlayer project maintained by XBMC. The latest Raspbian package has the OMX package included.

Buying licenses

You can go to http://www.raspberrypi.com/license-keys/ to buy licenses, each of which can be used on one device. Follow the instructions on the website to get your license key.

MPEG-2

This is also known as H.222/H.262. It is the standard for video and audio encoding that is widely used by digital, cable, and satellite TV, and it is also the format used to store video and audio data on DVDs. This means that watching DVDs from a USB DVD-ROM drive should be possible without any CPU overhead whatsoever. Unfortunately, there is no package that uses this hardware directly yet, but hopefully, in the near future, it will be as simple as buying this license to watch DVDs or video streams in this format with ease.

VC-1

VC-1 is formally known as SMPTE 421M and was developed by Microsoft. Today, it is the official video format used by the Xbox and the Silverlight framework, and it is supported by HD-DVD and Blu-ray players. The main use for this codec is watching Silverlight-packaged media, which has never become very popular. This codec may also need to be purchased if you would like to stream video using the Windows 10 IoT API.
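Once you receive your keys, they are typically enabled by adding lines such as the following to /boot/config.txt and rebooting; the values here are placeholders, as each key is generated for your board's serial number:

decode_MPG2=0x12345678
decode_WVC1=0x12345678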
Hardware monitoring

The Raspberry Pi Foundation provides a tool called vcgencmd, which gives you detailed data about the various pieces of hardware in the Pi. This tool is updated from time to time and can be used to log the temperature of the GPU, voltage levels, processor frequencies, and so on.

To see a list of supported commands, we type this in the console:

vcgencmd commands

As newer versions are released, more commands will become available here. To check the current GPU temperature, we use the following command:

vcgencmd measure_temp

We can use the following commands to check how the RAM is split between the CPU and GPU:

vcgencmd get_mem arm
vcgencmd get_mem gpu

To check the firmware version, we can use the following command:

vcgencmd version

The output of all these commands is simple text that can be parsed and displayed on a website or stored in a database.

Summary

This article's intention was to teach you how hardware relies on good software but, most importantly, to show you how to leverage the hardware using ready-made software packages. For reference, you can go to the following link: http://www.elinux.org/RPI_vcgencmd_usage