
Tech News


Introduction to props in React from ui.dev's RSS Feed

Matthew Emerick
21 Aug 2020
3 min read
Whenever you have a system that is reliant upon composition, it's critical that each piece of that system has an interface for accepting data from outside of itself. You can see this clearly illustrated by looking at something you're already familiar with: functions.

```jsx
function getProfilePic (username) {
  return 'https://photo.fb.com/' + username
}

function getProfileLink (username) {
  return 'https://www.fb.com/' + username
}

function getAvatarInfo (username) {
  return {
    pic: getProfilePic(username),
    link: getProfileLink(username)
  }
}

getAvatarInfo('tylermcginnis')
```

We've seen this code before as our very soft introduction to function composition. Without the ability to pass data, in this case username, to each of our functions, our composition would break down. Similarly, because React relies heavily on composition, there needs to exist a way to pass data into components. This brings us to our next important React concept, props.

Props are to components what arguments are to functions. Again, the same intuition you have about functions and passing arguments to functions can be directly applied to components and passing props to components. There are two parts to understanding how props work. First is how to pass data into components, and second is accessing the data once it's been passed in.

Passing data to a component

This one should feel natural because you've been doing something similar ever since you learned HTML. You pass data to a React component the same way you'd set an attribute on an HTML element.

```jsx
<img src='' />

<Hello name='Tyler' />
```

In the example above, we're passing in a name prop to the Hello component.

Accessing props

Now the next question is, how do you access the props that are being passed to a component? In a class component, you can get access to props from the props key on the component's instance (this).

```jsx
class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.name}</h1>
    )
  }
}
```

Each prop that is passed to a component is added as a key on this.props. If no props are passed to a component, this.props will be an empty object.

```jsx
class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.first} {this.props.last}</h1>
    )
  }
}

<Hello first='Tyler' last='McGinnis' />
```

It's important to note that we're not limited in what we can pass as props to components. Just like we can pass functions as arguments to other functions, we're also able to pass components (or really anything we want) as props to other components.

```jsx
<Profile
  username='tylermcginnis'
  authed={true}
  logout={() => handleLogout()}
  header={<h1>👋</h1>}
/>
```

If you pass a prop without a value, that value will be set to true. These are equivalent.

```jsx
<Profile authed={true} />

<Profile authed />
```

Kali Linux 2020.3 Release (ZSH, Win-Kex, HiDPI & Bluetooth Arsenal) from Kali Linux

Matthew Emerick
18 Aug 2020
12 min read
It's that time of year again, time for another Kali Linux release! Quarter #3 – Kali Linux 2020.3. This release has various impressive updates, all of which are ready for immediate download or updating. A quick overview of what's new since the last release in May 2020:

- New Shell – Starting the process to switch from "Bash" to "ZSH"
- The release of "Win-Kex" – Get ready WSL2
- Automating HiDPI support – Easy switching mode
- Tool Icons – Every default tool now has its own unique icon
- Bluetooth Arsenal – New set of tools for Kali NetHunter
- Nokia Support – New devices for Kali NetHunter
- Setup Process – No more missing network repositories and quicker installs

New Shell (Is Coming)

Most people who use Kali Linux (we hope) are very experienced Linux users. As a result, they feel very comfortable around the command line. We understand that "shells" are a very personal and precious thing to everyone (local or remote!), as that is how most people interact with Kali Linux – to the point where lots of experienced users only use a "GUI" to spin up multiple terminals. By default, Kali Linux has always used "bash" (aka "Bourne-Again SHell") as the default shell when you open up a terminal or console. Any seasoned Kali user would know the prompt kali@kali:~$ (or root@kali:~# for the older users!) very well!

Today, we are announcing the plan to switch over to the ZSH shell. This is currently scheduled to be the default shell in 2020.4 (for this 2020.3 release, bash will still be the default). If you have a fresh default install of Kali Linux 2020.3, you should have ZSH already installed (if not, do sudo apt install -y zsh zsh-syntax-highlighting zsh-autosuggestions), ready for a try. However, if you installed an earlier version of Kali Linux and have upgraded to 2020.3, your user will be lacking the default ZSH configuration that we cooked with lots of love. So, for upgrade users only, make sure to copy the configuration file:

```
kali@kali:~$ cp /etc/skel/.zshrc ~/
kali@kali:~$
```

Then all you need to do is switch to ZSH:

```
kali@kali:~$ zsh
┌──(kali㉿kali)-[~]
└─$
```

If you like what you see, you can set ZSH as your default (replacing bash) by doing chsh -s /bin/zsh, which is what we will be doing in 2020.4.

We wanted to give the community notice before this switch happens. This is a very large change (some may argue larger than the Gnome to Xfce switch last year). We are also looking for feedback. We hope we have the right balance of design and functionality, but we know these typically don't get done perfectly the first time. And we don't want to overload the default shell with too many features, as lower-powered devices will then struggle, or it may be hard on the eyes to read. ZSH has been something we have wanted to do for a long time (even before the switch over to Xfce!). We will be doing extensive testing during this next cycle, so we reserve the right to delay the default change, or change direction altogether. Again, we encourage you to provide feedback on this process. There is no way we can cover every use case on our own, so your help is important.

Q.) Why did you make the switch? What's wrong with bash?
A.) You can do a lot of advanced things with bash, and customize it to do even more, but ZSH allows you to do even more. This was one really large selling point.

Q.) Why did you pick ZSH and not fish?
A.) In the discussion of switching shells, one of the options that came up is Fish (Friendly Interactive SHell).
Fish is a nice shell (probably nicer than ZSH), but realistically it was not a real consideration because it is not POSIX compatible. This would cause a lot of issues, as common one-liners just won't work.

Q.) Are you going to use any ZSH frameworks (e.g. Oh-My-ZSH or Prezto)?
A.) At this point in time, by default, no. The weight of these would not be workable for lower-powered devices. You can still install them yourself afterwards (as many of our team do).

Win-KeX

Having Kali Linux on "Windows Subsystem for Linux" (WSL) is something we have been taking advantage of since it came out. With the release of WSLv2, the overall functionality and user experience improved dramatically. Today, the experience is improving once more with the introduction of Win-KeX (Windows + Kali Desktop EXperience). After installing it, typing in kex, or clicking on the button, Win-KeX will give you a persistent-session GUI.

After getting WSL installed (there are countless guides online, or you can follow ours), you can install Win-KeX by doing the following:

```
sudo apt update && sudo apt install -y kali-win-kex
```

Afterwards, if you want to make a shortcut, follow our guide, or you can just type in kex!

On the subject of WSL (and this is true for Docker and AWS EC2), something we have seen a bit is that after getting a desktop environment, people have noticed the tools are not "there". This is because they are not included by default, to keep the image as small as possible. You either need to manually install them one by one, or grab the default metapackage to get all the tools out of the box:

```
sudo apt install -y kali-linux-default
```

Please note, Win-KeX does require WSL v2 on x64, as it's not compatible with WSL v1 or arm64. For more information, please see our documentation page.

Automating HiDPI

HiDPI displays are getting more and more common. Unfortunately, Linux support out of the box hasn't been great (older Linux users may remember a time when this was very common for a lot of hardware changes). This means that after doing a fresh install, there is a bit of tweaking required to get it working, otherwise the font/text/display may be too small to read. We have had a guide out explaining the process required to get it working, but the process before was a little "fiddly". We wanted to do better. So we made kali-hidpi-mode. Now, either typing in kali-hidpi-mode or selecting it from the menu (as shown below) should automate switching between HiDPI modes.

Tool Icons

Over the last few releases, we have been showing the progress on getting more themed icons for tools. We can now say that if you use the default tool listing (kali-linux-default), every tool in the menu (and a few extra ones!) should have its own icon. We will be working on adding missing tools to the menu (and creating icons for them) over the next few releases of Kali, as well as expanding into the kali-linux-large metapackage (then kali-tools-everything). We also have plans for these icons outside of the menu – more information in an upcoming release!

Kali NetHunter Bluetooth Arsenal

We are proud to introduce Bluetooth Arsenal by yesimxev from the Kali NetHunter team. It combines a set of Bluetooth tools in the Kali NetHunter app with some pre-configured workflows and exciting use cases. You can use your external adapter for reconnaissance, spoofing, listening to and injecting audio into various devices, including speakers, headsets, watches, or even cars.
Please note that RFCOMM and RFCOMM tty will need to be enabled in kernels from now on to support some of the tools.

Kali NetHunter for Nokia Phones

Kali NetHunter now supports the Nokia 3.1 and Nokia 6.1 phones, thanks to yesimxev. Images are available on our download site. Please note that those images contain a "minimal Kali rootfs" due to technical reasons, but you can easily install all the default tools via sudo apt install -y kali-linux-default.

Setup Process

The full installer image always had all the packages required for an offline installation, but if you installed a Kali Linux system with this image and without disabling the network, the installer would automatically run dist-upgrade during the install. This is done to make sure that you have the latest packages on first boot. And that step can take a very long time, especially a few months after a release when lots of updates have accumulated. Starting with 2020.3, we disabled the network mirror in the full installer, so that you always get the same installation speed, and the same packages and versions for that release – just make sure to update after installing!

Whilst we were at it, we fixed another related issue. If you didn't have network access (either voluntarily or otherwise) during installation, you would get an empty network repository (/etc/apt/sources.list). This means you would not be able to use apt to install additional packages. While there might be some users who will never have network, we believe that it's best to actually configure that file in all cases. So that's what we did. By default, any fresh installs going forward after 2020.3 will have network repositories pre-defined.

ARM Device Updates

We have (along with the work of Francisco Jose Rodríguez Martos, who did a lot of the back-end changes) refreshed our build scripts for our ARM devices. We pre-generated various different ARM images (as of 2020.3 – 19 images) to allow for quick download and deployment, but we have build scripts for more (as of 2020.3 – 39 images). If your device is not one of the ones that we release images for, you'll need to use the scripts to self-generate the image.

Notable changes in ARM's 2020.3 release:

- All of the ARM images come with the kali-linux-default metapackage installed, bringing them in line with the rest of our releases, so more tools are available when you first boot
- We have reduced the size of all our ARM images that are created, so downloads should be smaller. However, you will still need to use at least a 16GB sdcard/USB drive/eMMC
- Pinebook and Pinebook Pro images can now be used on either sdcard or eMMC
- The Pinebook image now has the WiFi driver built during image creation, instead of on first boot; this should speed up first boot time massively
- The Pinebook Pro has a change from the upstream firmware, which changes ccode=DE to ccode=all – this allows access to more 2.4GHz and 5GHz channels
- The 64-bit RaspberryPi images now have the RaspberryPi userland utilities built during image creation, so vcgencmd and various other utilities that were previously only available on the 32-bit image are now usable on 64-bit as well
- The ODROID-C2 image now uses the Kali kernel, instead of a vendor-provided one.
  This means that in the future, an apt dist-upgrade will get you kernel updates instead of waiting for a new Kali release
- The /etc/fstab file now includes the root partition via UUID; this should make it easier when trying to use a USB drive instead of an sdcard on devices that support it

A few things which are works in progress:

- RaspberryPi images are using 4.19 kernels. We would like to move to 5.4; however, nexmon isn't working properly with it (the new kernel requires firmware version >= 7.45.202, for which no nexmon patch exists yet)
- There is a new USBArmory Mk2 build script. We don't have the hardware to test it, however, so we are looking for feedback from community members who are able to test it out
- The Veyron image will be released at a later date due to kernel issues that haven't yet been tracked down

Desktop Environment

As there has been a minor update to GNOME, we have been taking advantage of some of the new settings:

- GNOME's file manager nautilus has a new theme
- GNOME's system-monitor now matches the colors and also has stacked CPU charts
- Improved design for "nested headerbars" (for example, in the Settings window, where the left headerbar is joined with the side navbar)

Community Shoutouts

A new section in the release notes: community shoutouts. These are people from the public who have helped Kali and the team over the last release, and we want to praise them for their work (we like to give credit where due!):

- Crash, who has been helping the community for some time now, thank you!
- FrangaL, who has been doing some great work with Kali Linux ARM, thank you!

Anyone can help out, anyone can get involved!

Download Kali Linux 2020.3

Fresh Images

So what are you waiting for? Start downloading already! Seasoned Kali Linux users are already aware of this, but for the ones who are not, we do also produce weekly builds that you can use as well. If you can't wait for our next release and you want the latest packages when you download the image, you can just use the weekly image instead. This way you'll have fewer updates to do. Just know these are automated builds that we don't QA like we do our standard release images. But we gladly take bug reports about those images, because we want any issues to be fixed before our next release.

Existing Upgrades

If you already have an existing Kali Linux installation, remember you can always do a quick update:

```
kali@kali:~$ echo "deb http://http.kali.org/kali kali-rolling main non-free contrib" | sudo tee /etc/apt/sources.list
kali@kali:~$
kali@kali:~$ sudo apt update && sudo apt -y full-upgrade
kali@kali:~$
kali@kali:~$ [ -f /var/run/reboot-required ] && sudo reboot -f
kali@kali:~$
```

You should now be on Kali Linux 2020.3. We can do a quick check by doing:

```
kali@kali:~$ grep VERSION /etc/os-release
VERSION="2020.3"
VERSION_ID="2020.3"
VERSION_CODENAME="kali-rolling"
kali@kali:~$
kali@kali:~$ uname -v
#1 SMP Debian 5.7.6-1kali2 (2020-07-01)
kali@kali:~$
kali@kali:~$ uname -r
5.7.0-kali1-amd64
kali@kali:~$
```

NOTE: The output of uname -r may be different depending on the system architecture.

As always, should you come across any bugs in Kali, please submit a report on our bug tracker. We'll never be able to fix what we don't know is broken! And Twitter is not a Bug Tracker!

React Newsletter #226 from ui.dev's RSS Feed

Matthew Emerick
18 Aug 2020
2 min read
News

Storybook 6.0 is released
Storybook 6.0 is a lot easier to set up and also incorporates many best practices for component-driven development. Other highlights include:
- Zero-configuration setup
- Next-gen, dynamic story format
- Live-edit component examples
- The ability to combine multiple storybooks

Rome: A new toolchain for JavaScript
Sebastian McKenzie announced Rome's first beta release last week, and called it "the spiritual successor of Babel" (he's allowed to say that because he created Babel). "Rome is designed to replace Babel, ESLint, webpack, Prettier, Jest, and others." We wrote more in depth about Rome in yesterday's issue of Bytes.

Articles

Understanding React's useRef Hook
In this article you'll learn everything you'd ever want to know about React's useRef Hook, including, but not limited to, how you can recreate it with useState – because, why not?

A Guide to Commonly Used React Component Libraries
This guide gives some helpful background info and the pros and cons of various well-known component libraries.

Tutorials

Build a Landing Page with Chakra UI - Part 1
This tutorial series will teach you how to build a responsive landing page in React using the Chakra UI design system. This first part goes over how to set up your landing page and build the hero section.

How to setup HTTPS locally with create-react-app
This tutorial goes over how to serve a local React app via HTTPS. You'll be setting up HTTPS in development for a create-react-app with an SSL certificate.

Sponsor

React developers are in demand on Vettery
Vettery is an online hiring marketplace that's changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today. Get started today.

Projects

Flume
An open-source library that provides a node editor for visual programming and a runtime engine for executing logic in any JS environment (also portable to non-JS).

Vite + React + Tailwind CSS starter
This is a simple setup using Vite, React and Tailwind for faster prototyping.

Videos

How the React Native Bridge works
This short video from Jimmy Cook gives a helpful deep dive into the React Native bridge and how communication between the native side and the JavaScript side will change in the future.

Understanding React's useRef Hook from ui.dev's RSS Feed

Matthew Emerick
17 Aug 2020
6 min read
The marketing pitch for useState is that it allows you to add state to function components. This is true, but we can break it down even further. Fundamentally, the useState Hook gives you two things - a value that will persist across renders and an API to update that value and trigger a re-render. const [value, setValueAndReRender] = React.useState( 'initial value' ) When building UI, both are necessary. Without the ability to persist the value across renders, you’d lose the ability to have dynamic data in your app. Without the ability to update the value and trigger a re-render, the UI would never update. Now, what if you had a use case where you weren’t dealing with any UI, so you didn’t care about re-rendering, but you did need to persist a value across renders? In this scenario, it’s like you need the half of useState that lets you persist a value across renders but not the other half that triggers a re-render — Something like this. function usePersistentValue (initialValue) { return React.useState({ current: initialValue })[0] } Alright, stick with me here. Remember, useState returns an array with the first element being a value that will persist across renders and the second element being the updater function which will trigger a re-render. Since we only care about the first element, the value, we append [0] to the invocation. Now, whenever we invoke usePersistentValue, what we’ll get is an object with a current property that will persist across renders. If it’s still fuzzy, looking at an actual example may help. If you’re not familiar with the native browser APIs setInterval and clearInterval, you can read about them here before continuing on. Let’s say we were tasked to build an app that had a counter that incremented by 1 every second and a button to stop the counter. How would you approach this? Here’s what one implementation might look like. function Counter () { const [count, setCount] = React.useState(0) let id const clear = () => { window.clearInterval(id) } React.useEffect(() => { id = window.setInterval(() => { setCount(c => c + 1) }, 1000) return clear }, []) return ( <div> <h1>{count}</h1> <button onClick={clear}>Stop</button> </div> ) } 💻 Play with the code. id is created inside of useEffect but we need to access it inside of the clear event handler to stop the interval. To do that, we move the declaration of id up to the main scope and then initialize it with the id when the effect runs. All good, right? Sadly, no. The reason for this is because id doesn’t persist across renders. As soon as our count state variable changes, React will re-render Counter, re-declaring id setting it back to undefined. What we need is a way to persist the id across renders 😏. Luckily for us, we have our usePersistentValue Hook we created earlier. Let’s try it out. function usePersistentValue(initialValue) { return React.useState({ current: initialValue })[0] } function Counter() { const [count, setCount] = React.useState(0) const id = usePersistentValue(null) const clearInterval = () => { window.clearInterval(id.current) } React.useEffect(() => { id.current = window.setInterval(() => { setCount(c => c + 1) }, 1000) return clearInterval }, []) return ( <div> <h1>{count}</h1> <button onClick={clearInterval}>Stop</button> </div> ) } 💻 Play with the code. Admittedly, it’s a bit hacky but it gets the job done. Now instead of id being re-declared on every render, because it’s really a value coming from useState, React will persist it across renders. 
As you probably guessed by now, the ability to persist a value across renders without causing a re-render is so fundamental that React comes with a built-in Hook for it called useRef. It is, quite literally, the same as our usePersistentValue Hook that we created. To prove this, here’s the exact same code as before except with useRef instead of usePersistentValue. function Counter() { const [count, setCount] = React.useState(0) const id = React.useRef(null) const clearInterval = () => { window.clearInterval(id.current) } React.useEffect(() => { id.current = window.setInterval(() => { setCount(c => c + 1) }, 1000) return clearInterval }, []) return ( <div> <h1>{count}</h1> <button onClick={clearInterval}>Stop</button> </div> ) } 💻 Play with the code. useRef follows the same API we created earlier. It accepts an initial value as its first argument and it returns an object that has a current property (which will initially be set to whatever the initial value was). From there, anything you add to current will be persisted across renders. The most popular use case for useRef is getting access to DOM nodes. If you pass the value you get from useRef as a ref prop on any React element, React will set the current property to the corresponding DOM node. This allows you to do things like grab input values or set focus. function Form () { const nameRef = React.useRef() const emailRef = React.useRef() const passwordRef = React.useRef() const handleSubmit = e => { e.preventDefault() const name = nameRef.current.value const email = emailRef.current.value const password = passwordRef.current.value console.log(name, email, password) } return ( <React.Fragment> <label> Name: <input placeholder="name" type="text" ref={nameRef} /> </label> <label> Email: <input placeholder="email" type="text" ref={emailRef} /> </label> <label> Password: <input placeholder="password" type="text" ref={passwordRef} /> </label> <hr /> <button onClick={() => nameRef.current.focus()}> Focus Name Input </button> <button onClick={() => emailRef.current.focus()}> Focus Email Input </button> <button onClick={() => passwordRef.current.focus()}> Focus Password Input </button> <hr /> <button onClick={handleSubmit}>Submit</button> </React.Fragment> ) } 💻 Play with the code. If you want to add state to your component that persists across renders and can trigger a re-render when it’s updated, go with useState or useReducer. If you want to add state to your component that persists across renders but doesn’t trigger a re-render when it’s updated, go with useRef.

The History of R (updated for 2020) from Revolutions

Matthew Emerick
27 Jul 2020
1 min read
As an update to this post, here's a list of the major events in R history since its creation:

- 1992: R development begins as a research project in Auckland, NZ by Robert Gentleman and Ross Ihaka
- 1993: First binary versions of R published at Statlib
- 1995: R first distributed as open-source software, under GPL2 license
- 1997: R core group formed
- 1997: CRAN founded (by Kurt Hornik and Fritz Leisch)
- 1999: The R website, r-project.org, founded
- 1999: First in-person meeting of R Core team, at inaugural Directions in Statistical Computing conference, Vienna
- 2000: R 1.0.0 released (February 29)
- 2000: John Chambers, recipient of the 1998 ACM Software Systems Award for the S language, joins R Core
- 2001: R News founded (later to become the R Journal)
- 2003: R Foundation founded
- 2004: First UseR! conference (in Vienna)
- 2004: R 2.0.0 released
- 2009: First edition of the R Journal
- 2013: R 3.0.0 released
- 2015: R Consortium founded, with R Foundation participation
- 2016: New R logo adopted
- 2017: CRAN exceeds 10,000 published packages
- 2020: R 4.0.0 released

The presentation below (slides available here) also covers the history of R through 2020.

R 4.0.2 now available from Revolutions

Matthew Emerick
25 Jun 2020
1 min read
R 4.0.2 is now available for download for Windows, Mac and Linux platforms. This update addresses a few minor bugs present in the R 4.0.0 release, as well as a significant bug introduced in R 4.0.1 on the Windows platform. Compared to R 4.0.0, the R 4.0.2 update also improves the performance of the merge function, and adds an option to better handle zero-length arguments to the paste and paste0 functions. For the details on the changes in R 4.0.2, follow the link below, and visit your local CRAN mirror to download the update.

R-announce mailing list: R 4.0.2 is released
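To illustrate the zero-length handling mentioned above, here is a minimal sketch; it assumes the option being referred to is the recycle0 argument of paste()/paste0(), which controls whether a zero-length input collapses the whole result.

```r
# Assumed behaviour of the paste()/paste0() zero-length option (recycle0)
paste("id", character(0))                   # "id " -- zero-length argument treated as ""
paste("id", character(0), recycle0 = TRUE)  # character(0) -- zero-length input propagates
```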

Custom Package Repositories in R from Revolutions

Matthew Emerick
22 May 2020
5 min read
by Steve Belcher, Sr Technical Specialist, Microsoft Data & AI

In some companies, R users can't download R packages from CRAN. That might be because they work in an environment that's isolated from the internet, or because company policy dictates that only specific R packages and/or package versions may be used. In this article, we share some ways you can set up a private R package repository you can use as a source of R packages.

The best way to maintain R packages for the corporation when access to the internet is limited and/or package zip files are not allowed to be downloaded is to implement a custom package repository. This will give the company the most flexibility to ensure that only authorized and secure packages are available to the firm's R users. You can use a custom repository with R downloaded from CRAN, with Microsoft R Open, with Microsoft R Client and Microsoft ML Server, or with self-built R binaries.

Setting Up a Package Repository

One of the strengths of the R language is the thousands of third-party packages that have been made publicly available via CRAN, the Comprehensive R Archive Network. R includes several functions that make it easy to download and install these packages. However, in many enterprise environments, access to the Internet is limited or non-existent. In such environments, it is useful to create a local package repository that users can access from within the corporate firewall.

Your local repository may contain source packages, binary packages, or both. If at least some of your users will be working on Windows systems, you should include Windows binaries in your repository. Windows binaries are R-version-specific; if you are running R 3.3.3, you need Windows binaries built under R 3.3. These versioned binaries are available from CRAN and other public repositories. If at least some of your users will be working on Linux systems, you must include source packages in your repository. The main CRAN repository only includes Windows binaries for the current and prior release of R, but you can find packages for older versions of R at the daily CRAN snapshots archived by Microsoft at MRAN. This is also a convenient source of older versions of binary packages for current R releases.

There are two ways to create the package repository: either mirror an existing repository, or create a new repository and populate it with just those packages you want to be available to your users. However, the entire set of packages available on CRAN is large, and if disk space is a concern you may want to restrict yourself to only a subset of the available packages. Maintaining a local mirror of an existing repository is typically easier and less error-prone, but managing your own repository gives you complete control over what is made available to your users.

Creating a Repository Mirror

Maintaining a repository mirror is easiest if you can use the rsync tool; this is available on all Linux systems and is available for Windows users as part of the Rtools collection. We will use rsync to copy packages from the original repository to your private repository.

Creating a Custom Repository

As mentioned above, a custom repository gives you complete control over which packages are available to your users. Here, too, you have two basic choices in terms of populating your repository: you can either rsync specific directories from an existing repository, or you can combine your own locally developed packages with packages from other sources.
The latter option gives you the greatest control, but in the past this has typically meant you needed to manage the contents using home-grown tools.

Custom Repository Considerations

The creation of a custom repository will give you ultimate flexibility to provide access to needed R packages while maintaining R installation security for the corporation. You could identify domain-specific packages and rsync them from the Microsoft repository to your in-house custom repository. As part of this process, it makes sense to perform security and compliance scans on downloaded packages before adding them to your internal repository.

To aid in the creation of a custom repository, a consultant at Microsoft created the miniCRAN package, which allows you to construct a repository from a subset of packages on CRAN (as well as other CRAN-like repositories). The miniCRAN package includes a function that allows you to add your own custom packages to your new custom repository, which promotes sharing of code with your colleagues. A short sketch of this workflow follows below.

Like many other capabilities in the R ecosystem, there are other packages and products available to create and work with repositories. Open-source packages for working with R repositories include packrat, renv and drat. If you are looking for a supported, commercially available product to manage access to packages within your organization, RStudio offers the RStudio Package Manager.
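Here is a minimal sketch of the miniCRAN workflow described above, assuming a small, arbitrary set of packages and a placeholder local path; adapt the package list, repository URL and path to your own environment.

```r
# Build a small custom repository with miniCRAN (package names and paths are placeholders)
library(miniCRAN)

pkgs     <- c("data.table", "jsonlite")             # packages you want to authorize
repo_url <- c(CRAN = "https://cran.r-project.org")  # or an MRAN snapshot URL

pkg_list <- pkgDep(pkgs, repos = repo_url, suggests = FALSE)  # resolve dependencies

local_repo <- "/srv/local-cran"                     # where the repository will live
dir.create(local_repo, recursive = TRUE, showWarnings = FALSE)

# Download the packages and write the repository index (PACKAGES files)
makeRepo(pkg_list, path = local_repo, repos = repo_url, type = c("source", "win.binary"))

# Users inside the firewall then install from the internal repository
install.packages("data.table", repos = paste0("file://", local_repo))
```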

Create and deploy a Custom Vision predictive service in R with AzureVision from Revolutions

Matthew Emerick
13 May 2020
9 min read
The AzureVision package is an R frontend to Azure Computer Vision and Azure Custom Vision. These services let you leverage Microsoft’s Azure cloud to carry out visual recognition tasks using advanced image processing models, with minimal machine learning expertise. The basic idea behind Custom Vision is to take a pre-built image recognition model supplied by Azure, and customise it for your needs by supplying a set of images with which to update it. All model training and prediction is done in the cloud, so you don’t need a powerful machine of your own. Similarly, since you are starting with a model that has already been trained, you don’t need a very large dataset or long training times to obtain good predictions (ideally). This article walks you through how to create, train and deploy a Custom Vision model in R, using AzureVision. Creating the resources You can create the Custom Vision resources via the Azure portal, or in R using the facilities provided by AzureVision. Note that Custom Vision requires at least two resources to be created: one for training, and one for prediction. The available service tiers for Custom Vision are F0 (free, limited to 2 projects for training and 10k transactions/month for prediction) and S0. Here is the R code for creating the resources: library(AzureVision) # insert your tenant, subscription, resgroup name and location here rg <- AzureRMR::get_azure_login(tenant)$ get_subscription(sub_id)$ create_resource_group(rg_name, location=rg_location) # insert your desired Custom Vision resource names here res <- rg$create_cognitive_service(custvis_resname, service_type="CustomVision.Training", service_tier="S0") pred_res <- rg$create_cognitive_service(custvis_predresname, service_type="CustomVision.Prediction", service_tier="S0") Training Custom Vision defines two different types of endpoint: a training endpoint, and a prediction endpoint. Somewhat confusingly, they can both use the same hostname, but with different URL paths and authentication keys. To start, call the customvision_training_endpoint function with the service URL and key. url <- res$properties$endpoint key <- res$list_keys()[1] endp <- customvision_training_endpoint(url=url, key=key) Custom Vision is organised hierarchically. At the top level, we have a project, which represents the data and model for a specific task. Within a project, we have one or more iterations of the model, built on different sets of training images. Each iteration in a project is independent: you can create (train) an iteration, deploy it, and delete it without affecting other iterations. In turn, there are three different types of projects: A multiclass classification project is for classifying images into a set of tags, or target labels. An image can be assigned to one tag only. A multilabel classification project is similar, but each image can have multiple tags assigned to it. An object detection project is for detecting which objects, if any, from a set of candidates are present in an image. Let’s create a classification project: testproj <- create_classification_project(endp, "testproj", export_target="standard") Here, we specify the export target to be standard to support exporting the final model to one of various standalone formats, eg TensorFlow, CoreML or ONNX. The default is none, in which case the model stays on the Custom Vision server. The advantage of none is that the model can be more complex, resulting in potentially better accuracy. 
Adding and tagging images Since a Custom Vision model is trained in Azure and not locally, we need to upload some images. The data we’ll use comes from the Microsoft Computer Vision Best Practices project. This is a simple set of images containing 4 kinds of objects one might find in a fridge: cans, cartons, milk bottles, and water bottles. download.file( "https://cvbp.blob.core.windows.net/public/datasets/image_classification/fridgeObjects.zip", "fridgeObjects.zip" ) unzip("fridgeObjects.zip") The generic function to add images to a project is add_images, which takes a vector of filenames, Internet URLs or raw vectors as the images to upload. It returns a vector of image IDs, which are how Custom Vision keeps track of the images it uses. Let’s upload the fridge objects to the project. The method for classification projects has a tags argument which can be used to assign labels to the images as they are uploaded. We’ll keep aside 5 images from each class of object to use as validation data. cans <- dir("fridgeObjects/can", full.names=TRUE) cartons <- dir("fridgeObjects/carton", full.names=TRUE) milk <- dir("fridgeObjects/milk_bottle", full.names=TRUE) water <- dir("fridgeObjects/water_bottle", full.names=TRUE) # upload all but 5 images from cans and cartons, and tag them can_ids <- add_images(testproj, cans[-(1:5)], tags="can") carton_ids <- add_images(testproj, cartons[-(1:5)], tags="carton") If you don’t tag the images at upload time, you can do so later with add_image_tags: # upload all but 5 images from milk and water bottles milk_ids <- add_images(testproj, milk[-(1:5)]) water_ids <- add_images(testproj, water[-(1:5)]) add_image_tags(testproj, milk_ids, tags="milk_bottle") add_image_tags(testproj, water_ids, tags="water_bottle") Other image functions to be aware of include list_images, remove_images, and add_image_regions (which is for object detection projects). A useful one is browse_images, which takes a vector of IDs and displays the corresponding images in your browser. browse_images(testproj, water_ids[1:5]) Training the model Having uploaded the data, we can train the Custom Vision model with train_model. This trains the model on the server and returns a model iteration, which is the result of running the training algorithm on the current set of images. Each time you call train_model, for example to update the model after adding or removing images, you will obtain a different model iteration. In general, you can rely on AzureVision to keep track of the iterations for you, and automatically return the relevant results for the latest iteration. mod <- train_model(testproj) We can examine the model performance on the training data with the summary method. For this toy problem, the model manages to obtain a perfect fit. summary(mod) Obtaining predictions from the trained model is done with the predict method. By default, this returns the predicted tag (class label) for the image, but you can also get the predicted class probabilities by specifying type="prob". validation_imgs <- c(cans[1:5], cartons[1:5], milk[1:5], water[1:5]) validation_tags <- rep(c("can", "carton", "milk_bottle", "water_bottle"), each=5) predicted_tags <- predict(mod, validation_imgs) table(predicted_tags, validation_tags) ## validation_tags ## predicted_tags can carton milk_bottle water_bottle ## can 4 0 0 0 ## carton 0 5 0 0 ## milk_bottle 1 0 5 0 ## water_bottle 0 0 0 5 This shows that the model got 19 out of 20 predictions correct on the validation data, misclassifying one of the cans as a milk bottle. 
Deployment Publishing to a prediction resource The code above demonstrates using the training endpoint to obtain predictions, which is really meant only for model testing and validation. In a production setting, we would normally publish a trained model to a Custom Vision prediction resource. Among other things, a user with access to the training endpoint has complete freedom to modify the model and the data, whereas access to the prediction endpoint only allows getting predictions. Publishing a model requires knowing the Azure resource ID of the prediction resource. Here, we’ll use the resource object that we created earlier; you can also obtain this information from the Azure Portal. # publish to the prediction resource we created above publish_model(mod, "iteration1", pred_res) Once a model has been published, we can obtain predictions from the prediction endpoint in a manner very similar to previously. We create a predictive service object with classification_service, and then call the predict method. Note that a required input is the project ID; you can supply this directly or via the project object. It may also take some time before a published model shows up on the prediction endpoint. Sys.sleep(60) # wait for Azure to finish publishing pred_url <- pred_res$properties$endpoint pred_key <- pred_res$list_keys()[1] pred_endp <- customvision_prediction_endpoint(url=pred_url, key=pred_key) project_id <- testproj$project$id pred_svc <- classification_service(pred_endp, project_id, "iteration1") # predictions from prediction endpoint -- same as before predsvc_tags <- predict(pred_svc, validation_imgs) table(predsvc_tags, validation_tags) ## validation_tags ## predsvc_tags can carton milk_bottle water_bottle ## can 4 0 0 0 ## carton 0 5 0 0 ## milk_bottle 1 0 5 0 ## water_bottle 0 0 0 5 Exporting as standalone As an alternative to deploying the model to an online predictive service resource, for example if you want to create a custom deployment solution, you can also export the model as a standalone object. This is only possible if the project was created to support exporting. The formats supported include: ONNX 1.2 CoreML TensorFlow or TensorFlow Lite A Docker image for either the Linux, Windows or Raspberry Pi environment Vision AI Development Kit (VAIDK) To export the model, simply call export_model and specify the target format. This will download the model to your local machine. export_model(mod, "tensorflow") More information AzureVision is part of the AzureR family of packages. This provides a range of tools to facilitate access to Azure services for data scientists working in R, such as AAD authentication, blob and file storage, Resource Manager, container services, Data Explorer (Kusto), and more. If you are interested in Custom Vision, you may also want to check out CustomVision.ai, which is an interactive frontend for building Custom Vision models.

Kali Linux 2020.2 Release from Kali Linux

Matthew Emerick
12 May 2020
9 min read
Despite the turmoil in the world, we are thrilled to be bringing you an awesome update with Kali Linux 2020.2! And it is available for immediate download. A quick overview of what's new since January:

- KDE Plasma Makeover & Login
- PowerShell by Default. Kind of.
- Kali on ARM Improvements
- Lessons From The Installer Changes
- New Key Packages & Icons
- Behind the Scenes, Infrastructure Improvements

KDE Plasma Makeover & Login

With XFCE and GNOME having had a Kali Linux look and feel update, it's time to go back to our roots (days of backtrack-linux) and give some love and attention to KDE Plasma. Introducing our dark and light themes for KDE Plasma. On the subject of theming, we have also tweaked the login screen (lightdm). It looks different, both graphically and in layout (the login boxes are aligned now)!

PowerShell by Default. Kind of.

A while ago, we put PowerShell into Kali Linux's network repository. This meant that if you wanted PowerShell, you had to install the package as a one-off by doing:

```
kali@kali:~$ sudo apt install -y powershell
```

We have now put PowerShell into one of our (primary) metapackages, kali-linux-large. This means that if you choose to install this metapackage during system setup, or once Kali is up and running (sudo apt install -y kali-linux-large), and PowerShell is compatible with your architecture, you can just jump straight into it (pwsh)! PowerShell isn't in the default metapackage (that's kali-linux-default), but it is in the one that includes the default and many extras, and can be included during system setup.

Kali on ARM Improvements

With Kali Linux 2020.1, desktop images no longer used "root/toor" as the default login credentials, but had moved to "kali/kali". Our ARM images are now the same. We are no longer using the superuser account to log in with. We also warned back in 2019.4 that we would be moving away from an 8GB minimum SD card, and we are finally ready to pull the trigger on this. The requirement is now 16GB or larger. One last note on the subject of ARM devices: we are not installing locales-all any more, so we highly recommend that you set your locale. This can be done by running the following command, sudo dpkg-reconfigure locales, then logging out and back in.

Lessons From Installer Changes

With Kali Linux 2020.1 we announced our new style of images, "installer" & "live".

Issue

It was intended that both "installer" & "live" could be customized during setup, to select which metapackage and desktop environment to use. When we did that, we couldn't include metapackages beyond default in those images, as it would create too large of an ISO. As the packages were not in the image, if you selected anything other than the default options it would require network access to obtain the missing packages beyond default. After release, we noticed some users selecting "everything" and then waiting hours for installs to happen. They couldn't understand why the installs were taking so long. We also used different software on the back end to generate these images, and a few bugs slipped through the cracks (which explains the 2020.1a and 2020.1b releases).
Solutions

- We have removed kali-linux-everything as an install-time option (which is every package in the Kali Linux repository) in the installer image, as you can imagine that would have taken a long time to download and wait for during install
- We have cached kali-linux-large & every desktop environment into the install image (which is why it's a little larger to download than previously) – allowing for a COMPLETE offline network install
- We have removed customization for "live" images – the installer switched back to copying the content of the live filesystem, allowing again a full offline install but forcing usage of our default XFCE desktop

Summary

- If you are wanting to run Kali from a live image (DVD or USB stick), please use "live"
- If you are wanting anything else, please use "installer"
- If you are wanting anything other than XFCE as your desktop environment, please use "installer"
- If you are not sure, get "installer"

Also, please keep in mind that on an actual assessment "more" is not always "better". There are very few reasons to install kali-linux-everything, and many reasons not to. To those of you who were selecting this option, we highly suggest you take some time and educate yourself on Kali before using it. Kali, or any other pentest distribution, is not a "turn key auto hack" solution. You still need to learn your platform, learn your tools, and educate yourself in general. Consider what you are really telling Kali to do when you are installing kali-linux-everything. It's similar to going into your phone's app store and saying "install everything!". That's not likely to have good results. We provide a lot of powerful tools and options in Kali, and while we may have a reputation of "providing machine guns to monkeys", we actually expect you to know what you are doing. Kali is not going to hold your hand. It expects you to do the work of learning, and Kali will be unforgiving if you don't.

New Key Packages & Icons

Just like every Kali Linux release, we include the latest packages possible. Key ones to point out this release are:

- GNOME 3.36 – a few of you may have noticed a bug that slipped in during the first 12 hours of the update being available. We're sorry about this, and have measures in place for it to not happen again
- Joplin – we are planning on replacing CherryTree with this in Kali Linux 2020.3!
- Nextnet
- Python 3.8
- SpiderFoot

For the time being, as a temporary measure due to certain tools needing it, we have re-included python2-pip. Python 2 has now reached "End Of Life" and is no longer getting updated. Tool makers, please, please, please port to Python 3. Users of tools, if you notice that a tool is not Python 3 yet, you can help too! It is not going to be around forever.

Whilst talking about packages, we have also started to refresh our package logos for each tool. You'll notice them in the Kali Linux menu, as well as the tools page on GitLab (more information on this coming soon!). If your tool has a logo and we have missed it, please let us know on the bug tracker.

WSLconf

WSLconf happened earlier this year, and @steev gave a 35-minute talk on "How We Use WSL at Kali". Go check it out!

Behind the Scenes, Infrastructure Improvements

We have been celebrating the arrival of new servers, which over the last few weeks we have been migrating to. This includes a new ARM build server and what we use for package testing. This may not be directly noticeable, but you may reap the benefits of it!
If you are wanting to help out with Kali, we have added a new section to our documentation showing how to submit an autopkgtest. Feedback is welcome!

Kali Linux NetHunter

We were so excited about some of the work that has been happening with NetHunter recently that we already did a mid-term release to showcase it and get it to you as quickly as possible. On top of all the previous NetHunter news, there is even more to announce this time around!

- Nexmon support has been revived, bringing WiFi monitor support and frame injection to wlan0 on the Nexus 6P, Nexus 5, Sony Xperia Z5 Compact, and more!
- OnePlus 3T images have been added to the download page.
- We have crossed 160 different kernels in our repository, allowing NetHunter to support over 64 devices! Yes, over 160 kernels and over 64 devices supported. Amazing.
- Our documentation page has received a well-deserved refresh, especially the kernel development section.

One of the most common questions to come in about NetHunter is "What device should I run it on?". Keep your eye on this page to see what your options are, on an automatically updated basis! When you think about the amount of power NetHunter provides in such a compact package, it really is mind-blowing. It's been amazing to watch this progress, and the entire Kali team is excited to show you what is coming in the future.

Download Kali Linux 2020.2

Fresh images

So what are you waiting for? Start downloading already! Seasoned Kali users are already aware of this, but for the ones who are not, we do also produce weekly builds that you can use as well. If you can't wait for our next release and you want the latest packages when you download the image, you can just use the weekly image instead. This way you'll have fewer updates to do. Just know these are automated builds that we don't QA like we do our standard release images.

Existing Upgrades

If you already have an existing Kali installation, remember you can always do a quick update:

```
kali@kali:~$ echo "deb http://http.kali.org/kali kali-rolling main non-free contrib" | sudo tee /etc/apt/sources.list
kali@kali:~$
kali@kali:~$ sudo apt update && sudo apt -y full-upgrade
kali@kali:~$
kali@kali:~$ [ -f /var/run/reboot-required ] && sudo reboot -f
kali@kali:~$
```

You should now be on Kali Linux 2020.2. We can do a quick check by doing:

```
kali@kali:~$ grep VERSION /etc/os-release
VERSION="2020.2"
VERSION_ID="2020.2"
VERSION_CODENAME="kali-rolling"
kali@kali:~$
kali@kali:~$ uname -v
#1 SMP Debian 5.5.17-1kali1 (2020-04-21)
kali@kali:~$
kali@kali:~$ uname -r
5.5.0-kali2-amd64
kali@kali:~$
```

NOTE: The output of uname -r may be different depending on the system architecture.

As always, should you come across any bugs in Kali, please submit a report on our bug tracker. We'll never be able to fix what we don't know is broken! And Twitter is not a Bug Tracker!

AzureQstor: R interface to Azure Queue Storage now on GitHub from Revolutions

Matthew Emerick
05 May 2020
2 min read
This post is to announce that the AzureQstor package is now on GitHub. AzureQstor provides an R interface to Azure queue storage, building on the facilities provided by AzureStor. Queue Storage is a service for storing large numbers of messages, for example from automated sensors, that can be accessed remotely via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account. Queue storage is often used to create a backlog of work to process asynchronously. AzureQstor uses a combination of S3 and R6 classes. The queue endpoint is an S3 object for compatibility with AzureStor, while R6 classes are used to represent queues and messages. library(AzureQstor) endp <- storage_endpoint("https://mystorage.queue.core.windows.net", key="access_key") # creating, retrieving and deleting queues create_storage_queue(endp, "myqueue") qu <- storage_queue(endp, "myqueue") qu2 <- create_storage_queue(endp, "myqueue2") delete_storage_queue(qu2) The queue object exposes methods for getting (reading), peeking, deleting, updating, popping (reading and deleting) and putting (writing) messages: qu$put_message("Hello queue") msg <- qu$get_message() msg$text ## [1] "Hello queue" # get several messages at once qu$get_messages(n=30) The message object exposes methods for deleting and updating the message: msg$update(visibility_timeout=30, text="Updated message") msg$delete() You can also get and set metadata for a queue with the AzureStor get/set_storage_metadata generics: get_storage_metadata(qu) set_storage_metadata(qu, name1="value1", name2="value2") It’s anticipated that AzureQstor will be submitted to CRAN before long. If you are a queue storage user, please install it and give it a try; any feedback or bug report is much appreciated. You can email me or open an issue on GitHub.

R 4.0.0 now available, and a look back at R's history from Revolutions

Matthew Emerick
27 Apr 2020
4 min read
R 4.0.0 was released in source form on Friday, and binaries for Windows, Mac and Linux are available for download now. As the version number bump suggests, this is a major update to R that makes some significant changes. Some of these changes — particularly the first one listed below — are likely to affect the results of R's calculations, so I would not recommend running scripts written for prior versions of R without validating them first. In any case, you'll need to reinstall any packages you were using for R 4.0.0. (You might find this R script useful for checking what packages you have installed for R 3.x.) You can find the full list of changes and fixes in the NEWS file (it's long!), but here are the biggest changes:

- Imported string data is no longer converted to factors. The stringsAsFactors option, which since R's inception defaulted to TRUE to convert imported string data to factor objects, is now FALSE. This default was probably the biggest stumbling block for prior users of R: it made statistical modeling a little easier and used a little less memory, but at the expense of confusing behavior on data you probably thought was ordinary strings. This change broke backward compatibility for many packages (mostly now updated on CRAN), and likely affects your own scripts unless you were diligent about including explicit stringsAsFactors declarations in your import function calls.
- A new syntax for specifying raw character strings. You can use syntax like r"(any characters except right paren)" to define a literal string. This is particularly useful for HTML code, regular expressions, and other strings that include quotes or backslashes that would otherwise have to be escaped.
- An enhanced reference counting system. When you delete an object in R, it usually releases the associated memory back to the operating system. Likewise, if you copy an object with y <- x, R won't allocate new memory for y unless x is later modified. In prior versions of R, however, that system breaks down if there are more than 2 references to any block of memory. Starting with R 4.0.0, all references will be counted, and so R should reclaim as much memory as possible, reducing R's overall memory footprint. This will have no impact on how you write R code, but this change makes R run faster, especially on systems with limited memory and with slow storage systems.
- Normalization of matrix and array types. Conceptually, a matrix is just a 2-dimensional array. But prior versions of R handled matrix and 2-D array objects differently in some cases. In R 4.0.0, matrix objects will formally inherit from the array class, eliminating such inconsistencies.
- A refreshed color palette for charts. The base graphics palette for prior versions of R (shown as R3 below) features saturated colors that vary considerably in brightness (for example, yellow doesn't display as prominently as red). In R 4.0.0, the palette R4 below will be used, with colors of consistent luminance that are easier to distinguish, especially for viewers with color deficiencies. Additional palettes will make it easy to make base graphics charts that match the color scheme of ggplot2 and other graphics systems.
- Performance improvements. The grid graphics system has been revamped (which improves the rendering speed of ggplot2 graphics in particular), socket connections are faster, and various functions have been sped up. Cairo graphics devices have been updated to support more fonts and symbols, an improvement particularly relevant to Linux-based users of R.
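As a quick illustration of three of the changes listed above, here is a small sketch you can run under R 4.0.0; the object names are arbitrary.

```r
# stringsAsFactors now defaults to FALSE
df <- data.frame(name = c("alpha", "beta"))
class(df$name)              # "character" -- no longer silently converted to factor

# Raw character strings: backslashes and quotes need no escaping
r"(C:\Users\me\Documents)"  # same string as "C:\\Users\\me\\Documents"

# matrix objects now formally inherit from array
m <- matrix(1:4, nrow = 2)
class(m)                    # c("matrix", "array")
```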
R version 4 represents a major milestone in the history of R. It's been just over 20 years since R 1.0.0 was released on February 29, 2000, and the history of R extends even further back than that. If you're interested in the other major milestones, I cover R's history in this recent talk for the SatRDays DC conference. For the details on the R 4.0.0 release, including the complete list of changes, check out the announcement at the link below.
R-announce archives: R 4.0.0 is released
Read more
  • 0
  • 0
  • 1094

Matthew Emerick
21 Apr 2020
4 min read
Save for later

Major update to checkpoint package now available for beta test from Revolutions

Matthew Emerick
21 Apr 2020
4 min read
I’m Hong Ooi, data scientist with Microsoft Azure Global, and maintainer of the checkpoint package. The checkpoint package makes it easy for you to freeze R packages in time, drawing from the daily snapshots of the CRAN repository that have been archived at MRAN since 2014. Checkpoint has been around for nearly 6 years now, helping R users solve the reproducible research puzzle. In that time, it’s seen many changes, new features, and, inevitably, bug reports. Some of these bugs have been fixed, while others remain outstanding in the too-hard basket. Many of these issues spring from the fact that it uses only base R functions, in particular install.packages, to do its work. The problem is that install.packages is meant for interactive use, and as an API, is very limited. For starters, it doesn’t return a result to the caller; instead, checkpoint has to capture and parse the printed output to determine whether the installation succeeded. This causes a host of problems, since the printout will vary based on how R is configured. Similarly, install.packages refuses to install a package if it’s in use, which means checkpoint must unload it first, an imperfect and error-prone process at best. In addition, checkpoint’s age means that it has accumulated a significant amount of technical debt over the years. For example, there is still code to handle ancient versions of R that couldn’t use HTTPS, even though the MRAN site (in line with security best practice) now accepts HTTPS connections only.

I’m happy to announce that checkpoint 1.0 is now in beta. This is a major refactoring/rewrite, aimed at solving these problems. The biggest change is to switch to pkgdepends for the backend, replacing the custom-written code built on install.packages. This brings the following benefits:

- Caching of downloaded packages. Subsequent checkpoints using the same MRAN snapshot will check the package cache first, saving possible redownloads.
- The ability to install packages which are in use, without having to unload them first.
- Comprehensive reporting of all aspects of the install process: dependency resolution, creating an install plan, downloading packages, and actual installation.
- Reliable detection of installation outcomes (no more having to screen-scrape the R window).

In addition, checkpoint 1.0 features experimental support for a checkpoint.yml manifest file, to specify packages to include in or exclude from the checkpoint. You can include packages from sources other than MRAN, such as Bioconductor or GitHub, or from the local machine; similarly, you can exclude packages which are not publicly distributed (although you’ll still have to ensure that such packages are visible to your checkpointed session). The overall interface is still much the same. To create a checkpoint, or use an existing one, call the checkpoint() function:

library(checkpoint)
checkpoint("2020-01-01")

This calls out to two other functions, create_checkpoint and use_checkpoint, reflecting the two main objectives of the package. You can also call these functions directly. To revert your session to the way it was before, call uncheckpoint(). One difference to be aware of is that function names and arguments now consistently use snake_case, reflecting the general style seen in the tidyverse and related frameworks. The names of ancillary functions have also been changed to better reflect their purpose, and the package size has been significantly reduced. See the help files for more information.
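To make the new interface concrete, here is a hedged sketch of the workflow. It uses only the functions named above (checkpoint, create_checkpoint, use_checkpoint, uncheckpoint), and the comments reflect my reading of the post rather than the package’s official documentation:

```r
# Hedged sketch of the checkpoint 1.0 workflow described above.
library(checkpoint)

# One-step interface: create the checkpoint if needed, then switch to it.
checkpoint("2020-01-01")

# Or call the two underlying steps directly:
create_checkpoint("2020-01-01")   # resolve, download and install the snapshot's packages
use_checkpoint("2020-01-01")      # point the session's library path at the checkpoint

# ... run your analysis against the frozen package versions ...

# Revert the session to the way it was before.
uncheckpoint()
```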
There are two main downsides to the change, both due to known issues in the current pkgdepends/pkgcache chain:

- For Windows and MacOS, creating a checkpoint fails if there are no binary packages available at the specified MRAN snapshot. This generally happens if you specify a snapshot that either predates or is too far in advance of your R version. As a workaround, you can use the r_version argument to create_checkpoint to install binaries intended for a different R version (see the short sketch at the end of this post).
- There is no support for a local MRAN mirror (accessed via a file:// URL). You must either use the standard MRAN site, or have an actual webserver hosting a mirror of MRAN.

It’s anticipated that these will both be fixed before pkgdepends is released to CRAN. You can get the checkpoint 1.0 beta from GitHub:

remotes::install_github("RevolutionAnalytics/checkpoint")

Any comments or feedback will be much appreciated. You can email me directly, or open an issue at the repo.
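For completeness, the r_version workaround mentioned in the first downside above might look like the following hedged sketch; the snapshot date and version string are purely illustrative:

```r
# Ask create_checkpoint for binaries built for a different R version
# when the MRAN snapshot has none for the running version.
create_checkpoint("2020-01-01", r_version = "3.6.2")
```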
Read more
  • 0
  • 0
  • 860

Matthew Emerick
18 Feb 2020
1 min read
Save for later

Entity Framework Core Migrations from C# Corner

Matthew Emerick
18 Feb 2020
1 min read
Eric Vogel uses code samples and screenshots to demonstrate how to do Entity Framework Core migrations in a .NET Core application through the command line and in code.
Read more
  • 0
  • 0
  • 947
Vincy Davis
12 Dec 2019
3 min read
Save for later

Google introduces E2, flexible, performance-driven and cost-effective VMs for Google Compute Engine

Vincy Davis
12 Dec 2019
3 min read
Yesterday, June Yang, the director of product management at Google, announced the beta release of E2 VMs for Google Compute Engine. E2 features dynamic resource management that delivers reliable performance with flexible configurations and a better total cost of ownership (TCO) than any other VM in Google Cloud. According to Yang, “E2 VMs are a great fit for a broad range of workloads including web servers, business-critical applications, small-to-medium sized databases, and development environments.” He further adds, “For all but the most demanding workloads, we expect E2 to deliver similar performance to N1, at a significantly lower cost.”

What are the key features offered by E2 VMs

E2 VMs are built to offer 31% savings compared to N1, giving them the lowest total cost of ownership of any VM in Google Cloud, with sustained performance at a consistently low price point. Unlike comparable options from other cloud providers, E2 VMs can support a high CPU load without complex pricing. E2 VMs can be tailored with up to 16 vCPUs and 128 GB of memory using custom machine types, so they allocate only the resources the user needs. Custom machine types are ideal for workloads that require more processing power or more memory than a given predefined machine type offers, but don’t need all of the upgrades provided by the next machine type level.

How E2 VMs achieve optimal efficiency

Large, efficient physical servers: E2 VMs automatically take advantage of continual improvements in machines by flexibly scheduling across the zone’s available CPU platforms. With new hardware upgrades, E2 VMs are live migrated to newer and faster hardware, which allows them to automatically take advantage of these new resources.

Intelligent VM placement: Borg, Google’s cluster management system, predicts how a newly added VM will perform on a physical server by observing the CPU, RAM, memory bandwidth, and other resource demands of the VMs already running there. Borg then searches across thousands of servers to find the best location to add the VM. These observations ensure that a newly placed VM will be compatible with its neighbors and will not experience interference from them.

Performance-aware live migration: After VMs are placed on a host, their performance is continuously monitored, so that if there is an increase in demand for VMs, live migration can be used to transparently shift E2 load to other hosts in the data center.

A new hypervisor CPU scheduler: To meet E2’s performance goals, Google has built a custom CPU scheduler with better latency and co-scheduling behavior than Linux’s default scheduler. The new scheduler yields sub-microsecond average wake-up latencies with fast context switching, which helps keep the overhead of dynamic resource management negligible for nearly all workloads.

https://twitter.com/uhoelzle/status/1204972503921131521

Read the official announcement to learn about the custom VM shapes and predefined configurations offered by E2 VMs. You can also read part 2 of the announcement to learn more about dynamic resource management in E2 VMs.

Why use JVM (Java Virtual Machine) for deep learning
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
EU antitrust regulators are investigating Google’s data collection practices, reports Reuters
Google will not support Cloud Print, its cloud-based printing solution starting 2021
Google Chrome ‘secret’ experiment crashes browsers of thousands of IT admins worldwide
Read more
  • 0
  • 0
  • 3892

Fatema Patrawala
12 Dec 2019
3 min read
Save for later

OpenJS Foundation accepts Electron.js in its incubation program

Fatema Patrawala
12 Dec 2019
3 min read
Yesterday, at Node+JS Interactive in Montreal, the OpenJS Foundation announced the acceptance of Electron into the Foundation’s incubation program. The OpenJS Foundation provides vendor-neutral support for sustained growth within the open source JavaScript community. It is supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Intel, Joyent, and Microsoft. Electron is an open source framework for building desktop apps using JavaScript, HTML, and CSS; it is based on Node.js and Chromium. Electron is widely used in many well-known applications, including Discord, Microsoft Teams, OpenFin, Skype, Slack, Trello, and Visual Studio Code, among others.

“We’re heading into 2020 excited and honored by the trust the Electron project leaders have shown through this significant contribution to the new OpenJS Foundation,” said Robin Ginn, Executive Director of the OpenJS Foundation. He further added, “Electron is a powerful development tool used by some of the most well-known companies and applications. On behalf of the community, I look forward to working with Electron and seeing the amazing contributions they will make.”

Electron’s cross-platform capabilities make it possible to build and run apps on Windows, Mac, and Linux computers. Initially developed by GitHub in 2013, the framework is today maintained by a number of developers and organizations. Electron is suited for anyone who wants to ship visually consistent, cross-platform applications fast and efficiently.

“We’re excited about Electron’s move to the OpenJS Foundation and we see this as the next step in our evolution as an open source project,” said Jacob Groundwater, Manager at ElectronJS and Principal Engineering Manager at Microsoft. “With the Foundation, we’ll continue on our mission to play a prominent role in the adoption of web technologies by desktop applications and provide a path for JavaScript to be a sustainable platform for desktop applications. This will enable the adoption and development of JavaScript in an environment that has traditionally been served by proprietary or platform-specific technologies.”

What this means for developers

Electron joining the OpenJS Foundation does not change how Electron is made, released, or used, and does not directly affect developers building applications with Electron. Even though Electron was originally created at GitHub, it is currently maintained by a number of organizations and individuals. In 2019, Electron codified its governance structure and invested heavily in formalizing how decisions affecting the entire project are made. The Electron team believes that having multiple organizations and developers investing in and collaborating on Electron makes the project stronger. Hence, lifting Electron up from being owned by a single corporate entity and moving it into a neutral foundation focused on supporting the web and JavaScript ecosystem is a natural next step as the project matures in the open-source ecosystem.

To know more about this news, check out the official announcement on the OpenJS Foundation website.

The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
Node.js and JS Foundations are now merged into the OpenJS Foundation
Denys Vuika on building secure and performant Electron apps, and more
Read more
  • 0
  • 0
  • 6906