Tech News - Programming

573 Articles

MLOPS with R and GitHub Actions from Revolutions

Matthew Emerick
25 Aug 2020
2 min read
With thanks to the kind folks at Lander Analytics, video from my New York R Conference talk earlier this month is now available to view. The slides are also available for download in PDF format. In my talk, I described how I automated the deployment of a Shiny app using GitHub Actions. If you're new to GitHub Actions, it's pretty simple to set up a continuous deployment process:

Define jobs as YAML files in the .github/workflows folder of your GitHub repository
Search the GitHub Actions Marketplace for templates of tasks you'd like to perform
Push changes to your workflow to trigger Actions according to the rules you specify

In my case, I used Actions to create an on-demand cluster of VMs in Azure Machine Learning service, to train R models on the cluster with the azuremlsdk package, to deploy the trained model as an HTTP endpoint in Azure Container Instances, and to update the Shiny app which calls out to that endpoint. In the talk, I demonstrate the process in action (the demo starts at the 14:30 mark in the video below). I used Visual Studio Code to edit the app.R file in the repository, and then pushed the changes to GitHub. That immediately triggered the action to deploy the updated file via SSH to the Shiny Server running in a remote VM. Similarly, changes to the data file or to the R script files implementing the logistic regression model would trigger the model to be retrained in the cluster and re-deploy the endpoint to deliver new predictions from the updated model.

I've provided the complete GitHub repository implementing the app, the models, and the Actions at github.com/revodavid/mlops-r-gha. If you want to try it out yourself, all you need to do is clone the repo, follow the instructions to add secrets to your repository and set up the Shiny VM, and then trigger the Actions to build everything. The repository also includes links to references and other resources, including how to create a free Azure subscription with credits you can use to test everything out. If you have any questions or suggestions, please feel free to add an issue to the repository!

GitHub (revodavid): MLOPS with R: An end-to-end process for building machine learning applications
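The post doesn't include the app code itself, but as a rough sketch of the pattern described above (a Shiny app that calls out to a model endpoint over HTTP), something like the following would work. The endpoint URL, input fields and response format here are hypothetical placeholders, not taken from the mlops-r-gha repository.

library(shiny)
library(httr)
library(jsonlite)

# Hypothetical scoring endpoint (in the article this would be the ACI endpoint)
endpoint_url <- "http://example-aci-endpoint.azurecontainer.io/score"

ui <- fluidPage(
  numericInput("x1", "Feature 1", value = 0),
  numericInput("x2", "Feature 2", value = 0),
  actionButton("go", "Predict"),
  verbatimTextOutput("pred")
)

server <- function(input, output) {
  output$pred <- renderPrint({
    req(input$go)  # wait until the button is clicked
    # Send the inputs to the model endpoint and display the returned prediction
    res <- POST(endpoint_url,
                body = toJSON(list(x1 = input$x1, x2 = input$x2), auto_unbox = TRUE),
                content_type_json())
    content(res, as = "parsed")
  })
}

shinyApp(ui, server)

In the workflow described above, editing and pushing an app.R like this is what triggers the Action that redeploys it to the Shiny Server.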


Python 3.5.10rc1 is now available from Python Insider

Anonymous
22 Aug 2020
1 min read
Python 3.5.10rc1 is now available. You can download it here.


The History of R (updated for 2020) from Revolutions

Matthew Emerick
27 Jul 2020
1 min read
As an update to this post, here's a list of the major events in R history since its creation:

1992: R development begins as a research project in Auckland, NZ by Robert Gentleman and Ross Ihaka
1993: First binary versions of R published at Statlib
1995: R first distributed as open-source software, under GPL2 license
1997: R core group formed
1997: CRAN founded (by Kurt Hornik and Fritz Leisch)
1999: The R website, r-project.org, founded
1999: First in-person meeting of R Core team, at inaugural Directions in Statistical Computing conference, Vienna
2000: R 1.0.0 released (February 29)
2000: John Chambers, recipient of the 1998 ACM Software Systems Award for the S language, joins R Core
2001: R News founded (later to become the R Journal)
2003: R Foundation founded
2004: First UseR! conference (in Vienna)
2004: R 2.0.0 released
2009: First edition of the R Journal
2013: R 3.0.0 released
2015: R Consortium founded, with R Foundation participation
2016: New R logo adopted
2017: CRAN exceeds 10,000 published packages
2020: R 4.0.0 released

The presentation below (slides available here) also covers the history of R through 2020.


R 4.0.2 now available from Revolutions

Matthew Emerick
25 Jun 2020
1 min read
R 4.0.2 is now available for download for the Windows, Mac and Linux platforms. This update addresses a few minor bugs in the R 4.0.0 release, as well as a significant bug introduced in R 4.0.1 on the Windows platform. Compared to R 4.0.0, the R 4.0.2 update also improves the performance of the merge function, and adds an option to better handle zero-length arguments to the paste and paste0 functions. For the details of the changes in R 4.0.2, follow the link below, and visit your local CRAN mirror to download the update.

R-announce mailing list: R 4.0.2 is released
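The post doesn't spell out which option was added; assuming it refers to the recycle0 argument that paste() and paste0() gained in the R 4.0.x line, the difference looks roughly like this (a sketch, not output taken from the release notes):

# Default behavior: a zero-length argument is recycled to "", which can
# silently produce a non-empty result
paste("id-", character(0))
## [1] "id- "

# With recycle0 = TRUE, any zero-length argument makes the result zero-length
paste("id-", character(0), recycle0 = TRUE)
## character(0)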


Custom Package Repositories in R from Revolutions

Matthew Emerick
22 May 2020
5 min read
by Steve Belcher, Sr Technical Specialist, Microsoft Data & AI

In some companies, R users can’t download R packages from CRAN. That might be because they work in an environment that’s isolated from the internet, or because company policy dictates that only specific R packages and/or package versions may be used. In this article, we share some ways you can set up a private R package repository that you can use as a source of R packages.

The best way to maintain R packages for the corporation when internet access is limited and/or package zip files may not be downloaded is to implement a custom package repository. This gives the company the most flexibility to ensure that only authorized and secure packages are available to the firm’s R users. You can use a custom repository with R downloaded from CRAN, with Microsoft R Open, with Microsoft R Client and Microsoft ML Server, or with self-built R binaries.

Setting Up a Package Repository

One of the strengths of the R language is the thousands of third-party packages that have been made publicly available via CRAN, the Comprehensive R Archive Network. R includes several functions that make it easy to download and install these packages. However, in many enterprise environments, access to the Internet is limited or non-existent. In such environments, it is useful to create a local package repository that users can access from within the corporate firewall.

Your local repository may contain source packages, binary packages, or both. If at least some of your users will be working on Windows systems, you should include Windows binaries in your repository. Windows binaries are R-version-specific; if you are running R 3.3.3, you need Windows binaries built under R 3.3. These versioned binaries are available from CRAN and other public repositories. If at least some of your users will be working on Linux systems, you must include source packages in your repository. The main CRAN repository only includes Windows binaries for the current and prior release of R, but you can find packages for older versions of R at the daily CRAN snapshots archived by Microsoft at MRAN. This is also a convenient source of older versions of binary packages for current R releases.

There are two ways to create the package repository: either mirror an existing repository, or create a new repository and populate it with just those packages you want to be available to your users. However, the entire set of packages available on CRAN is large, and if disk space is a concern you may want to restrict yourself to only a subset of the available packages. Maintaining a local mirror of an existing repository is typically easier and less error-prone, but managing your own repository gives you complete control over what is made available to your users.

Creating a Repository Mirror

Maintaining a repository mirror is easiest if you can use the rsync tool; this is available on all Linux systems and is available for Windows users as part of the Rtools collection. We will use rsync to copy packages from the original repository to your private repository.

Creating a Custom Repository

As mentioned above, a custom repository gives you complete control over which packages are available to your users. Here, too, you have two basic choices in terms of populating your repository: you can either rsync specific directories from an existing repository, or you can combine your own locally developed packages with packages from other sources. The latter option gives you the greatest control, but in the past this has typically meant you needed to manage the contents using home-grown tools.

Custom Repository Considerations

The creation of a custom repository gives you ultimate flexibility to provide access to needed R packages while maintaining R installation security for the corporation. You could identify domain-specific packages and rsync them from the Microsoft repository to your in-house custom repository. As part of this process, it makes sense to perform security and compliance scans on downloaded packages before adding them to your internal repository.

To aid in the creation of a custom repository, a consultant at Microsoft created the miniCRAN package, which allows you to construct a repository from a subset of packages on CRAN (as well as other CRAN-like repositories). The miniCRAN package includes a function that allows you to add your own custom packages to your new custom repository, which promotes sharing of code with your colleagues.

Like many other capabilities in the R ecosystem, there are other packages and products available to create and work with repositories. Open source packages for working with R repositories include packrat, renv and drat. If you are looking for a supported, commercially available product to manage access to packages within your organization, RStudio offers the RStudio Package Manager.
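The article doesn't include code, but a minimal sketch of the miniCRAN workflow it describes might look like this (the package names, paths and repository URL are placeholders):

library(miniCRAN)

# Resolve the full dependency tree for the packages you want to authorize
pkgs <- pkgDep(c("data.table", "ggplot2"), repos = "https://cran.r-project.org")

# Build a CRAN-like repository (source plus Windows binaries) in a local folder
dir.create("/srv/local-cran", recursive = TRUE, showWarnings = FALSE)
makeRepo(pkgs, path = "/srv/local-cran",
         repos = "https://cran.r-project.org",
         type = c("source", "win.binary"))

# Users inside the firewall then point R at the internal repository instead of CRAN
options(repos = c(LOCAL = "file:///srv/local-cran"))
install.packages("data.table")

Serving the same folder from an internal webserver works the same way; only the repos URL changes.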


Create and deploy a Custom Vision predictive service in R with AzureVision from Revolutions

Matthew Emerick
13 May 2020
9 min read
The AzureVision package is an R frontend to Azure Computer Vision and Azure Custom Vision. These services let you leverage Microsoft’s Azure cloud to carry out visual recognition tasks using advanced image processing models, with minimal machine learning expertise.

The basic idea behind Custom Vision is to take a pre-built image recognition model supplied by Azure, and customise it for your needs by supplying a set of images with which to update it. All model training and prediction is done in the cloud, so you don’t need a powerful machine of your own. Similarly, since you are starting with a model that has already been trained, you don’t need a very large dataset or long training times to obtain good predictions (ideally). This article walks you through how to create, train and deploy a Custom Vision model in R, using AzureVision.

Creating the resources

You can create the Custom Vision resources via the Azure portal, or in R using the facilities provided by AzureVision. Note that Custom Vision requires at least two resources to be created: one for training, and one for prediction. The available service tiers for Custom Vision are F0 (free, limited to 2 projects for training and 10k transactions/month for prediction) and S0. Here is the R code for creating the resources:

library(AzureVision)

# insert your tenant, subscription, resgroup name and location here
rg <- AzureRMR::get_azure_login(tenant)$
    get_subscription(sub_id)$
    create_resource_group(rg_name, location=rg_location)

# insert your desired Custom Vision resource names here
res <- rg$create_cognitive_service(custvis_resname,
    service_type="CustomVision.Training", service_tier="S0")

pred_res <- rg$create_cognitive_service(custvis_predresname,
    service_type="CustomVision.Prediction", service_tier="S0")

Training

Custom Vision defines two different types of endpoint: a training endpoint, and a prediction endpoint. Somewhat confusingly, they can both use the same hostname, but with different URL paths and authentication keys. To start, call the customvision_training_endpoint function with the service URL and key.

url <- res$properties$endpoint
key <- res$list_keys()[1]
endp <- customvision_training_endpoint(url=url, key=key)

Custom Vision is organised hierarchically. At the top level, we have a project, which represents the data and model for a specific task. Within a project, we have one or more iterations of the model, built on different sets of training images. Each iteration in a project is independent: you can create (train) an iteration, deploy it, and delete it without affecting other iterations.

In turn, there are three different types of projects:

A multiclass classification project is for classifying images into a set of tags, or target labels. An image can be assigned to one tag only.
A multilabel classification project is similar, but each image can have multiple tags assigned to it.
An object detection project is for detecting which objects, if any, from a set of candidates are present in an image.

Let’s create a classification project:

testproj <- create_classification_project(endp, "testproj", export_target="standard")

Here, we specify the export target to be standard to support exporting the final model to one of various standalone formats, eg TensorFlow, CoreML or ONNX. The default is none, in which case the model stays on the Custom Vision server. The advantage of none is that the model can be more complex, resulting in potentially better accuracy.

Adding and tagging images

Since a Custom Vision model is trained in Azure and not locally, we need to upload some images. The data we’ll use comes from the Microsoft Computer Vision Best Practices project. This is a simple set of images containing 4 kinds of objects one might find in a fridge: cans, cartons, milk bottles, and water bottles.

download.file(
    "https://cvbp.blob.core.windows.net/public/datasets/image_classification/fridgeObjects.zip",
    "fridgeObjects.zip"
)
unzip("fridgeObjects.zip")

The generic function to add images to a project is add_images, which takes a vector of filenames, Internet URLs or raw vectors as the images to upload. It returns a vector of image IDs, which are how Custom Vision keeps track of the images it uses.

Let’s upload the fridge objects to the project. The method for classification projects has a tags argument which can be used to assign labels to the images as they are uploaded. We’ll keep aside 5 images from each class of object to use as validation data.

cans <- dir("fridgeObjects/can", full.names=TRUE)
cartons <- dir("fridgeObjects/carton", full.names=TRUE)
milk <- dir("fridgeObjects/milk_bottle", full.names=TRUE)
water <- dir("fridgeObjects/water_bottle", full.names=TRUE)

# upload all but 5 images from cans and cartons, and tag them
can_ids <- add_images(testproj, cans[-(1:5)], tags="can")
carton_ids <- add_images(testproj, cartons[-(1:5)], tags="carton")

If you don’t tag the images at upload time, you can do so later with add_image_tags:

# upload all but 5 images from milk and water bottles
milk_ids <- add_images(testproj, milk[-(1:5)])
water_ids <- add_images(testproj, water[-(1:5)])

add_image_tags(testproj, milk_ids, tags="milk_bottle")
add_image_tags(testproj, water_ids, tags="water_bottle")

Other image functions to be aware of include list_images, remove_images, and add_image_regions (which is for object detection projects). A useful one is browse_images, which takes a vector of IDs and displays the corresponding images in your browser.

browse_images(testproj, water_ids[1:5])

Training the model

Having uploaded the data, we can train the Custom Vision model with train_model. This trains the model on the server and returns a model iteration, which is the result of running the training algorithm on the current set of images. Each time you call train_model, for example to update the model after adding or removing images, you will obtain a different model iteration. In general, you can rely on AzureVision to keep track of the iterations for you, and automatically return the relevant results for the latest iteration.

mod <- train_model(testproj)

We can examine the model performance on the training data with the summary method. For this toy problem, the model manages to obtain a perfect fit.

summary(mod)

Obtaining predictions from the trained model is done with the predict method. By default, this returns the predicted tag (class label) for the image, but you can also get the predicted class probabilities by specifying type="prob".

validation_imgs <- c(cans[1:5], cartons[1:5], milk[1:5], water[1:5])
validation_tags <- rep(c("can", "carton", "milk_bottle", "water_bottle"), each=5)

predicted_tags <- predict(mod, validation_imgs)

table(predicted_tags, validation_tags)
##               validation_tags
## predicted_tags can carton milk_bottle water_bottle
##   can            4      0           0            0
##   carton         0      5           0            0
##   milk_bottle    1      0           5            0
##   water_bottle   0      0           0            5

This shows that the model got 19 out of 20 predictions correct on the validation data, misclassifying one of the cans as a milk bottle.

Deployment

Publishing to a prediction resource

The code above demonstrates using the training endpoint to obtain predictions, which is really meant only for model testing and validation. In a production setting, we would normally publish a trained model to a Custom Vision prediction resource. Among other things, a user with access to the training endpoint has complete freedom to modify the model and the data, whereas access to the prediction endpoint only allows getting predictions.

Publishing a model requires knowing the Azure resource ID of the prediction resource. Here, we’ll use the resource object that we created earlier; you can also obtain this information from the Azure Portal.

# publish to the prediction resource we created above
publish_model(mod, "iteration1", pred_res)

Once a model has been published, we can obtain predictions from the prediction endpoint in a manner very similar to previously. We create a predictive service object with classification_service, and then call the predict method. Note that a required input is the project ID; you can supply this directly or via the project object. It may also take some time before a published model shows up on the prediction endpoint.

Sys.sleep(60)  # wait for Azure to finish publishing

pred_url <- pred_res$properties$endpoint
pred_key <- pred_res$list_keys()[1]

pred_endp <- customvision_prediction_endpoint(url=pred_url, key=pred_key)

project_id <- testproj$project$id
pred_svc <- classification_service(pred_endp, project_id, "iteration1")

# predictions from prediction endpoint -- same as before
predsvc_tags <- predict(pred_svc, validation_imgs)
table(predsvc_tags, validation_tags)
##              validation_tags
## predsvc_tags   can carton milk_bottle water_bottle
##   can            4      0           0            0
##   carton         0      5           0            0
##   milk_bottle    1      0           5            0
##   water_bottle   0      0           0            5

Exporting as standalone

As an alternative to deploying the model to an online predictive service resource, for example if you want to create a custom deployment solution, you can also export the model as a standalone object. This is only possible if the project was created to support exporting. The formats supported include:

ONNX 1.2
CoreML
TensorFlow or TensorFlow Lite
A Docker image for either the Linux, Windows or Raspberry Pi environment
Vision AI Development Kit (VAIDK)

To export the model, simply call export_model and specify the target format. This will download the model to your local machine.

export_model(mod, "tensorflow")

More information

AzureVision is part of the AzureR family of packages. This provides a range of tools to facilitate access to Azure services for data scientists working in R, such as AAD authentication, blob and file storage, Resource Manager, container services, Data Explorer (Kusto), and more. If you are interested in Custom Vision, you may also want to check out CustomVision.ai, which is an interactive frontend for building Custom Vision models.

AzureQstor: R interface to Azure Queue Storage now on GitHub from Revolutions

Matthew Emerick
05 May 2020
2 min read
This post is to announce that the AzureQstor package is now on GitHub. AzureQstor provides an R interface to Azure queue storage, building on the facilities provided by AzureStor.

Queue Storage is a service for storing large numbers of messages, for example from automated sensors, that can be accessed remotely via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account. Queue storage is often used to create a backlog of work to process asynchronously.

AzureQstor uses a combination of S3 and R6 classes. The queue endpoint is an S3 object for compatibility with AzureStor, while R6 classes are used to represent queues and messages.

library(AzureQstor)

endp <- storage_endpoint("https://mystorage.queue.core.windows.net", key="access_key")

# creating, retrieving and deleting queues
create_storage_queue(endp, "myqueue")
qu <- storage_queue(endp, "myqueue")

qu2 <- create_storage_queue(endp, "myqueue2")
delete_storage_queue(qu2)

The queue object exposes methods for getting (reading), peeking, deleting, updating, popping (reading and deleting) and putting (writing) messages:

qu$put_message("Hello queue")

msg <- qu$get_message()
msg$text
## [1] "Hello queue"

# get several messages at once
qu$get_messages(n=30)

The message object exposes methods for deleting and updating the message:

msg$update(visibility_timeout=30, text="Updated message")
msg$delete()

You can also get and set metadata for a queue with the AzureStor get/set_storage_metadata generics:

get_storage_metadata(qu)
set_storage_metadata(qu, name1="value1", name2="value2")

It’s anticipated that AzureQstor will be submitted to CRAN before long. If you are a queue storage user, please install it and give it a try; any feedback or bug report is much appreciated. You can email me or open an issue on GitHub.


R 4.0.0 now available, and a look back at R's history from Revolutions

Matthew Emerick
27 Apr 2020
4 min read
R 4.0.0 was released in source form on Friday, and binaries for Windows, Mac and Linux are available for download now. As the version number bump suggests, this is a major update to R that makes some significant changes. Some of these changes — particularly the first one listed below — are likely to affect the results of R's calculations, so I would not recommend running scripts written for prior versions of R without validating them first. In any case, you'll need to reinstall any packages you were using once you upgrade to R 4.0.0. (You might find this R script useful for checking what packages you have installed for R 3.x.)

You can find the full list of changes and fixes in the NEWS file (it's long!), but here are the biggest changes:

Imported string data is no longer converted to factors. The stringsAsFactors option, which since R's inception defaulted to TRUE to convert imported string data to factor objects, is now FALSE. This default was probably the biggest stumbling block for prior users of R: it made statistical modeling a little easier and used a little less memory, but at the expense of confusing behavior on data you probably thought was ordinary strings. This change broke backward compatibility for many packages (mostly now updated on CRAN), and likely affects your own scripts unless you were diligent about including explicit stringsAsFactors declarations in your import function calls.

A new syntax for specifying raw character strings. You can use syntax like r"(any characters except right paren)" to define a literal string. This is particularly useful for HTML code, regular expressions, and other strings that include quotes or backslashes that would otherwise have to be escaped.

An enhanced reference counting system. When you delete an object in R, it usually releases the associated memory back to the operating system. Likewise, if you copy an object with y <- x, R won't allocate new memory for y unless x is later modified. In prior versions of R, however, that system breaks down if there are more than 2 references to any block of memory. Starting with R 4.0.0, all references will be counted, and so R should reclaim as much memory as possible, reducing R's overall memory footprint. This will have no impact on how you write R code, but this change makes R run faster, especially on systems with limited memory and with slow storage systems.

Normalization of matrix and array types. Conceptually, a matrix is just a 2-dimensional array. But prior versions of R handle matrix and 2-D array objects differently in some cases. In R 4.0.0, matrix objects will formally inherit from the array class, eliminating such inconsistencies.

A refreshed color palette for charts. The base graphics palette for prior versions of R (shown as R3 below) features saturated colors that vary considerably in brightness (for example, yellow doesn't display as prominently as red). In R 4.0.0, the palette R4 below will be used, with colors of consistent luminance that are easier to distinguish, especially for viewers with color deficiencies. Additional palettes will make it easy to make base graphics charts that match the color scheme of ggplot2 and other graphics systems.

Performance improvements. The grid graphics system has been revamped (which improves the rendering speed of ggplot2 graphics in particular), socket connections are faster, and various functions have been sped up. Cairo graphics devices have been updated to support more fonts and symbols, an improvement particularly relevant to Linux-based users of R.
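As a quick illustration of the first, second and fourth changes listed above (a sketch, not output taken from the post; the commented results reflect default behavior in each version):

# stringsAsFactors now defaults to FALSE
df <- data.frame(fruit = c("apple", "banana"))
class(df$fruit)
## R 3.x:   "factor"
## R 4.0.0: "character"

# New raw string syntax: no escaping of backslashes or quotes needed
path <- r"(C:\Users\me\Documents\report.html)"

# Matrix objects now also inherit from the array class
class(matrix(1:4, nrow = 2))
## R 3.x:   "matrix"
## R 4.0.0: "matrix" "array"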
R version 4 represents a major milestone in the history of R. It's been just over 20 years since R 1.0.0 was released on February 29 2000, and the history of R extends even further back than that. If you're interested in the other major milestones, I cover R's history in this recent talk for the SatRDays DC conference. For the details on the R 4.0.0 release, including the complete list of changes, check out the announcement at the link below. R-announce archives: R 4.0.0 is released


Major update to checkpoint package now available for beta test from Revolutions

Matthew Emerick
21 Apr 2020
4 min read
I’m Hong Ooi, data scientist with Microsoft Azure Global, and maintainer of the checkpoint package.

The checkpoint package makes it easy for you to freeze R packages in time, drawing from the daily snapshots of the CRAN repository that have been archived at MRAN since 2014. Checkpoint has been around for nearly 6 years now, helping R users solve the reproducible research puzzle.

In that time, it’s seen many changes, new features, and, inevitably, bug reports. Some of these bugs have been fixed, while others remain outstanding in the too-hard basket. Many of these issues spring from the fact that it uses only base R functions, in particular install.packages, to do its work. The problem is that install.packages is meant for interactive use, and as an API, is very limited. For starters, it doesn’t return a result to the caller—instead, checkpoint has to capture and parse the printed output to determine whether the installation succeeded. This causes a host of problems, since the printout will vary based on how R is configured. Similarly, install.packages refuses to install a package if it’s in use, which means checkpoint must unload it first—an imperfect and error-prone process at best.

In addition to these, checkpoint’s age means that it has accumulated a significant amount of technical debt over the years. For example, there is still code to handle ancient versions of R that couldn’t use HTTPS, even though the MRAN site (in line with security best practice) now accepts HTTPS connections only.

I’m happy to announce that checkpoint 1.0 is now in beta. This is a major refactoring/rewrite, aimed at solving these problems. The biggest change is to switch to pkgdepends for the backend, replacing the custom-written code using install.packages. This brings the following benefits:

Caching of downloaded packages. Subsequent checkpoints using the same MRAN snapshot will check the package cache first, saving possible redownloads.
Allow installing packages which are in use, without having to unload them first.
Comprehensive reporting of all aspects of the install process: dependency resolution, creating an install plan, downloading packages, and actual installation.
Reliable detection of installation outcomes (no more having to screen-scrape the R window).

In addition, checkpoint 1.0 features experimental support for a checkpoint.yml manifest file, to specify packages to include or exclude from the checkpoint. You can include packages from sources other than MRAN, such as Bioconductor or Github, or from the local machine; similarly, you can exclude packages which are not publicly distributed (although you’ll still have to ensure that such packages are visible to your checkpointed session).

The overall interface is still much the same. To create a checkpoint, or use an existing one, call the checkpoint() function:

library(checkpoint)
checkpoint("2020-01-01")

This calls out to two other functions, create_checkpoint and use_checkpoint, reflecting the two main objectives of the package. You can also call these functions directly. To revert your session to the way it was before, call uncheckpoint(). One difference to be aware of is that function names and arguments now consistently use snake_case, reflecting the general style seen in the tidyverse and related frameworks. The names of ancillary functions have also been changed, to better reflect their purpose, and the package size has been significantly reduced. See the help files for more information.
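The post names create_checkpoint(), use_checkpoint() and uncheckpoint() without showing them directly; a minimal sketch of how the split might be used, assuming the snapshot date is the only required argument (other arguments are not covered here):

library(checkpoint)

# One-time setup: build a project library from the 2020-01-01 MRAN snapshot
create_checkpoint("2020-01-01")

# In later sessions: switch the library paths and CRAN mirror to that snapshot
use_checkpoint("2020-01-01")

# ...work with the frozen package versions...

# Restore the session to its pre-checkpoint state
uncheckpoint()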
There are two main downsides to the change, both due to known issues in the current pkgdepends/pkgcache chain:

For Windows and MacOS, creating a checkpoint fails if there are no binary packages available at the specified MRAN snapshot. This generally happens if you specify a snapshot that either predates or is too far in advance of your R version. As a workaround, you can use the r_version argument to create_checkpoint to install binaries intended for a different R version.
There is no support for a local MRAN mirror (accessed via a file:// URL). You must either use the standard MRAN site, or have an actual webserver hosting a mirror of MRAN.

It’s anticipated that these will both be fixed before pkgdepends is released to CRAN. You can get the checkpoint 1.0 beta from GitHub:

remotes::install_github("RevolutionAnalytics/checkpoint")

Any comments or feedback will be much appreciated. You can email me directly, or open an issue at the repo.


Entity Framework Core Migrations from C# Corner

Matthew Emerick
18 Feb 2020
1 min read
Eric Vogel uses code samples and screenshots to demonstrate how to do Entity Framework Core migrations in a .NET Core application through the command line and in code.

OpenJS Foundation accepts Electron.js in its incubation program

Fatema Patrawala
12 Dec 2019
3 min read
Yesterday, at the Node+JS Interactive event in Montreal, the OpenJS Foundation announced the acceptance of Electron into the Foundation’s incubation program. The OpenJS Foundation provides vendor-neutral support for sustained growth within the open source JavaScript community. It's supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Intel, Joyent, and Microsoft.

Electron is an open source framework for building desktop apps using JavaScript, HTML, and CSS; it is based on Node.js and Chromium. Additionally, Electron is widely used in many well-known applications including Discord, Microsoft Teams, OpenFin, Skype, Slack, Trello, Visual Studio Code, etc.

“We’re heading into 2020 excited and honored by the trust the Electron project leaders have shown through this significant contribution to the new OpenJS Foundation,” said Robin Ginn, Executive Director of the OpenJS Foundation. He further added, “Electron is a powerful development tool used by some of the most well-known companies and applications. On behalf of the community, I look forward to working with Electron and seeing the amazing contributions they will make.”

Electron’s cross-platform capabilities make it possible to build and run apps on Windows, Mac, and Linux computers. Initially developed by GitHub in 2013, today the framework is maintained by a number of developers and organizations. Electron is suited for anyone who wants to ship visually consistent, cross-platform applications, fast and efficiently.

“We’re excited about Electron’s move to the OpenJS Foundation and we see this as the next step in our evolution as an open source project,” said Jacob Groundwater, Manager at ElectronJS and Principal Engineering Manager at Microsoft. “With the Foundation, we’ll continue on our mission to play a prominent role in the adoption of web technologies by desktop applications and provide a path for JavaScript to be a sustainable platform for desktop applications. This will enable the adoption and development of JavaScript in an environment that has traditionally been served by proprietary or platform-specific technologies.”

What this means for developers

Electron joining the OpenJS Foundation does not change how Electron is made, released, or used — and does not directly affect developers building applications with Electron. Even though Electron was originally created at GitHub, it is currently maintained by a number of organizations and individuals. In 2019, Electron codified its governance structure and invested heavily into formalizing how decisions affecting the entire project are made. The Electron team believes that having multiple organizations and developers investing in and collaborating on Electron makes the project stronger.

Hence, lifting Electron up from being owned by a single corporate entity and moving it into a neutral foundation focused on supporting the web and JavaScript ecosystem is a natural next step as the project matures in the open-source ecosystem. To know more about this news, check out the official announcement on the OpenJS Foundation website.

The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
Node.js and JS Foundations are now merged into the OpenJS Foundation
Denys Vuika on building secure and performant Electron apps, and more


WireGuard to be merged with Linux net-next tree and will be available by default in Linux 5.6

Savia Lobo
12 Dec 2019
3 min read
On December 9, WireGuard announced that its secure VPN tunnel kernel code will soon be included in the Linux net-next tree. This indicates that “WireGuard will finally reach the mainline kernel with the Linux 5.6 cycle kicking off in late January or early February!”, reports Phoronix.

WireGuard is a layer 3 secure networking tunnel made specifically for the kernel, which aims to be much simpler and easier to audit than IPsec. On December 8, Jason Donenfeld, WireGuard’s lead developer, sent out patches for the net-next v2 WireGuard. “David Miller has already pulled in WireGuard as the first new feature in net-next that is destined for Linux 5.6 now that the 5.5 merge window is over,” the email thread mentions.

While WireGuard was initiated as a Linux project, its Windows, macOS, BSD, iOS, and Android versions are already available. The reason behind the delay for Linux was that Donenfeld disliked Linux’s built-in cryptographic subsystem, citing that its API is too complex and difficult. Donenfeld had planned to introduce a new cryptographic subsystem — his own Zinc library. However, this didn’t go down well with several developers, as they thought that rewriting the cryptographic subsystem was a waste of time. Fortunately for Donenfeld, Linus Torvalds was on his side. Torvalds stated, “I’m 1000% with Jason on this. The crypto/model is hard to use, inefficient, and completely pointless when you know what your cipher or hash algorithm is, and your CPU just does it well directly.” Finally, Donenfeld compromised, saying, "WireGuard will get ported to the existing crypto API. So it's probably better that we just fully embrace it, and afterward work evolutionarily to get Zinc into Linux piecemeal." Hence a few Zinc elements have been imported into the legacy crypto code in the next Linux 5.5 kernel.

WireGuard would become the new standard for Linux VPNs

This laid the foundation for WireGuard to finally ship in Linux early next year. WireGuard works by securely encapsulating IP packets over UDP. Its authentication and interface design has more to do with Secure Shell (SSH) than other VPNs. You simply configure the WireGuard interface with your private key and your peers' public keys, and you're ready to securely talk.

After its arrival, WireGuard can be expected to become the new standard for Linux VPNs thanks to its key features, namely, tiny code size, high-speed cryptographic primitives, and in-kernel design. As well as being super-fast, WireGuard for Linux would be secure too, as it supports state-of-the-art cryptography technologies such as the Noise protocol framework, Curve25519, BLAKE2, SipHash24, ChaCha20, Poly1305, and HKDF.

Donenfeld writes in the email thread, “This is big news and very exciting. Thanks to all the developers, contributors, users, advisers, and mailing list interlocutors who have helped to make this happen. In the coming hours and days, I'll be sending followups on next steps.”

ArsTechnica reports, “Although highly speculative, it's also possible that WireGuard could land in-kernel on Ubuntu 20.04 even without the 5.6 kernel—WireGuard founder Jason Donenfeld offered to do the work backporting WireGuard into earlier Ubuntu kernels directly. Donenfeld also stated today that a 1.0 WireGuard release is ‘on the horizon’.”

To know more about this news in detail, read the official email thread.

WireGuard launches an official MacOS app
Researchers find a new Linux vulnerability that allows attackers to sniff or hijack VPN connections
NCSC investigates several vulnerabilities in VPN products from Pulse Secure, Palo Alto and Fortinet


elementary OS 5.1 Hera releases with Flatpak native support, several accessibility improvements, and more

Bhagyashree R
09 Dec 2019
3 min read
Last week, the CEO and CXO of elementary OS, Cassidy James Blaede, announced the release of elementary OS 5.1, code named ‘Hera’. elementary OS is an Ubuntu-based desktop distribution which promises to be a “fast, open, and privacy-respecting” replacement for macOS and Windows. Building upon the solid foundations laid out by its predecessor Juno, Hera brings several new features including native support for Flatpak, a faster AppCenter storefront, and accessibility improvements, among other updates.

Key updates in elementary OS 5.1 Hera

Brand new greeter and onboarding

In elementary OS 5.1 Hera, the greeter and onboarding have seen major changes in order to give users an improved first-run experience. In addition to looking better, the redesigned greeter addresses some of the key reported issues including keyboard focus issues, HiDPI issues, and better localization. Hera also ships with a new Onboarding app that gives you a quick introduction to key features and also takes care of common first-run tasks like managing privacy settings.

Native Flatpak support and AppCenter updates

elementary OS 5.1 Hera comes with native support for Flatpak, an application sandboxing and distribution framework. It enables developers to create one application and distribute it to different Linux desktop distributions. Hera includes a new core elementary OS utility called Sideload that allows users to sideload Flatpak apps. Any updates to the sideloaded apps will appear in AppCenter, and apps from any user-added Flatpak remotes will show up in AppCenter as uncurated apps. Along with the Flatpak support, Blaede shared that AppCenter is now “up to 10× faster in Hera, loading the homepage and featured apps blazingly fast.”

Accessibility improvements

A bunch of accessibility features have landed in elementary OS 5.1 Hera. System Settings are now more accessible to all users. Discoverability of performance and keyboard shortcuts has been improved. Sound settings take a new approach to handling external devices, and there is a “Flash screen” option for event alerts to better manage whether alerts are audible, visual, both, or neither. The Mouse & Touchpad settings in elementary OS 5.1 Hera are now organized into sections based on different behavior. Several accessibility settings like long-press secondary click, reveal pointer, double-click speed, and control pointer using keypad have been exposed. Also, the touchpad settings now have an “Ignore when mouse is connected” toggle.

Many developers have already started trying out this release. A Hacker News user shared their first impressions in a discussion regarding this release: “I installed this on my XPS 13 this morning, and it's really nice. It has a lot of overall polish that most DE's are missing, it looks and feels cohesive. It installed without any issues, and I had no problem with my Ubuntu-leaning dotfiles. I will probably keep this for the near future, it's very pleasant.”

These were some of the updates in elementary OS 5.1 Hera. Check out the official announcement to know more about this release.

Redox OS will soon permanently run rustc, the compiler for the Rust programming language, says Redox creator Jeremy Soller
Nate Chamberlain talks about the Microsoft Enterprise Mobility and Security suite and becoming M365 certified
Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]

You can now use WebAssembly from .NET with Wasmtime!

Vincy Davis
05 Dec 2019
3 min read
Two months ago, ASP.NET Core 3.0 was released with an updated version of the Blazor framework, which allows building interactive client-side web UI with .NET. Yesterday, Peter Huene, a staff research engineer at Mozilla, shared his experience of using Wasmtime with .NET. He affirms that this approach will enable developers to programmatically load and execute WebAssembly code directly from their .NET programs.

Key benefits of using WebAssembly from .NET with Wasmtime

Share more code across platforms

Although .NET Core enables cross-platform use, developers find it difficult to use a native library, as .NET Core requires native interop and a platform-specific build for each supported platform. However, if the native library is compiled to WebAssembly, then the same WebAssembly module can be used across many different platforms and programming environments, including .NET. This simplified distribution of the library and applications will allow developers to share more code across platforms.

Securely isolate untrusted code

According to Huene, “The .NET Framework attempted to sandbox untrusted code with technologies such as Code Access Security and Application Domains, but ultimately these failed to properly isolate untrusted code.” This resulted in Microsoft deprecating its use for sandboxing and removing it from .NET Core. Huene asserts that since WebAssembly is designed for the web, a module can only call the external functions explicitly imported from a host environment and only has access to a region of memory given to it by the host. Users can leverage this design to sandbox code in a .NET program as well.

Improved interoperability with interface types

In August this year, WebAssembly’s interface types permitted users to run WebAssembly with many programming languages like Python, Ruby, and Rust. This interoperability reduced the amount of glue code necessary for passing complex types between the hosting application and a WebAssembly module. According to Huene, if Wasmtime implements official support for interface types in its .NET API in the future, it will enable a seamless exchange of complex types between WebAssembly and .NET.

Users have liked the approach of using WebAssembly from .NET with Wasmtime.

https://twitter.com/mattferderer/status/1202276545840197633
https://twitter.com/seangwright/status/1202488332011347968

To know how Peter Huene used WebAssembly from .NET, check out his demonstrations on the Mozilla Hacks blog.

Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist
.NET Framework API Porting Project concludes with .NET Core 3.0
Wasmer’s first Postgres extension to run WebAssembly is here!
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module
Introducing SwiftWasm, a tool for compiling Swift to WebAssembly


Introducing Firefox Replay, a tool that allows Firefox tabs to record, replay, and rewind their behavior

Bhagyashree R
02 Dec 2019
3 min read
Mozilla is constantly putting effort into improving Firefox’s devtools. One such effort is Firefox Replay, an experimental tool that allows Firefox content processes to record their behavior so that it can be replayed and rewound later. The main highlight of Firefox Replay is the “code timeline” that enables you to scan through every code execution at a glance. Along with execution points, the timeline also shows exceptions, events, and network requests in real time. It also allows you to save your recordings and pick up where you left off afterward.

How Firefox Replay works

The record and replay behavior is achieved by “controlling the non-determinism in the browser.” Initially, it records non-deterministic behaviors (intra-thread and inter-thread) and then replays them later to “force the browser to behave deterministically.” Firefox Replay includes IPC integration to enable communication between a recording or replaying process and the chrome process. Its rewind infrastructure allows a replaying process to restore a previous state. Its debugger integration enables the JS debugger to read the required information from a replaying process and control the process’s execution.

Firefox Replay is not officially released yet; however, Mac users can give it a try by downloading the nightly builds. Since it is still experimental, Firefox Replay is disabled by default. You can turn it on with the ‘devtools.recordreplay.enabled’ preference.

Read also: Firefox Nightly browser: Debugging your app is now fun with Mozilla’s new ‘time travel’ feature

The team is working on support for other platforms as well. “Windows port work is underway but is not yet working. The difficulties are in figuring out the set of system library APIs to intercept, in getting the memory management and dirty memory parts of the rewind infrastructure to work, and in handling the different graphics and IPC pathways on different platforms,” the official doc reads.

In a discussion on Hacker News, many users were excited to try out this tool. A user commented, “This might be enough to get me to use Firefox to develop with. This could be huge for its market share, a big part of the reason chrome was able to become so popular was because of how good its devtools were (compared to the competition at the time). Firefox definitely managed to catch up but not before lots of devs switched to chrome and stopped checking for compatibility with Firefox.”

“This will be an absolute game-changer for web development. I am currently working on a really simplified version of this but as a chrome extension. We deal with a lot of real-time data and have been facing some timing issues (network and user input) which are really hard to reproduce,” another user added.

Check out Mozilla’s official docs to know more in detail.

Firefox 70 released with better security, CSS, and JavaScript improvements
The new WebSocket Inspector will be released in Firefox 71
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70