
Tech News - Application Development

279 Articles

NVTOP: An htop-like monitoring tool for NVIDIA GPUs on Linux

Prasad Ramesh
09 Oct 2018
2 min read
People started using htop when plain top just didn't provide enough information. Now there is NVTOP, a tool that looks similar to htop but displays information about the processes running on your NVIDIA GPU. It works on Linux systems and shows detailed per-process information: the memory used, which GPU a process runs on, and the total GPU and memory usage. The first version of this tool was released in July last year. The latest change made the process list and command options scrollable.

Some of the features of NVTOP are:

- Sorting by column
- Selecting or ignoring a specific GPU by ID
- Killing a selected process
- A monochrome option

It has multi-GPU support and can display the running processes from all of your GPUs. The information printed out is similar to what htop would display. (Image source: GitHub)

There is also a manual page to give some guidance in using NVTOP. It can be accessed with this command: man nvtop

There are OS-specific installation steps on GitHub for Ubuntu/Debian, Fedora/RedHat/CentOS, OpenSUSE, and Arch Linux.

Requirements

Two libraries are needed to build and run NVTOP:

- The NVIDIA Management Library (NVML), for querying GPU information.
- The ncurses library, for the colorful user interface.

Supported GPUs

The NVTOP tool works only for NVIDIA GPUs and runs on Linux systems. One of its dependencies, the NVML library, does not support some queries on GPUs older than the Kepler microarchitecture; that is, anything before the GeForce 600 series, GeForce 700 series, or GeForce 800M lines will likely not work. For AMD users, there is a tool called radeontop.

The tool is provided under the GPLv3 license. For more details, head to the NVTOP GitHub repository.

- NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
- NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499
- NVIDIA open sources its material definition language, MDL SDK
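If your distribution doesn't package NVTOP yet, the project README describes a standard out-of-tree CMake build. The following is a sketch of that flow, assuming CMake, make, the ncurses headers, and the NVIDIA drivers (with NVML) are already installed:

    # fetch and build NVTOP from source (sketch based on the README's CMake flow)
    git clone https://github.com/Syllo/nvtop.git
    cd nvtop
    mkdir build && cd build
    cmake ..
    make
    sudo make install

    # run it, or consult the manual page mentioned above
    nvtop
    man nvtop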


GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution

Bhagyashree R
10 Jun 2019
2 min read
Yesterday, GitHub introduced 'Template repository', which lets you share boilerplate code and directory structure across projects easily. This is similar to the idea of 'Boilr' and 'Cookiecutter'.

https://twitter.com/github/status/1136671651540738048

How to create a GitHub template repository?

As per its name, 'Template repository' enables developers to mark a repository as a template, which they can use later for creating new repositories containing all of the template repository's files and folders. You can create a new template repository, or mark an existing one as a template, with admin permissions: just navigate to the Settings page and then click on the 'Template repository' checkbox.

Once the template repository is created, anyone who has access to it will be able to generate a new repository with the same directory structure and files via the 'Use this template' button. (Image source: GitHub)

All the templates that you own, have access to, or have used in a previous project will also be available to you when creating a new repository, through the 'Choose a template' drop-down.

Every template repository gets a new '/generate' URL endpoint that allows you to distribute your template more efficiently: you just need to link your template users directly to this endpoint. (Image source: GitHub)

Templating is similar to cloning a repository, except that it does not retain the history of the repository and gives users a clean new project with an initial commit. Though this function is still pretty basic, and GitHub will add more functionality in the future, it will be useful for junior developers and beginners to help them get started.

Here's what a Hacker News user believes we can do with this feature:

“This is a part of something which could become a very powerful pattern: community-wide templates which include many best practices in a single commit:
- Pre-commit hooks for linting/formatting and unit tests.
- Basic CI pipeline configuration with at least build, test and release/deploy phases.
- Package installation configuration for the frameworks you want.
- Container/VM configuration for the languages you want to enable cross-platform and future-proof development.
- Documentation to get started with it all.”

Read the official announcement by GitHub for more details.

- Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
- GitHub Satellite 2019 focuses on community, security, and enterprise
- Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
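Beyond the web UI, the same capability surfaced in GitHub's REST API as a preview endpoint for generating a repository from a template. The sketch below assumes that preview endpoint and media type, a personal access token in the TOKEN environment variable, and a global fetch (Node 18+ or a polyfill such as node-fetch):

    // Sketch: generate a new repository from a template repository.
    // The '/generate' endpoint was in preview at the time, hence the
    // 'baptiste-preview' Accept header (an assumption to verify in the docs).
    async function generateFromTemplate(
      templateOwner: string,
      templateRepo: string,
      newName: string
    ): Promise<void> {
      const res = await fetch(
        `https://api.github.com/repos/${templateOwner}/${templateRepo}/generate`,
        {
          method: "POST",
          headers: {
            Accept: "application/vnd.github.baptiste-preview+json",
            Authorization: `token ${process.env.TOKEN}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ name: newName, private: true }),
        }
      );
      if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
      const repo = await res.json();
      console.log(`Created ${repo.full_name}`);
    }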


LLVM officially migrating to GitHub from Apache SVN

Prasad Ramesh
14 Jan 2019
2 min read
In October last year, it was reported that LLVM (Low-Level Virtual Machine) was moving from Apache Subversion (SVN) to GitHub. Now the migration is complete and LLVM is available on GitHub. This transition was long under discussion and is now officially done. LLVM is a toolkit for creating compilers, optimizers, and runtime environments.

One motivation for the move: continuous integration in LLVM sometimes broke because the SVN server was down. The project migrated to GitHub for services SVN lacked, such as better 24/7 stability, disk space, code browsing, and forking. GitHub is also used by most of the LLVM community, and there were already unofficial mirrors on GitHub before this official migration.

Last week, James Y Knight from the LLVM team wrote to a mailing list: “The new official monorepo is published to LLVM's GitHub organization, at: https://github.com/llvm/llvm-project. At this point, the repository should be considered stable -- there won't be any more rewrites which invalidate commit hashes (barring some _REALLY_ good reason...)”

Along with LLVM, this monorepo also hosts Clang, LLD, Compiler-RT, and other LLVM sub-projects. Commits were being made to the LLVM GitHub repository even at the time of writing, and the repo currently has about 200 stars. Updated workflow documents and instructions for migrating in-progress user work are being drafted and will be available soon.

This move was initiated after positive responses from LLVM community members to the proposal to migrate to GitHub. If you want to stay up to date with more details, you can follow the LLVM mailing list.

- LLVM will be relicensing under Apache 2.0 start of next year
- A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
- LLVM 7.0.0 released with improved optimization and new tools for monitoring
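Checking out the new monorepo is an ordinary Git clone; the URL comes straight from the announcement quoted above (the shallow variant is a common shortcut, since the full history is large):

    # clone the official LLVM monorepo
    git clone https://github.com/llvm/llvm-project.git

    # or fetch only the latest snapshot to save time and disk space
    git clone --depth 1 https://github.com/llvm/llvm-project.git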


GNOME 3.32 released with fractional scaling, improvements to desktop, web and much more

Amrata Joshi
14 Mar 2019
3 min read
Yesterday, the team at GNOME released GNOME 3.32, the latest version of GNOME 3, a free and open-source desktop environment for Unix-like operating systems. This release comes with fractional scaling and improvements to the desktop, the web browser, and much more.

What's new in GNOME 3.32?

Fractional scaling
Fractional scaling is available as an experimental option that offers several fractional scale values with good visual quality on any given monitor. This feature is a major enhancement for the GNOME desktop. It requires manually adding scale-monitor-framebuffer to the org.gnome.mutter experimental-features settings key.

Improved data structures in the GNOME desktop
This release improves foundation data structures in the GNOME desktop, giving a faster, snappier feel to the animations, icons, and top shell panel. The search database has been improved, which makes searching faster. Even the on-screen keyboard has been improved; it now supports an emoji chooser.

New automation mode in GNOME Web
GNOME Web now comes with a new automation mode, which allows the application to be controlled by WebDriver. The reader mode has been enhanced and now features a set of customizable preferences and an improved style. With this release, touchpad users can take advantage of more gestures while browsing; for example, swipe left or right to go back or forward through the browsing history.

New settings for permissions
Settings comes with a new "Application Permissions" panel that shows resources and permissions for various applications, including installed Flatpak applications. Users can now grant permissions to certain resources when requested by the application. The Sound settings have been reworked to support a vertical layout and a more intuitive placement of options. With this release, the night light color temperature can also be adjusted for a warmer or cooler setting.

GNOME Boxes
GNOME Boxes tries to enable 3D acceleration for virtual machines if both the guest and the host support it. This leads to better performance of graphics-intensive guest applications such as games and video editors.

Application management from multiple sources
This release can handle apps available from multiple sources, such as Flatpak and distribution repositories. Flatpak app entries now list the required permissions on the details page, giving users a comprehensive understanding of what data the software will need access to. Browsing application details also gets faster thanks to the new XML parsing library used in this release.

To know more about this release, check out the official announcement.

- GNOME team adds Fractional Scaling support in the upcoming GNOME 3.32
- GNOME 3.32 says goodbye to application menus
- Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
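The article names only the settings key, so for reference, enabling the experimental option from a terminal typically looks like the following (a sketch of the commonly documented Mutter invocation; experimental features may change or break):

    # opt in to experimental fractional scaling in Mutter
    gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

    # verify the current value
    gsettings get org.gnome.mutter experimental-features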


Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!

Amrata Joshi
27 Apr 2019
3 min read
This week, the team behind the Python extension for Visual Studio Code announced a new release. This release comes with an enhanced Variable Explorer and Data Viewer, as well as improvements to the Python Language Server.

What's new in Python in Visual Studio Code?

Enhanced Variable Explorer and Data Viewer
This release comes with a built-in Variable Explorer along with a Data Viewer, which helps users easily view, inspect, and filter the variables in their application, including lists, NumPy arrays, pandas data frames, and more. The release shows a section for variables while running code and cells in the Python Interactive window. On expanding it, users can see a list of the variables in the current Jupyter session; more variables automatically show up as they get used in the code, and the list can be sorted by clicking on each column header. Users can double-click on a row, or use the "Show variable in Data Viewer" button, to view the full data of a variable in the newly added Data Viewer and perform a simple search over its values.

Improvements to debug configuration
In this release, the process of configuring the debugger has been simplified. If a user starts debugging through the Debug Panel and no debug configuration exists, the user is now prompted to create one for their application. Instead of manually editing the launch.json file, users can create a debug configuration through a set of menus.

Improvements to the Python Language Server
This release comes with fixes and improvements to the Python Language Server. The team has added back the features that were removed in the 0.2 release, including "Rename Symbol", "Go to Definition", and "Find All References". Also, loading time and memory usage have been improved when importing scientific libraries such as pandas, Plotly, and PyQt5, especially when running in full Anaconda environments.

Read also: Visualizing data in R and Python using Anaconda [Tutorial]

Major changes
- The default behavior of the debugger has been changed to display return values.
- "Unit Test" has been renamed to "Test" or "Testing".
- The debugStdLib setting has been replaced with justMyCode.
- This release adds a setting to enable/disable the data science code lens.
- The reliability of test discovery when using pytest has been improved.

Bug fixes
- Issues with cell spacing have been resolved.
- Problems with errors not showing up for imports have been fixed.
- Issues with tabs in the comments section have been fixed.

To know more about this news, check out Microsoft's official blog post.

- Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
- Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
- Debugging and Profiling Python Scripts [Tutorial]
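For reference, the menu-driven flow described above ends up writing a launch.json entry much like this hand-written sketch (the field values are common defaults, not taken from the announcement):

    // .vscode/launch.json -- a typical generated Python debug configuration
    {
      "version": "0.2.0",
      "configurations": [
        {
          "name": "Python: Current File",
          "type": "python",
          "request": "launch",
          "program": "${file}",
          "justMyCode": true  // successor to the removed debugStdLib setting
        }
      ]
    }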


GitHub now allows issue transfer between repositories; a public beta version

Savia Lobo
01 Nov 2018
3 min read
Yesterday, GitHub announced that repository admins can now transfer issues from one repository to another, better-fitting repository, to help those issues find their home. This feature is currently in public beta.

Nat Friedman, CEO of GitHub, said in his tweet: “We've just shipped the ability to transfer an issue from one repo to another. This is one of the most-requested GitHub features. Feels good!”

When a user transfers an issue, the comments, assignees, and issue timeline events are retained. The issue's labels, projects, and milestones are not retained, although users can see past activity in the issue's timeline. People or teams who are mentioned in the issue receive a notification letting them know that the issue has been transferred to a new repository. The original URL redirects to the new issue's URL. People who don't have read permissions in the new repository see a banner letting them know that the issue has been transferred to a repository they can't access.

Permission levels for issue transfer between repositories

People with an owner or team maintainer role can manage repository access with teams, and each team can have different repository access permissions. There are three types of repository permissions, i.e. Read, Write, and Admin, available for people or teams collaborating on repositories that belong to an organization.

To transfer an open issue to another repository, the user needs admin permissions on both the repository the issue is in and the repository the issue is to be transferred to. If the issue is being transferred from a repository that's owned by an organization you are a member of, you must transfer it to another repository within your organization. To know more about the repository permission levels, visit the GitHub Help blog post.

Steps to transfer an open issue to another repository
1. On GitHub, navigate to the main page of the repository.
2. Under your repository name, click Issues.
3. In the list of issues, click the issue you'd like to transfer.
4. In the right sidebar, click "Transfer this issue".
5. In "Choose a repository," select the repository you want to transfer the issue to.
6. Click "Transfer issue".

- GitHub Business Cloud is now FedRAMP authorized
- GitHub updates developers and policymakers on EU copyright Directive at Brussels
- GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage
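Issue transfer is also exposed programmatically through GitHub's GraphQL API as a transferIssue mutation. The sketch below assumes that mutation, the GraphQL node IDs of the issue and the target repository (not the issue number), a token in the TOKEN environment variable, and a global fetch:

    // Sketch: transfer an issue between repositories via GraphQL.
    async function transferIssue(issueId: string, repositoryId: string): Promise<void> {
      const query = `
        mutation($issueId: ID!, $repositoryId: ID!) {
          transferIssue(input: { issueId: $issueId, repositoryId: $repositoryId }) {
            issue { url }
          }
        }`;
      const res = await fetch("https://api.github.com/graphql", {
        method: "POST",
        headers: { Authorization: `bearer ${process.env.TOKEN}` },
        body: JSON.stringify({ query, variables: { issueId, repositoryId } }),
      });
      const data = await res.json();
      console.log("Issue now lives at:", data.data.transferIssue.issue.url);
    }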

Exploring Forms in Angular – types, benefits and differences

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting up dynamic pages and meta tags, we need to deal with multiple input elements and value types; handling them poorly could seriously hinder our work in terms of data flow control, data validation, or user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis – a revised edition of a bestseller that includes coverage of the Angular routing module, an expanded discussion of the Angular CLI, and detailed instructions for deploying apps on Azure, as well as on both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could throw some errors such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, that would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

- Template-Driven Forms
- Model-Driven Forms, also known as Reactive Forms

Both are highly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them. Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two. As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

- Building the form in the .html template file
- Binding data to the various input fields using an ngModel instance
- Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like (note the #name alias, which must match the control name referenced by the validation message):

    <form novalidate autocomplete="off" #form="ngForm" (ngSubmit)="onSubmit(form)">
      <input type="text" name="name" value="" required
        placeholder="Insert the city name..."
        [(ngModel)]="city.Name" #name="ngModel" />
      <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
        Name is a required field: please enter a valid city name.
      </span>
      <button type="submit" name="btnSubmit" [disabled]="form.invalid">
        Submit
      </button>
    </form>

Here, we can access any element, including the form itself, with some convenient aliases – the attributes with the # sign – and check for their current states to create our own validation workflow. These states are provided by the framework and will change in real time, depending on various things: touched, for example, becomes true when the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed; and so on.
We used both touched and dirty in the preceding example because we want our validation message to only be shown if the user moves their focus to the <input name="name"> and then goes away, leaving it blank by either deleting its value or not setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach. Here are the main advantages of Template-Driven Forms:

- They are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
- They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element will have a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases that have been hooked to other elements that we can also see, or to the form itself.

Here are their weaknesses:

- Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
- For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
- Their readability will quickly drop as we add more and more validators and input tags. Keeping all their logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit more from their simplicity. On top of that, they are quite like the typical HTML code we're already used to (assuming that we do have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases and throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.

For additional information on Template-Driven Forms, we highly recommend that you read the official Angular documentation at: https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they will eventually produce, and the scaling difficulties will eventually lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms, which are the exact same thing. The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way.
The outcome of doing this is as follows:

    <form [formGroup]="form" (ngSubmit)="onSubmit()">
      <input formControlName="name" required />
      <span *ngIf="(form.get('name').touched || form.get('name').dirty)
            && form.get('name').errors?.required">
        Name is a required field: please enter a valid city name.
      </span>
      <button type="submit" name="btnSubmit" [disabled]="form.invalid">
        Submit
      </button>
    </form>

As we can see, the amount of required code is much lower. Here's the underlying form model that we will define in the component class file (note that the control is registered as name, matching the formControlName used in the template):

    import { OnInit } from '@angular/core';
    import { FormGroup, FormControl } from '@angular/forms';

    class ModelFormComponent implements OnInit {
      form: FormGroup;

      ngOnInit() {
        this.form = new FormGroup({
          name: new FormControl()
        });
      }
    }

Let's try to understand what's happening here:

- The form property is an instance of FormGroup and represents the form itself.
- FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
- Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
- Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, including its actual value.
- Each FormGroup instance encapsulates the state of each child control, meaning that it will only be valid if/when all its children are also valid.

Also, note that we have no way of accessing the FormControls directly like we were doing in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.

At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button. On top of that, checking the state of the input elements takes a bigger amount of source code, since they have no aliases we can use. What's the real deal, then? To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

Fig 1: Template-Driven Forms schematic

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template; the HTML form elements are directly bound to the DataModel component, represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table. That DataModel will be updated as soon as the user changes something, that is, unless a validator prevents them from doing that. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself, using the information in the data bindings defined within our template. This is what Template-Driven actually means: the template is calling the shots.
Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

Fig 2: Model-Driven/Reactive Forms schematic

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its children input elements) that is presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.

Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven (or Reactive) alternative. We took some valuable time to analyze the pros and cons of both, and then we made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.
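As a brief supplement to the excerpt, here is a sketch of where the Reactive approach pays off programmatically: attaching a validator and reacting to value changes entirely from the component class. The component and control names follow the excerpt; everything else is illustrative, not code from the book:

    // Sketch: programmatic control over a Reactive form.
    import { Component, OnInit } from '@angular/core';
    import { FormGroup, FormControl, Validators } from '@angular/forms';

    @Component({
      selector: 'app-model-form',
      templateUrl: './model-form.component.html'
    })
    export class ModelFormComponent implements OnInit {
      form!: FormGroup;

      ngOnInit(): void {
        this.form = new FormGroup({
          // the validator lives in code, not in the template
          name: new FormControl('', Validators.required)
        });

        // react to user edits as they happen
        this.form.get('name')!.valueChanges.subscribe(value => {
          console.log('name changed to:', value);
        });
      }

      onSubmit(): void {
        if (this.form.valid) {
          console.log('submitting', this.form.value);
        }
      }
    }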


Qml.Net: A new C# library for cross-platform .NET GUI development

Prasad Ramesh
10 Aug 2018
3 min read
Qml.Net is a C# library for cross-platform GUI development with a native dependency. It exposes the required object types to host a QML engine. In Qml.Net, QML and JavaScript together form the UI layer; it can be thought of as the view in MVC.

Qml.Net features

The PInvoke code in this .NET library is hand-crafted by developer Paul Knopf to ensure appropriate memory management and pointer ownership semantics. He is pretty confident about the library and mentions in his blog, “I’d bet you couldn’t generate a segfault, even if you wanted to.”

In Qml.Net, C# objects can be registered to be treated as QML components, and you can then interoperate with them as you would with regular JavaScript objects. The registered C# objects serve as a portal through which the QML world can interact with your .NET objects. This has the added benefit of keeping your business and UI concerns cleanly separated, and there are no chatty PInvoke calls for rendering. It is a great match.

A pre-compiled portable installation of Qt and the native C wrapper is available for Windows, OSX, and Linux, so developers don't have to bother with C/C++. All you need to know is QML, C#, and JavaScript; QML is fairly simple. QML can't really be classified as a language in the semantic sense; more appropriately, it can be considered a combination of JSON and JavaScript.

Qml.Net support and working

Qml.Net works with any .NET language, including the popular C# as well as functional languages like F#. Your libraries reference the pure .NET NuGet package, Qml.Net. The host process (Program.Main) references the native NuGet package, which depends on the OS you are on:

- Qml.Net.WindowsBinaries
- Qml.Net.OSXBinaries
- Qml.Net.LinuxBinaries

Paul currently only tests his own models, which are C# objects registered with the QML engine; they are specific to each control/page.

Since Microsoft's announcement of .NET Core, there hasn't been a clear story for cross-platform GUI development. Although Microsoft plans to support WPF in .NET Core 3.0, it will be limited to Windows machines. With community involvement and support, Qml.Net can be a potential game changer. You can head to the GitHub repository and also view some hosted examples to get a better idea.

Read next
- Exciting New Features in C# 8.0
- .NET Core completes move to the new compiler – RyuJIT
- Microsoft Azure's new governance DApp: An enterprise blockchain without mining


Git 2.23 released with two new commands ‘git switch’ and ‘git restore’, a new tutorial, and much more!

Amrata Joshi
19 Aug 2019
4 min read
Last week, the team behind Git released Git 2.23, which comes with experimental commands, backward-compatibility fixes, and much more. This release received contributions from over 77 contributors, 26 of them new.

What's new in Git 2.23?

Experimental commands
This release comes with a new pair of experimental commands, git switch and git restore, providing a better interface for git checkout. “Two new commands 'git switch' and 'git restore' are introduced to split 'checking out a branch to work on advancing its history' and 'checking out paths out of the index and/or a tree-ish to work on advancing the current history' out of the single 'git checkout' command,” the official mail thread reads.

git checkout can be used to change branches with git checkout <branch>; if the user doesn't want to switch branches, git checkout can be used to change individual files, too. The new commands aim to separate the responsibilities of git checkout into two narrower categories: operations that change branches, and operations that change files (a short command sketch follows at the end of this article).

Backward compatibility
The "--base" option of "format-patch" is now compatible with "git patch-id --stable".

git fast-export/import
The "git fast-export/import" pair can now handle commits with log messages in encodings other than UTF-8.

git clone --recurse-submodules
"git clone --recurse-submodules" has learned to set up the submodules to ignore commit object names recorded in the superproject gitlink.

git diff/grep
The patterns used by "git diff/grep" for extracting funcnames and word boundaries for Rust have been added.

git fetch and git pull
The commands "git fetch" and "git pull" now report when a fetch results in non-fast-forward updates, letting the user notice the unusual situation.

git status
With this release, the extra blank lines in "git status" output have been reduced.

Developer support
This release comes with developer support for emulating unsatisfied prerequisites in tests, to ensure that the remainder of the tests still succeeds when tests with prerequisites are skipped.

A new tutorial for git-core developers
This release comes with a new tutorial targeting aspiring git-core developers. It demonstrates the end-to-end workflow of creating a change to the Git tree, sending it for review, and making changes based on comments.

Bug fixes in Git 2.23
- In earlier versions, "git worktree add" used to fail when another worktree connected to the same repository was corrupt. This has been corrected.
- An issue with file descriptors has been fixed.
- This release comes with updated parameter validation.
- The code for parsing scaled numbers out of configuration files has been made more robust and easier to follow.

Some users seem happy about the new changes; a user commented on Hacker News, “It's nice to hear that there appears to be progress being made in making git's tooling nicer and more consistent. Git's model itself is pretty simple, but the command line tools for working with it aren't and I feel that this fuels most of the 'Git is hard' complaints.”

Others are still skeptical about the new commands; another user commented, “On the one hand I'm happy on the new 'switch' and 'restore' commands. On the other hand, I wonder if they truly add any value other than the semantic distinction of functions otherwise present in checkout.”

To know more about this news in detail, read the official blog post on GitHub.

- GitHub has blocked an Iranian software developer's account
- GitHub services experienced a 41-minute disruption yesterday
- iPhone can be hacked via a legit-looking malicious lightning USB cable worth $200, DefCon 27 demo shows
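For a concrete feel of the split, here is a short command sketch (the branch and file names are illustrative; the flags shown are documented for the new commands):

    # switching branches: the half of 'git checkout' that moves HEAD
    git switch main          # switch to an existing branch
    git switch -c topic      # create a new branch and switch to it

    # restoring files: the half of 'git checkout' that touches the worktree
    git restore README.md            # discard unstaged changes to a file
    git restore --staged README.md   # unstage a file, keeping the edits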


Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Amrata Joshi
17 Jun 2019
3 min read
Luna is a data processing and visualization environment that provides a library of highly tailored, domain-specific components as well as a framework for building new components. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design, and architecture.

What's so interesting about Luna?

Data flow modeling
Luna is a data flow modeling whiteboard that allows users to draw components and the way data flows between them. Components in Luna are simply nested data flow graphs, and users can enter any component or its subcomponents to move from high to low levels of abstraction. Luna is also designed as a general-purpose programming language with two equivalent representations, visual and textual.

Data processing and visualization
Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. Users can also tweak parameters and observe how they affect each step of the computation in real time.

Debugging
Luna can assist in analyzing network service outages and data corruption. If an error occurs, Luna tracks and displays its path through the graph so that users can easily follow it and understand where it comes from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine
Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components as well as browse their documentation. Since Explorer is context-aware, it can understand the flow of data, predict users' intentions, and adjust the search results accordingly.

Dual syntax representation
Luna is also the world's first programming language that features two equivalent syntax representations, visual and textual.

Automatic parallelism
Luna also features automatic parallelism built on the state-of-the-art Haskell GHC runtime system, which can run thousands of threads in a fraction of a second. It automatically partitions a program and schedules its execution over the available CPU cores.

Users seem to be happy with Luna; a user commented on Hacker News, “Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so.”

Others are happy because Luna's text syntax supports building functional blocks. Another user commented, “I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them.”

To know more about this, check out the official Luna website.

- Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
- Polyglot programming allows developers to choose the right language to solve tough engineering problems
- Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study

Exploring the new .NET Multi-Platform App UI (MAUI) with the Experts

Expert Network
25 May 2021
8 min read
During the 2020 edition of Build, Microsoft revealed its plan for a multi-platform framework called .NET MAUI. This latest framework appears to be an upgraded and transformed version of Xamarin.Forms, enabling developers to build robust device applications and provide native features for Windows, Android, macOS, and iOS.

Microsoft has recently devoted efforts to unifying the .NET platform, in which MAUI plays a vital role. The framework helps developers access the native APIs (Application Programming Interfaces) of all modern operating systems by offering a single codebase with built-in resources. It paves the way for the development of multi-platform applications under the banner of one exclusive project structure, with the flexibility of incorporating different source code files or resources for different platforms when needed.

.NET MAUI will bring the project structure down to a single source with single-click deployment for as many platforms as needed. Some of the prominent features in .NET MAUI will be XAML and Model-View-ViewModel (MVVM), and it will enable developers to implement the Model-View-Update (MVU) pattern. Microsoft also intends to offer 'Try-N-Convert' support and migration guides to help developers make a seamless transition of existing apps to .NET MAUI. Performance remains the focal point in MAUI, along with faster algorithms, advanced compilers, and an improved SDK-style project tooling experience.

Let us hear what our experts have to say about MAUI, a framework that holds the potential to streamline cross-platform app development.

Which technology, native or cross-platform app development, is better and more prevalent?

Gabriel: I always suggest that the best platform is the one that fits best with your team. I mean, if you have a C# team, for sure .NET development (Xamarin, MAUI, and so on) will be better. On the other hand, if you have a JavaScript/TypeScript team, we do have several other options for native/cross-platform development.

Francesco: In general, saying "better" is quite difficult. The right choice always depends on the constraints one has, but I think that for most applications "cross-platform" is the only acceptable choice. Mobile and desktop applications have noticeably short lifecycles, and most of them have lower budgets than server enterprise applications. Often, they are just one of several ways to interact with an enterprise application, or with complex websites. Therefore, both budget and time constraints make developing and maintaining several native applications unrealistic. However, no matter how smart and optimized cross-platform frameworks are, native applications always have better performance and take full advantage of the specific features of each device. So, for sure, there are critical applications that can only be implemented as natives.

Valerio: Both approaches have pros and cons. Native mobile apps usually have higher performance and a seamless user experience, thus being ideal for end users and/or product owners with lofty expectations in terms of UI/UX. However, building them nowadays can be costly and time-consuming, because you need a strong dev team (or multiple teams) that can handle iOS, Android, and Windows/Linux desktop PCs alike. Furthermore, there is the possibility of having different codebases, which can be quite cumbersome to maintain, upgrade, and keep in synchronization. Cross-platform development can mitigate these downsides. However, everything you save in terms of development cost, time, and maintainability will often be paid for in terms of performance, limited functionality, and limited UI/UX, not to mention the steep learning curve that multi-platform development frameworks tend to have due to their elevated level of abstraction.

What are the prime differences between MAUI and the Uno Platform, if any?

Gabriel: I would say that, considering MAUI evolves from Xamarin.Forms, it will easily enable compatibility with different operating systems.

Francesco: Uno's default option is to style an application the same on all platforms, but it gives you the opportunity to make the application look and feel like a native app, whereas MAUI takes more advantage of native features. In a few words, MAUI applications look more like native applications. Uno also targets WASM in browsers, while MAUI does not target it but somehow proposes Blazor. Maybe Blazor will still be another choice to unify mobile, desktop, and Web development, but not in the 6.0 .NET release.

Valerio: Both MAUI and the Uno Platform try to achieve a similar goal, but they are based upon two different architectural approaches: MAUI, like Xamarin.Forms, has its own abstraction layer above the native APIs, while Uno builds UWP interfaces upon them. Again, both approaches have their pros and cons: abstraction layers can be costly in terms of performance (especially on mobile devices, since the layer needs to take care of most layout-related tasks), but they are useful for keeping a small and versatile codebase.

Would MAUI be able to fulfill cross-platform app development requirements right from its launch, or will it take a few developments post-release for it to entirely meet its purpose?

Gabriel: The mechanism presented in this kind of technology lets us guarantee cross-platform behavior even in cases where there are platform differences. So, my answer would be yes.

Francesco: Looking at the history of all Microsoft platforms, I would say it is very unlikely that MAUI will fulfill all cross-platform app development requirements right from the time it is launched. It might be 80-90 percent effective and cater to most development needs. For MAUI to become a full-fledged platform equipped with all the tools for a cross-platform app, it might take another year.

Valerio: I hope so! Realistically speaking, I think this will be a tough task: I would not expect good cross-platform app compatibility right from the start, especially in terms of UI/UX. Such ambitious developments improve and are gradually made perfect with accurate and relevant feedback that comes from real users and the community.

How much time will it take for Microsoft to release MAUI?

Gabriel: Microsoft is continuously delivering versions of their software environments. The question is a little bit more complex, because as a software developer you cannot only think about when Microsoft will release MAUI; you need to consider when it will be stable and have an LTS version available. I believe this will take a little bit longer than the roadmap presented by Microsoft.

Francesco: According to the planned timeline, MAUI should be launched in conjunction with the November 2021 .NET 6 release. This timeline should be respected, but in the worst-case scenario, the release will be delayed and arrive a few months later. This is similar to what happened with Blazor and the 3.1 .NET release.

Valerio: The official MAUI timeline sounds rather optimistic, but Microsoft seems to be investing a lot in that project, and they have already managed to successfully deliver big releases without excessive delays (think of .NET 5). I think they will try their best to launch MAUI together with the first .NET 6 final release, since it would be ideal in terms of marketing and could help bring in some additional early adopters.

Summary

The launch of the Multi-Platform App UI (MAUI) will undoubtedly revolutionize the way developers build device applications. Developers can look forward to smoother and faster deployment; whether MAUI will offer platform-specific projects or a shared code system will eventually be revealed. It is too soon to estimate the extent of MAUI's impact, but it will surely be worth the wait, and now, with MAUI moving into the dotnet GitHub organization, there is excitement to see how MAUI unfolds across the development platforms and how the communities receive and align with it. With every upcoming preview of .NET 6, we can expect numerous additions to the capabilities of .NET MAUI. For now, developers are looking forward to the "dotnet new" experience.

About the authors

Gabriel Baptista is a software architect who leads technical teams across a diverse range of projects for retail and industry, using a significant array of Microsoft products. He is a specialist in Azure Platform-as-a-Service (PaaS) and a computing professor who has published many papers and teaches various subjects related to software engineering, development, and architecture. He is also a speaker on Channel 9, one of the most prestigious and active community websites for the .NET stack.

Francesco Abbruzzese built the MVC Controls Toolkit and has contributed to the diffusion and evangelization of the Microsoft web stack since the first version of ASP.NET MVC, through tutorials, articles, and tools. He writes about .NET and client-side technologies on his blog, Dot Net Programming, and in various online magazines. His company, Mvcct Team, implements and offers web applications, AI software, SAS products, tools, and services for web technologies associated with the Microsoft stack.

Gabriel and Francesco are authors of the book Software Architecture with C# 9 and .NET 5, 2nd Edition.

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel. Valerio De Sanctis is the author of ASP.NET Core 5 and Angular, 4th Edition.


LLVM 8.0.0 releases!

Natasha Mathur
22 Mar 2019
3 min read
The LLVM team released LLVM 8.0 earlier this week. LLVM is a collection of tools that help develop compiler front ends and back ends. It is written in C++ and has been designed for compile-time, link-time, run-time, and "idle-time" optimization of programs written in arbitrary programming languages. The LLVM 8.0 release notes cover known issues, major improvements, and other changes in the subprojects of LLVM.

There are certain issues in LLVM 8.0.0 that could not be fixed before this release. For instance, clang gets miscompiled by trunk GCC, and "asan-dynamic" does not work on FreeBSD. Beyond the known issues, there is a long list of changes in LLVM 8.0.0.

Non-comprehensive changes to LLVM 8.0.0
- The llvm-cov tool can export lcov trace files with the help of the -format=lcov option of the export command.
- The add_llvm_loadable_module CMake macro has been deprecated; the add_llvm_library macro with the MODULE argument now provides the same functionality.
- For MinGW, references to data variables that are to be imported from a DLL can now be accessed via a stub, which further allows the linker to convert them to a dllimport if needed.
- Support has been added for labels as offsets in the .reloc directive.
- Windows support for libFuzzer (x86_64) has also been added.

Other changes
- LLVM IR: A function attribute named speculative_load_hardening has been introduced, indicating that Speculative Load Hardening should be enabled for the function body.
- JIT APIs: ORC (On Request Compilation) JIT APIs now support concurrent compilation. The existing (non-concurrent) ORC layer classes, as well as the related APIs, have been deprecated and renamed with a "Legacy" prefix (e.g. LegacyIRCompileLayer). All the deprecated classes will be removed in LLVM 9.
- AArch64 target: Support has been added for Speculative Load Hardening, along with initial support for the Tiny code model, where code and statically defined symbols must remain within 1MB.
- MIPS target: Support for the GlobalISel instruction selection framework has been improved. ORC JIT now offers support for the MIPS and MIPS64 architectures, and there is newly added support for the MIPS N32 ABI.
- PowerPC target: This target has been switched to a non-PIC default in LLVM 8.0.0, Darwin support has been deprecated, and out-of-order scheduling has been enabled for P9.
- SystemZ target: Various code-gen improvements related to improved auto-vectorization, inlining, and instruction scheduling.

Other than these, changes have also been made to the X86 target, WebAssembly target, Nios2 target, and LLDB. For a complete list of changes, check out the official LLVM 8.0.0 release notes.

- LLVM 7.0.0 released with improved optimization and new tools for monitoring
- LLVM will be relicensing under Apache 2.0 start of next year
- LLVM officially migrating to GitHub from Apache SVN
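As an illustration of the new lcov export, a typical coverage flow with an instrumented binary might look like this (the file names are illustrative; -format=lcov is the option named in the release notes, and the surrounding steps are the usual clang source-based coverage workflow):

    # build with coverage instrumentation and run once
    clang -fprofile-instr-generate -fcoverage-mapping main.c -o mybin
    ./mybin
    llvm-profdata merge default.profraw -o default.profdata

    # new in LLVM 8: export the coverage data as an lcov trace file
    llvm-cov export -format=lcov ./mybin -instr-profile=default.profdata > coverage.lcov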


Deno, an attempt to fix Node.js flaws, is rewritten in Rust

Prasad Ramesh
27 Aug 2018
2 min read
Deno is a runtime by the creator of Node, Ryan Dahl. It aims to fix some of the problems in Node. Originally written in Go, Deno is now rewritten in Rust and is at version 0.1.

Node.js was developed nearly a decade ago. It was designed in 2009 to run server-side JavaScript, and its implementation solved the problems of 2009, for which Dahl has no regrets. But lately, he did have regrets, elaborated in a talk on 10 things he regrets about Node at JSConf EU 2018. Some of the regrets included packages, security issues, and the entire build system, among others.

Deno is a secure TypeScript runtime on Chrome V8. It was originally written in Go and has now been rewritten in Rust to avoid potential garbage collector issues. Deno is similar to Node.js but is focused on security. Deno takes full advantage of JavaScript being a secure sandbox; so, unlike Node.js, Deno is sandboxed. Scripts run without any write access by default, and using untrusted utilities like linters will be optional. There is no package.json in Deno, no npm, and it is not explicitly compatible with Node.

An important thing to note is that the build requirement is Python 2, not Python 3. This is because Chrome V8's build scripts still use Python 2.

There were plans to rewrite Deno in Rust when it was originally released in June this year. Dahl mentioned in a GitHub comment: “The reason for not using Go is that it has a rather complex runtime - including a GC. Although I haven't experienced any problems with that yet, it's not hard to imagine that down the road that might clash badly with V8's very complex runtime.”

You can get the binaries here to get started and check out the GitHub repo.

- Deploying Node.js apps on Google App Engine is now easy
- Creating Macros in Rust [Tutorial]
- Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more
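To make the sandbox model concrete, here is a sketch of permission-gated file access in Deno. Note the assumptions: Deno.readTextFile and the --allow-read flag come from later Deno releases and may not match the 0.1 build covered here:

    // read.ts -- a script that needs explicit permission to touch the disk.
    const text = await Deno.readTextFile("./notes.txt");
    console.log(text);

    // Running it:
    //   deno run read.ts              -> fails, read access was never granted
    //   deno run --allow-read read.ts -> works, permission granted explicitly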

Golang plans to add a core implementation of an internal language server protocol

Prasad Ramesh
24 Sep 2018
3 min read
Go, the popular programming language, is adding an internal language server protocol (LSP) implementation. This is expected to bring features like code autocompletion and diagnostics to Golang tooling.

LSP is used between a tool (the client, typically an editor) and a language server to integrate features such as autocomplete, go-to-definition, and find-all-references into the tool. It was created by Microsoft to define a common language through which programming language analyzers can communicate. It is growing in popularity, with adoption from companies like Codenvy, Red Hat, and Sourcegraph, and a rapidly growing list of editor and language communities supporting LSP.

Golang already has a language server available on GitHub. This version has support for hover, jump-to-definition, workspace symbols, and find-references, but it does not support code completion and diagnostics. Sourcegraph CEO Quinn Slack stated in a comment on Hacker News: “The idea is that with a Go language server becoming a core part of Go, it will have a lot more resources invested into it and it will surpass where the current implementation is now.”

The Go language server made by Sourcegraph, currently available on GitHub, is not a core part of Golang; it uses tools and custom extensions not maintained by the Go team. The hope is that the core LSP implementation will be good enough that Sourcegraph can re-use it in the future, bringing the number of implementations down to just one. Slack said in a comment that they are very happy with this plan: “We are 10,000% supportive of this, as we've discussed openly in the golang-tools group and with the Go team. The Go team was commendably empathetic about the optics here, and we urged them very, very, very directly to do this.”

This core implementation of LSP by the Golang team is also beneficial for Sourcegraph from a business perspective. Sourcegraph sells a product that lets you search and browse all your code, which involves using language servers for certain features like hovers, definitions, and references. Since the core work will be done by the Golang team, Sourcegraph won't have to invest more time into building its own implementation of a Go language server.

For more information, visit the Googlesource website.

- Golang 1.11 is here with modules and experimental WebAssembly port among other updates
- Why Golang is the fastest growing language on GitHub
- Go 2 design drafts include plans for better error handling and generics
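For a sense of what that "common language" looks like on the wire, each LSP message is a JSON-RPC payload preceded by a Content-Length header. A completion request from an editor to any conforming language server has a body roughly like this (the file path and cursor position are illustrative):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "textDocument/completion",
      "params": {
        "textDocument": { "uri": "file:///home/me/project/main.go" },
        "position": { "line": 12, "character": 7 }
      }
    }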


Microsoft Build 2019: Introducing WSL 2, the newest architecture for the Windows Subsystem for Linux

Amrata Joshi
07 May 2019
3 min read
Yesterday, on the first day of Microsoft Build 2019, the team at Microsoft introduced WSL 2, the newest architecture for the Windows Subsystem for Linux. With WSL 2, file system performance will increase and users will be able to run more Linux apps. The initial builds of WSL 2 will be available by the end of June this year.

https://twitter.com/windowsdev/status/1125484494616649728
https://twitter.com/poppastring/status/1125489352795201539

What's new in WSL 2?

Run Linux binaries
WSL 2 powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. The new architecture changes how these binaries interact with Windows and with your computer's hardware, but it still provides the same user experience as WSL 1.

Linux distros
With this release, individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, and can be upgraded or downgraded at any time. WSL 1 and WSL 2 distros can also run side by side (a short command sketch follows at the end of this article). Under the hood, WSL 2 uses an entirely new architecture built around a real Linux kernel.

Increased speed
With this release, file-intensive operations like git clone, npm install, apt update, apt upgrade, and more will get faster. The team's initial tests have WSL 2 running up to 20x faster than WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install, and cmake on various projects.

A Linux kernel shipped with Windows
The team will ship an open-source, real Linux kernel with Windows, which makes full system call compatibility possible. This will also be the first time a Linux kernel is shipped with Windows. The team is building the kernel in house, and the initial builds will ship version 4.19 of the kernel. The kernel has been designed in tune with WSL 2 and optimized for size and performance. The team will service this Linux kernel through Windows updates, so users will get the latest security fixes and kernel improvements without needing to manage the kernel themselves. The configuration for this kernel will be available on GitHub once WSL 2 is released; the WSL kernel source will consist of links to a set of patches in addition to the long-term stable source.

Full system call compatibility
Linux binaries use system calls to perform functions such as accessing files, requesting memory, and creating processes. WSL 1 relies on a translation layer that interprets most of these system calls and allows them to work on the Windows NT kernel, but implementing every system call that way is challenging, which is why some apps don't run properly in WSL 1. WSL 2 includes its own Linux kernel and therefore has full system call compatibility.

To know more about this news, check out Microsoft's blog post.

- Microsoft introduces Remote Development extensions to make remote development easier on VS Code
- Docker announces collaboration with Microsoft's .NET at DockerCon 2019
- Microsoft and GitHub employees come together to stand with the 996.ICU repository
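This article predates the public builds, but for reference, converting a distro between the two architectures ended up being a one-liner once the initial WSL 2 builds shipped. The commands below come from those later builds (the distro name is illustrative):

    REM list installed distros and the WSL version each one uses
    wsl -l -v

    REM convert a distro to WSL 2 (or back to WSL 1)
    wsl --set-version Ubuntu 2
    wsl --set-version Ubuntu 1

    REM make WSL 2 the default for newly installed distros
    wsl --set-default-version 2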