
Tech News - Application Development

279 Articles

Exploring Forms in Angular – types, benefits and differences

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting dynamic pages and meta tags, we need to deal with multiple input elements and value types; handling all of this manually could seriously hinder our work in terms of data flow control, data validation, and user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis – a revised edition of a bestseller that includes coverage of the Angular routing module, expanded discussion of the Angular CLI, and detailed instructions for deploying apps on Azure, as well as on both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could throw some errors such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, that would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

- Template-Driven Forms
- Model-Driven Forms, also known as Reactive Forms

Both are tightly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them.

Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two.
As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

- Building the form in the .html template file
- Binding data to the various input fields using the ngModel directive
- Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like:

```html
<form novalidate autocomplete="off" #form="ngForm"
      (ngSubmit)="onSubmit(form)">
  <input type="text" name="name" value="" required
         placeholder="Insert the city name..."
         [(ngModel)]="city.Name" #name="ngModel" />
  <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>
```

Here, we can access any element, including the form itself, through some convenient aliases – the attributes with the # sign – and check their current states to create our own validation workflow.

These states are provided by the framework and will change in real time, depending on various things: touched, for example, becomes true when the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed, and so on. We used both touched and dirty in the preceding example because we want our validation message to be shown only if the user moves their focus to the <input name="name"> and then goes away, leaving it blank by either deleting its value or not setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach.
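The excerpt doesn't show the component class behind this template, so here is a rough, framework-free sketch of what it could look like. The @Component decorator and template wiring are omitted to keep it self-contained, and the City interface and onSubmit body are assumptions, not code from the book:

```typescript
// Hypothetical component class backing the Template-Driven Form above.
// In a real Angular app this would carry a @Component decorator.
interface City {
  Name: string;
}

class CityFormComponent {
  // Bound to the template through [(ngModel)]="city.Name"
  city: City = { Name: "" };
  submitted: City | null = null;

  // Invoked by (ngSubmit); [disabled]="form.invalid" on the submit
  // button keeps it from firing while the required name field is empty.
  onSubmit(): void {
    this.submitted = { ...this.city };
  }
}
```

Because the two-way binding keeps city.Name in sync with the input, the handler only has to snapshot the model when the form is submitted.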
Here are the main advantages of Template-Driven Forms:

- They are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
- They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element will have a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases that have been hooked to other elements that we can also see, or to the form itself.

Here are their weaknesses:

- Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
- For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
- Their readability will quickly drop as we add more and more validators and input tags. Keeping all their logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit the most from their simplicity. On top of that, they are quite like the typical HTML code we're already used to (assuming that we have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases and throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.
For additional information on Template-Driven Forms, we highly recommend that you read the official Angular documentation at: https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they will eventually produce, and the scaling difficulties will eventually lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms, which are the exact same thing.

The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way. The outcome is as follows:

```html
<form [formGroup]="form" (ngSubmit)="onSubmit()">
  <input formControlName="name" required />
  <span *ngIf="(form.get('name').touched || form.get('name').dirty)
        && form.get('name').errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>
```

As we can see, the amount of required code is much lower.
Here's the underlying form model that we will define in the component class file:

```typescript
import { OnInit } from '@angular/core';
import { FormGroup, FormControl } from '@angular/forms';

class ModelFormComponent implements OnInit {
  form: FormGroup;

  ngOnInit() {
    this.form = new FormGroup({
      name: new FormControl()
    });
  }
}
```

Let's try to understand what's happening here:

- The form property is an instance of FormGroup and represents the form itself.
- FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
- Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
- Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, including its actual value.
- Each FormGroup instance encapsulates the state of each child control, meaning that it will only be valid if/when all its children are also valid.

Also, note that we have no way of accessing the FormControls directly like we did in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.

At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button; on top of that, checking the state of the input elements takes a bigger amount of source code, since they have no aliases we can use. What's the real deal, then?
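The FormGroup/FormControl relationship described above can be illustrated with a deliberately simplified, framework-free model. This is not Angular's actual implementation (the real classes live in @angular/forms and do far more); it only demonstrates the two rules the text states: controls are reached by name via .get(), and a group is valid only when all of its children are valid:

```typescript
// Simplified model of the FormControl/FormGroup relationship -- an
// illustration, NOT Angular's actual implementation.
type ValidatorFn = (value: string) => boolean;

class MiniFormControl {
  dirty = false;
  constructor(public value = "", private validators: ValidatorFn[] = []) {}
  setValue(v: string): void {
    this.value = v;
    this.dirty = true; // the value has changed, so the control is dirty
  }
  // A control is valid when every attached validator passes
  get valid(): boolean {
    return this.validators.every(fn => fn(this.value));
  }
}

class MiniFormGroup {
  constructor(private controls: Record<string, MiniFormControl>) {}
  // Mirrors FormGroup.get(): child controls are reached by name
  get(name: string): MiniFormControl {
    return this.controls[name];
  }
  // A group is valid only when all of its children are valid
  get valid(): boolean {
    return Object.values(this.controls).every(c => c.valid);
  }
}

const required: ValidatorFn = v => v.length > 0;
const form = new MiniFormGroup({
  name: new MiniFormControl("", [required]),
});
// form.valid is false here; it flips to true once the control gets a value
form.get("name").setValue("Rome");
```

The same cascade happens in Angular: an invalid child control keeps its parent FormGroup (and therefore form.invalid in the template) invalid.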
To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

Fig 1: Template-Driven Forms schematic

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template; the HTML form elements are directly bound to the DataModel component represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table.

That DataModel will be updated as soon as the user changes something, unless a validator prevents them from doing that. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself, using the information in the data bindings defined within our template.

This is what Template-Driven actually means: the template is calling the shots.

Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

Fig 2: Model-Driven/Reactive Forms schematic

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its children input elements) that are presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.
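The "detect and react to user changes" part can be sketched without the framework. In real Angular this is the valueChanges observable that every control exposes; here a plain callback list stands in for the observable, an assumption made purely to keep the sketch self-contained:

```typescript
// Framework-free stand-in for a control's valueChanges stream -- an
// illustration of reacting to user changes, not Angular's real API.
class ObservableValue<T> {
  private listeners: Array<(v: T) => void> = [];
  constructor(private current: T) {}
  subscribe(fn: (v: T) => void): void {
    this.listeners.push(fn);
  }
  setValue(v: T): void {
    this.current = v;
    // notify subscribers, like valueChanges emitting a new value
    this.listeners.forEach(fn => fn(v));
  }
}

const nameValue = new ObservableValue<string>("");
const normalized: string[] = [];
// a custom reaction: trim and upper-case whatever the user types
nameValue.subscribe(v => normalized.push(v.trim().toUpperCase()));
nameValue.setValue(" rome ");
// normalized now holds ["ROME"]
```

Because the form model sits between the DataModel and the template, reactions like this run in plain TypeScript, which is also what makes them straightforward to unit test.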
Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven or Reactive alternative. We took some valuable time to analyze the pros and cons of both, and then made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips in the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.


Exploring the new .NET Multi-Platform App UI (MAUI) with the Experts

Expert Network
25 May 2021
8 min read
During the 2020 edition of Build, Microsoft revealed its plan for a multi-platform framework called .NET MAUI. This latest framework appears to be an upgraded and transformed version of Xamarin.Forms, enabling developers to build robust device applications and provide native features for Windows, Android, macOS, and iOS.

Microsoft has recently devoted efforts to unifying the .NET platform, in which MAUI plays a vital role. The framework helps developers access the native API (Application Programming Interface) of all modern operating systems by offering a single codebase with built-in resources. It paves the way for the development of multi-platform applications under the banner of one exclusive project structure, with the flexibility of incorporating different source code files or resources for different platforms when needed.

.NET MAUI will bring the project structure to a sole source with single-click deployment for as many platforms as needed. Some of the prominent features in .NET MAUI will be XAML and Model-View-ViewModel (MVVM), and it will also enable developers to implement the Model-View-Update (MVU) pattern. Microsoft also intends to offer 'Try-N-Convert' support and migration guides to help developers carry out a seamless transition of existing apps to .NET MAUI. Performance remains a focal point in MAUI, with faster algorithms, advanced compilers, and an improved SDK-style project tooling experience.

Let us hear what our experts have to say about MAUI, a framework that holds the potential to streamline cross-platform app development.

Which technology – native or cross-platform app development – is better and more prevalent?

Gabriel: I always suggest that the best platform is the one that fits best with your team. I mean, if you have a C# team, for sure .NET development (Xamarin, MAUI, and so on) will be better.
On the other hand, if you have a JavaScript/TypeScript team, we do have several other options for native/cross-platform development.

Francesco: In general, saying "better" is quite difficult. The right choice always depends on the constraints one has, but I think that for most applications "cross-platform" is the only acceptable choice. Mobile and desktop applications have noticeably short lifecycles, and most of them have lower budgets than server enterprise applications. Often, they are just one of several ways to interact with an enterprise application or with complex websites. Therefore, both budget and time constraints make developing and maintaining several native applications unrealistic. However, no matter how smart and optimized cross-platform frameworks are, native applications always have better performance and take full advantage of the specific features of each device. So, for sure, there are critical applications that can only be implemented as natives.

Valerio: Both approaches have pros and cons: native mobile apps usually have higher performance and a seamless user experience, thus being ideal for end users and/or product owners with lofty expectations in terms of UI/UX. However, building them nowadays can be costly and time-consuming, because you need a strong dev team (or multiple teams) that can handle iOS, Android, and Windows/Linux desktop PCs. Furthermore, there is the possibility of having different codebases, which can be quite cumbersome to maintain, upgrade, and keep in synchronization. Cross-platform development can mitigate these downsides. However, everything that you save in terms of development cost, time, and maintainability will often be paid for in terms of performance, limited functionality, and limited UI/UX; not to mention the steep learning curve that multi-platform development frameworks tend to have due to their elevated level of abstraction.
What are the prime differences between MAUI and the Uno Platform, if any?

Gabriel: I would say that, considering MAUI builds on Xamarin.Forms, it will easily enable compatibility with different operating systems.

Francesco: Uno's default option is to style an application the same on all platforms, but it gives you the opportunity to make the application look and feel like a native app, whereas MAUI takes more advantage of native features. In a few words, MAUI applications look more like native applications. Uno also targets WASM in browsers, while MAUI does not target it, but somehow proposes Blazor. Maybe Blazor will still be another choice to unify mobile, desktop, and web development, but not in the .NET 6.0 release.

Valerio: Both MAUI and the Uno Platform try to achieve a similar goal, but they are based upon two different architectural approaches: MAUI, like Xamarin.Forms, will have its own abstraction layer above the native APIs, while Uno builds UWP interfaces upon them. Again, both approaches have their pros and cons: abstraction layers can be costly in terms of performance (especially on mobile devices, since they need to take care of most of the layout-related tasks), but they are useful for keeping a small and versatile codebase.

Would MAUI be able to fulfill cross-platform app development requirements right from its launch, or will it take a few developments post-release for it to entirely meet its purpose?

Gabriel: The mechanism presented in this kind of technology will let us guarantee cross-platform support even in cases where there are differences. So, my answer would be yes.

Francesco: Looking at the history of all Microsoft platforms, I would say it is very unlikely that MAUI will fulfill all cross-platform app development requirements right from the time it is launched. It might be 80-90 percent effective and cater to most development needs.
For MAUI to become a full-fledged platform equipped with all the tools for a cross-platform app, it might take another year.

Valerio: I hope so! Realistically speaking, I think this will be a tough task: I would not expect good cross-platform app compatibility right from the start, especially in terms of UI/UX. Such ambitious developments are gradually perfected through accurate and relevant feedback that comes from real users and the community.

How much time will it take for Microsoft to release MAUI?

Gabriel: Microsoft is continuously delivering versions of their software environments. The question is a little bit more complex, because as a software developer you cannot only think about when Microsoft will release MAUI; you need to consider when it will be stable and have an LTS version available. I believe this will take a little bit longer than the roadmap presented by Microsoft.

Francesco: According to the planned timeline, MAUI should be launched in conjunction with the November 2021 .NET 6 release. This timeline should be respected, but in the worst-case scenario, the release will be delayed and arrive a few months later. This is similar to what happened with Blazor and the .NET 3.1 release.

Valerio: The MAUI official timeline sounds rather optimistic, but Microsoft seems to be investing a lot in that project, and they have already managed to successfully deliver big releases without excessive delays (think of .NET 5): I think they will try their best to launch MAUI together with the first .NET 6 final release, since it would be ideal in terms of marketing and could help bring in some additional early adopters.

Summary

The launch of the Multi-Platform App UI (MAUI) will undoubtedly revolutionize the way developers build device applications. Developers can look forward to smooth and faster deployment; whether MAUI will offer platform-specific projects or a shared code system will eventually be revealed.
It is too soon to estimate the extent of MAUI's impact, but it will surely be worth the wait. Now that MAUI has moved into the dotnet GitHub organization, there is excitement to see how it unfolds across the development platforms and how the communities receive and align with it. With every upcoming preview of .NET 6, we can expect numerous additions to the capabilities of .NET MAUI. For now, developers are looking forward to the "dotnet new" experience.

About the authors

Gabriel Baptista is a software architect who leads technical teams across a diverse range of projects for retail and industry, using a significant array of Microsoft products. He is a specialist in Azure Platform-as-a-Service (PaaS) and a computing professor who has published many papers and teaches various subjects related to software engineering, development, and architecture. He is also a speaker on Channel 9, one of the most prestigious and active community websites for the .NET stack.

Francesco Abbruzzese built the MVC Controls Toolkit and has contributed to the diffusion and evangelization of the Microsoft web stack since the first version of ASP.NET MVC through tutorials, articles, and tools. He writes about .NET and client-side technologies on his blog, Dot Net Programming, and in various online magazines. His company, Mvcct Team, implements and offers web applications, AI software, SAS products, tools, and services for web technologies associated with the Microsoft stack.

Gabriel and Francesco are the authors of the book Software Architecture with C# 9 and .NET 5, 2nd Edition.

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups.
He is an active member of the Stack Exchange Network, providing advice and tips in the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel, and the author of ASP.NET Core 5 and Angular, 4th Edition.


OpenJS Foundation accepts Electron.js in its incubation program

Fatema Patrawala
12 Dec 2019
3 min read
Yesterday, at Node+JS Interactive in Montreal, the OpenJS Foundation announced the acceptance of Electron into the Foundation's incubation program. The OpenJS Foundation provides vendor-neutral support for sustained growth within the open source JavaScript community. It is supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Intel, Joyent, and Microsoft.

Electron is an open source framework for building desktop apps using JavaScript, HTML, and CSS; it is based on Node.js and Chromium. Electron is widely used in many well-known applications, including Discord, Microsoft Teams, OpenFin, Skype, Slack, Trello, and Visual Studio Code.

"We're heading into 2020 excited and honored by the trust the Electron project leaders have shown through this significant contribution to the new OpenJS Foundation," said Robin Ginn, Executive Director of the OpenJS Foundation. He further added, "Electron is a powerful development tool used by some of the most well-known companies and applications. On behalf of the community, I look forward to working with Electron and seeing the amazing contributions they will make."

Electron's cross-platform capabilities make it possible to build and run apps on Windows, Mac, and Linux computers. Initially developed by GitHub in 2013, today the framework is maintained by a number of developers and organizations. Electron is suited to anyone who wants to ship visually consistent, cross-platform applications fast and efficiently.

"We're excited about Electron's move to the OpenJS Foundation and we see this as the next step in our evolution as an open source project," said Jacob Groundwater, Manager at ElectronJS and Principal Engineering Manager at Microsoft. "With the Foundation, we'll continue on our mission to play a prominent role in the adoption of web technologies by desktop applications and provide a path for JavaScript to be a sustainable platform for desktop applications.
This will enable the adoption and development of JavaScript in an environment that has traditionally been served by proprietary or platform-specific technologies."

What this means for developers

Electron joining the OpenJS Foundation does not change how Electron is made, released, or used, and does not directly affect developers building applications with Electron. Even though Electron was originally created at GitHub, it is currently maintained by a number of organizations and individuals. In 2019, Electron codified its governance structure and invested heavily in formalizing how decisions affecting the entire project are made. The Electron team believes that having multiple organizations and developers investing in and collaborating on Electron makes the project stronger.

Hence, lifting Electron up from being owned by a single corporate entity and moving it into a neutral foundation focused on supporting the web and JavaScript ecosystem is a natural next step as the project matures in the open-source ecosystem. To know more about this news, check out the official announcement on the OpenJS Foundation website.

Read more:

- The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
- Node.js and JS Foundations are now merged into the OpenJS Foundation
- Denys Vuika on building secure and performant Electron apps, and more


WireGuard to be merged with Linux net-next tree and will be available by default in Linux 5.6

Savia Lobo
12 Dec 2019
3 min read
On December 9, WireGuard announced that its secure VPN tunnel kernel code will soon be included in the Linux net-next tree. This indicates that "WireGuard will finally reach the mainline kernel with the Linux 5.6 cycle kicking off in late January or early February!", reports Phoronix.

WireGuard is a layer 3 secure networking tunnel made specifically for the kernel that aims to be much simpler and easier to audit than IPsec. On December 8, Jason Donenfeld, WireGuard's lead developer, sent out patches for the net-next v2 of WireGuard. "David Miller has already pulled in WireGuard as the first new feature in net-next that is destined for Linux 5.6 now that the 5.5 merge window is over," the email thread mentions.

While WireGuard was initiated as a Linux project, its Windows, macOS, BSD, iOS, and Android versions are already available. The reason behind the delay for Linux was that Donenfeld disliked Linux's built-in cryptographic subsystem, citing that its API is too complex and difficult. Donenfeld had planned to introduce a new cryptographic subsystem – his own Zinc library. However, this didn't go down well with several developers, as they thought that rewriting the cryptographic subsystem was a waste of time. Fortunately for Donenfeld, Linus Torvalds was on his side. Torvalds stated, "I'm 1000% with Jason on this. The crypto/model is hard to use, inefficient, and completely pointless when you know what your cipher or hash algorithm is, and your CPU just does it well directly."

Finally, Donenfeld compromised, saying, "WireGuard will get ported to the existing crypto API. So it's probably better that we just fully embrace it, and afterward work evolutionarily to get Zinc into Linux piecemeal." Hence, a few Zinc elements have been imported into the legacy crypto code in the upcoming Linux 5.5 kernel.

WireGuard would become the new standard for Linux VPNs

This laid the foundation for WireGuard to finally ship in Linux early next year.
WireGuard works by securely encapsulating IP packets over UDP. Its authentication and interface design have more to do with Secure Shell (SSH) than with other VPNs: you simply configure the WireGuard interface with your private key and your peers' public keys, and you're ready to securely talk.

After its arrival, WireGuard VPN can be expected to become the new standard for Linux VPNs with its key features, namely, tiny code size, high-speed cryptographic primitives, and in-kernel design. Besides being super fast, WireGuard for Linux will be secure too, as it supports state-of-the-art cryptographic technologies such as the Noise protocol framework, Curve25519, BLAKE2, SipHash24, ChaCha20, Poly1305, and HKDF.

Donenfeld writes in the email thread, "This is big news and very exciting. Thanks to all the developers, contributors, users, advisers, and mailing list interlocutors who have helped to make this happen. In the coming hours and days, I'll be sending followups on next steps."

ArsTechnica reports, "Although highly speculative, it's also possible that WireGuard could land in-kernel on Ubuntu 20.04 even without the 5.6 kernel—WireGuard founder Jason Donenfeld offered to do the work backporting WireGuard into earlier Ubuntu kernels directly. Donenfeld also stated today that a 1.0 WireGuard release is 'on the horizon'."

To know more about this news in detail, read the official email thread.

Read more:

- WireGuard launches an official macOS app
- Researchers find a new Linux vulnerability that allows attackers to sniff or hijack VPN connections
- NCSC investigates several vulnerabilities in VPN products from Pulse Secure, Palo Alto and Fortinet
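As described above, configuring WireGuard boils down to the interface's private key and the peers' public keys. A minimal wg-quick style configuration sketch illustrates this; the keys, addresses, and endpoint below are illustrative placeholders, not real values:

```
[Interface]
# Generate a real key pair with: wg genkey | tee privatekey | wg pubkey
PrivateKey = <your-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.0.0.0/24
Endpoint = vpn.example.com:51820
```

With a file like this saved as /etc/wireguard/wg0.conf, `wg-quick up wg0` brings the tunnel up; the brevity of the configuration is part of what the article means by WireGuard's SSH-like interface design.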


elementary OS 5.1 Hera releases with Flatpak native support, several accessibility improvements, and more

Bhagyashree R
09 Dec 2019
3 min read
Last week, Cassidy James Blaede, the CEO and CXO of elementary OS, announced the release of elementary OS 5.1, codenamed 'Hera'. elementary OS is an Ubuntu-based desktop distribution that promises to be a "fast, open, and privacy-respecting" replacement for macOS and Windows. Building upon the solid foundations laid out by its predecessor, Juno, Hera brings several new features, including native support for Flatpak, a faster AppCenter storefront, and accessibility improvements, among other updates.

Key updates in elementary OS 5.1 Hera

Brand new greeter and onboarding

In elementary OS 5.1 Hera, the greeter and onboarding have seen major changes in order to give users an improved first-run experience. In addition to looking better, the redesigned greeter addresses some key reported issues, including keyboard focus issues, HiDPI issues, and better localization. Hera also ships with a new Onboarding app that gives you a quick introduction to key features and takes care of common first-run tasks like managing privacy settings.

Native Flatpak support and AppCenter updates

elementary OS 5.1 Hera comes with native support for Flatpak, an application sandboxing and distribution framework. It enables developers to create one application and distribute it to different Linux desktop distributions. Hera includes a new core elementary OS utility called Sideload that allows users to sideload Flatpak apps. Any updates to the sideloaded apps will appear in AppCenter, and apps from any user-added Flatpak remotes will show up in AppCenter as uncurated apps. Along with the Flatpak support, Blaede shared that AppCenter is now "up to 10× faster in Hera, loading the homepage and featured apps blazingly fast."

Accessibility improvements

A bunch of accessibility features have landed in elementary OS 5.1 Hera. System Settings are now more accessible to all users, and the discoverability of performance and keyboard shortcut settings has been improved.
Sound settings has a new approach to handling external devices and there is a “Flash screen” option for event alerts to better manage whether alerts are audible, visual, both, or neither. The Mouse & Touchpad settings in elementary OS 5.1 Hera are now organized into sections based on different behavior. Several accessibility settings like long-press secondary click, reveal pointer, double-click speed, and control pointer using keypad have been exposed. Also, the touchpad settings now has an “Ignore when mouse is connected” toggle. Many developers have already started trying out this release. A Hacker News user shared their first impressions on a discussion regarding this release, “I installed this on my XPS 13 this morning, and it's really nice. It has a lot of overall polish that most DE's are missing, it looks and feels cohesive. It installed without any issues, and I had no problem with my Ubuntu-leaning dotfiles. I will probably keep this for the near future, it's very pleasant.” These were some of the updates in elementary OS 5.1 Hera. Check out the official announcement to know more about this release. Redox OS will soon permanently run rustc, the compiler for the Rust programming language, says Redox creator Jeremy Soller Nate Chamberlain talks about the Microsoft Enterprise Mobility and Security suite and becoming M365 certified Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]


Introducing Firefox Replay, a tool that allows Firefox tabs to record, replay, and rewind their behavior

Bhagyashree R
02 Dec 2019
3 min read
Mozilla is constantly putting effort into improving Firefox’s devtools. One such effort is Firefox Replay, an experimental tool that allows Firefox content processes to record their behavior so that it can be replayed and rewound later. The main highlight of Firefox Replay is the “code timeline,” which enables you to scan through every code execution at a glance. Along with execution points, the timeline also shows exceptions, events, and network requests in real time. It also allows you to save your recordings and pick up where you left off afterward.

How Firefox Replay works

The record-and-replay behavior is achieved by “controlling the non-determinism in the browser.” Initially, it records non-deterministic behaviors (intra-thread and inter-thread) and then replays them later to “force the browser to behave deterministically.” Firefox Replay includes IPC integration to enable communication between a recording or replaying process and the chrome process. Its rewind infrastructure allows a replaying process to restore a previous state, and its debugger integration enables the JS debugger to read the required information from a replaying process and control the process’s execution.

Firefox Replay is not officially released yet; however, Mac users can give it a try by downloading the nightly builds. Since it is still experimental, Firefox Replay is disabled by default. You can turn it on with the ‘devtools.recordreplay.enabled’ preference.

Read also: Firefox Nightly browser: Debugging your app is now fun with Mozilla’s new ‘time travel’ feature

The team is working on support for other platforms as well. “Windows port work is underway but is not yet working. The difficulties are in figuring out the set of system library APIs to intercept, in getting the memory management and dirty memory parts of the rewind infrastructure to work, and in handling the different graphics and IPC pathways on different platforms,” the official doc reads.

In a discussion on Hacker News, many users were excited to try out this tool. A user commented, “This might be enough to get me to use Firefox to develop with. This could be huge for its market share, a big part of the reason chrome was able to become so popular was because of how good its devtools were (compared to the competition at the time). Firefox definitely managed to catch up but not before lots of devs switched to chrome and stopped checking for compatibility with Firefox.”

“This will be an absolute game-changer for web development. I am currently working on a really simplified version of this but as a chrome extension. We deal with a lot of real-time data and have been facing some timing issues (network and user input) which is really hard to reproduce,” a user added.

Check out Mozilla’s official docs to know more in detail.

Firefox 70 released with better security, CSS, and JavaScript improvements
The new WebSocket Inspector will be released in Firefox 71
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
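Because the feature ships disabled, the preference mentioned above has to be flipped by hand. One way to persist it (a sketch, assuming a standard Firefox profile directory and a nightly build that actually includes the Replay feature) is a user.js entry; toggling the same key in about:config works just as well:

```js
// user.js in the Firefox profile directory (illustrative; the pref only
// takes effect in builds that ship the experimental Replay feature)
user_pref("devtools.recordreplay.enabled", true);
```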

Redox OS will soon permanently run rustc, the compiler for the Rust programming language, says Redox creator Jeremy Soller

Vincy Davis
29 Nov 2019
4 min read
Two days ago, Jeremy Soller, the Redox OS BDFL (Benevolent Dictator For Life), shared recent developments in Redox, a Unix-like operating system written in Rust. The Redox OS team is pretty close to running rustc, the compiler for the Rust programming language, on Redox; however, dynamic libraries remain an area that needs improvement.

https://twitter.com/redox_os/status/1199883423797481473

Redox is a Unix-like operating system written in Rust, aiming to bring the innovations of Rust to a modern microkernel and a full set of applications. In March this year, Redox OS 0.5.0 was released with support for Cairo, Pixman, and other libraries and packages.

Ongoing developments in Redox OS

Soller says that he has been running Redox OS on a System76 Galago Pro (galp3-c) along with the System76 Open Firmware and has found the work satisfactory so far. “My work on real hardware has improved drivers and services, added HiDPI support to a number of applications, and spawned the creation of new projects such as pkgar to make it easier to install Redox from a live disk,” Soller writes on the official Redox OS news page.

Furthermore, he notified users that Redox has become easier to cross-compile, since the redoxer tool can now build, run, and test. It can also automatically manage a Redox toolchain and run executables for Redox inside a container on demand.

However, “compilation of Rust binaries on Redox OS” is one of the long-standing issues in Redox OS and has garnered much attention for the longest time. According to Soller, through the excellent work done by ids1024 as part of a GSoC project, Redox OS had almost achieved self-hosting. Later, the creation of relibc (a C library written in Rust) and the subsequent work done by the contributors of this project led to the development of the POSIX C compatibility library. This gave rise to a significant increase in the number of available packages.

With a large number of Rust crates suddenly gaining Redox OS support, it seemed as though the dream of self-hosting would soon be a reality; however, after finding some errors in relibc, Soller realized, “rustc is no longer capable of running statically linked!”

Read More: Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more

Finally, the team shifted its focus to relibc’s ld_so, which provides dynamic linking support for executables. However, this has caused a temporary halt to porting rustc to Redox OS.

Building Redox OS on Redox OS is one of the highest priorities of the Redox OS project. Soller has assured users that rustc is a few months away from running permanently on Redox. He also adds that, with Redox OS being a microkernel, it is possible that even the driver level could be recompiled and respawned without downtime, which will make the operating system exceedingly fast to develop. In the coming months, he will be working on making it more efficient to port software and on tackling more hardware support issues. Eventually, Soller hopes to turn Redox OS into a fully self-hosted, microkernel operating system written in Rust.

Users are excited about the new developments in Redox OS and have thanked Soller for them. One Redditor commented, “I cannot tell you how excited I am to see the development of an operating system with greater safety guarantees and how much I wish to dual boot with it when it is stable enough to use daily.” Another Redditor says, “This is great! Love seeing updates to this project 👍”

https://twitter.com/flukejones/status/1200225781760196609

Head over to the official Redox OS news page for more details.

AWS will be sponsoring the Rust Project
A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency
Rust 1.38 releases with pipelined compilation for better parallelism while building a multi-crate project
Homebrew 2.2 releases with support for macOS Catalina
ActiveState adds thousands of curated Python packages to its platform


NVIDIA announces CUDA 10.2 will be the last release to support macOS

Bhagyashree R
25 Nov 2019
3 min read
NVIDIA announced the release of CUDA 10.2 last week. This is the last version to support macOS for developing CUDA applications; macOS support will be dropped completely in the next release. Other updates include libcu++, new interoperability APIs, and more.

Key updates in CUDA 10.2

General CUDA 10.2 updates

New APIs: CUDA 10.2 ships with CUDA Virtual Memory Management APIs. New interoperability APIs are added for buffer allocation, synchronization, and streaming. However, these are in beta and may change in future releases.

Support for new operating systems: This release adds support for a few new operating systems, including Fedora 29, Red Hat Enterprise Linux (RHEL) 7.x and 8.x, OpenSUSE 15.x, SUSE SLES 12.4 and SLES 15.x, Ubuntu 16.04.6 LTS, and Ubuntu 18.04.3 LTS. In CUDA 10.2, RHEL 6.x is deprecated, and its support will be dropped in the next release of CUDA.

Increased texture size limit for Maxwell+ GPUs: The 1D linear texture size limit for Maxwell+ GPUs in CUDA is now increased to 2^28.

Updates in CUDA tools

The NVIDIA CUDA Compiler (NVCC) now supports Clang 8.0 and Xcode 10.2 as host compilers. There is a new -forward-unknown-to-host-compiler option that forwards options not recognized by NVCC to the host compiler. Visual Profiler and NVProf now allow tracing features for non-root and non-admin users on desktop platforms, although events and metrics profiling is still restricted for non-root and non-admin users. Also, starting with CUDA 10.2, Visual Profiler and NVProf use a dynamic/shared CUPTI library; users are required to set the path to the CUPTI library before launching Visual Profiler and NVProf.

Updates in CUDA libraries

cuBLAS: The cuBLAS library is a fast GPU-accelerated implementation of the standard basic linear algebra subroutines (BLAS). In CUDA 10.2, performance is further improved for some large and other GEMM sizes due to an increased internal workspace size.

cuSOLVER: This library includes a collection of direct solvers that deliver significant acceleration for computer vision, CFD, and linear optimization applications. In this release, a new Tensor Cores Accelerated Iterative Refinement Solver (TCAIRS) is introduced. The cusolverMg library includes ‘cusolverMgGetrf’ and ‘cusolverMgGetrs’ to support multi-GPU LU factorization.

cuFFT: This library provides GPU-accelerated FFT implementations that perform up to 10x faster than CPU-only alternatives. This release comes with improved performance and scalability for these use cases: multi-GPU non-power-of-2 transforms, R2C and Z2D odd-sized transforms, and 2D transforms with small sizes and large batch counts.

These were a few updates in CUDA 10.2. Read the official release notes to know what else has shipped with this release.

CUDA 10.1 released with new tools, libraries, improved performance and more
Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
NVIDIA releases Kaolin, a PyTorch library to accelerate research in 3D computer vision and AI


Racket 7.5 releases with relicensing to Apache/MIT, standard JSON MIME, dark mode interface and more

Fatema Patrawala
22 Nov 2019
3 min read
On Tuesday, the Racket team announced Racket 7.5. Racket is a general-purpose programming language based on the Scheme dialect of Lisp and is designed to be a platform for programming-language design and implementation. The name "Racket" is also used to refer to the family of Racket programming languages and the set of tools supporting development on and with Racket.

Key features in Racket 7.5

- The new release is distributed under a new, less restrictive license: either the Apache 2.0 or the MIT license.
- Racket CS remains in beta for v7.5, but its compatibility and performance continue to improve. It is expected to be ready for production use by the next release.
- The Web Server provides a standard JSON MIME type, including a response/jsexpr form for HTTP responses bearing JSON.
- GNU MPFR operations run about 3x faster.
- Typed Racket supports definitions of new struct type properties, and type checking uses existing struct type properties in struct definitions. Previously, these were ignored by the type checker, so type errors may have been hidden.
- The performance bug in v7.4’s big-bang has been repaired.
- DrRacket supports Dark Mode for interface elements.
- plot can display parametric 3D surfaces, and redex supports modeless judgment forms.

Additionally, macOS Catalina 10.15 includes a new requirement that executables be “notarized,” to give Apple the ability to prevent certain kinds of malware. In this release, all of the disk images (.dmg’s) are notarized, along with the applications that they contain (.app’s). Many users may not notice any difference, but two groups of Catalina users will be affected: first, those who use the “racket” binary directly, and second, those who download the .tgz bundles. In both cases, the operating system is likely to report that the given executable is not trusted or that the developer can’t be verified.

Fortunately, both groups of users are probably also running commands in a shell, so the solution for both groups is the same: disable the quarantine flag using the xattr command, for example, xattr -d com.apple.quarantine /path/to/racket.

To know more about this news, check out the official announcement on the Racket page.

Matthew Flatt’s proposal to change Racket’s s-expressions based syntax to infix representation creates a stir in the community
Racket 7.3 releases with improved Racket-on-Chez, refactored IO system, and more
Racket 7.2, a descendent of Scheme and Lisp, is now out!
Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others


Facebook mandates Visual Studio Code as default development environment and partners with Microsoft for remote development extensions

Fatema Patrawala
21 Nov 2019
4 min read
On Tuesday, Facebook mandated Visual Studio Code, the source code editor developed by Microsoft, as its default development environment. Additionally, the company stated that it will work with Microsoft to expand the remote development extensions for Visual Studio Code so that engineers can do large-scale remote development.

As per the official announcement, Facebook engineers have written millions of lines of code, and there was previously no mandated development environment. Until now, Facebook developers used Vim or Emacs, and the development environment was disjointed; certain developers also used Nuclide, an integrated development environment developed by Facebook. In late 2018, the company announced to its internal engineers that it would move Nuclide to Visual Studio Code. It has also done plenty of development work to migrate the current Nuclide functionality, along with new features, to Visual Studio Code, which is currently used extensively across the company in beta.

Why Visual Studio Code?

Visual Studio Code is a very popular development tool with great support from Microsoft and the open-source community. It runs on macOS, Windows, and Linux, and has a robust, well-defined extension API that enables Facebook to continue building the important capabilities required for its large-scale development. The company believes it is a platform on which it can safely bet its development-platform future.

Facebook has also partnered with Microsoft on remote development. At present, Facebook engineers install Visual Studio Code on a local PC, but the actual development is done directly on a development server in the data center. The aim is to improve efficiency and productivity by making the code on the server accessible in a seamless, high-performance manner.

The company believes that using the remote extensions will provide many benefits, such as:

- Working with larger, faster, or more specialized hardware than what’s available on the local machine
- Creating tailored, dedicated environments for each project’s specific dependencies, without worrying about errors due to mixed or conflicting configurations
- Supporting the flexibility of quickly switching between multiple running development environments without impacting local resources or tool performance

Facebook mandates Visual Studio Code as the development environment to be used internally in particular because Facebook uses various programming languages. Since it also uses Mercurial as its source-control infrastructure, it will work on extensions to allow direct source-control operations within Visual Studio Code.

Facebook states, “VS Code is now an established part of Facebook’s development future. In teaming with Microsoft, we’re looking forward to being part of the community that helps Visual Studio Code continue to be a world class development tool.”

On Hacker News, developers are discussing various issues related to the remote development extensions in VS Code; one of them is that the extensions are not open source, and that Facebook should take efforts to join a more open project. One comment reads, “Just an FYI for people - The Remote Development extensions are not open source. I'd hope if Facebook were joining efforts, they'd do so on a more open project. 1: https://code.visualstudio.com/docs/remote/faq#_why-arent-the... 2: https://github.com/microsoft/vscode/wiki/Differences-between... 3: https://github.com/VSCodium/vscodium/issues/240 (aka, on-the-wire DRM to make sure the remote components only talk to a licensed VS Code build from Microsoft) MS edited the licensing terms many moons ago, to prepare for VS Code in browser using these remote extensions/apis that no one else can use)- https://github.com/microsoft/vscode/issues/48279 Finally, this is the thread where you will see regular users being negatively impacted by the DRM (a closed source, non-statically linked proprietary binary downloaded at runtime) that implements this proprietary-ness: https://github.com/microsoft/vscode-remote-release/issues/10... (of course, also with enough details to potentially patch around this issue if you were so inclined). Further, MS acknowledged that statically linking would help in May, and yet it appears to still be an issue. I just hope they don't come after Eclipse Theia…”

Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]
5 useful Visual Studio Code extensions for Angular developers
Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more

Debian 10.2 Buster Linux distribution releases with the latest security and bug fixes

Bhagyashree R
18 Nov 2019
3 min read
Last week, the Debian team released Debian 10.2, the latest point release in the “Buster” series. This release includes a number of bug fixes and security updates. In addition, starting with this release, Firefox ESR (Extended Support Release) is no longer supported on the ARMEL variant of Debian.

Key updates in Debian 10.2

Security updates

Some of the security fixes added in Debian 10.2 are:

- Apache2: The following vulnerabilities reported in the Apache HTTPD server are fixed: CVE-2019-9517, CVE-2019-10081, CVE-2019-10082, CVE-2019-10092, CVE-2019-10097, CVE-2019-10098.
- Nghttp2: Two vulnerabilities, CVE-2019-9511 and CVE-2019-9513, found in the HTTP/2 code of the nghttp2 HTTP server are fixed.
- PHP 7.3: In PHP, five security issues that could result in information disclosure or denial of service were fixed: CVE-2019-11036, CVE-2019-11039, CVE-2019-11040, CVE-2019-11041, CVE-2019-11042.
- Linux: In the Linux kernel, five security issues were fixed that may have otherwise led to privilege escalation, denial of service, or information leaks: CVE-2019-14821, CVE-2019-14835, CVE-2019-15117, CVE-2019-15118, CVE-2019-15902.
- Thunderbird: The security issues reported in Thunderbird could have potentially resulted in the execution of arbitrary code, cross-site scripting, and information disclosure. These are tracked as CVE-2019-11739, CVE-2019-11740, CVE-2019-11742, CVE-2019-11743, CVE-2019-11744, CVE-2019-11746, CVE-2019-11752.

Bug fixes

Debian 10.2 brings several bug fixes for some popular packages, some of which are:

- Emacs: The European Patent Litigation Agreement (EPLA) key is now updated.
- Flatpak: Debian 10.2 includes the new upstream stable release of Flatpak, a tool for building and distributing desktop applications on Linux.
- GNOME Shell: In addition to including the new upstream stable release of GNOME Shell, this release fixes truncation of long messages in Shell-modal dialogs and avoids a crash on the reallocation of dead actors.
- LibreOffice: The PostgreSQL driver is now fixed to work with PostgreSQL 12.
- Systemd: Starting from Debian 10.2, reload failures no longer get propagated to service results, and ‘sync_file_range’ failures in nspawn containers on ARM and PPC systems are fixed.
- uBlock: The uBlock ad blocker is updated to its new upstream version and is compatible with Firefox ESR68.

These were some of the updates in Debian 10.2. Check out the official announcement by the Debian team to know what else has shipped in this release.

Severity issues raised for Python 2 Debian packages for not supporting Python 3
Debian 10 codenamed ‘buster’ released, along with Debian GNU/Hurd 2019 as a port
Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap


Qt releases the technical preview of WebAssembly based QML open source design viewer

Vincy Davis
25 Oct 2019
2 min read
Two days ago, the Qt team released a technical preview of an open-source QML design viewer based on Qt for WebAssembly. This design viewer enables QML applications to run in web browsers like Chrome, Safari, Firefox, and Edge. Qt for WebAssembly is a platform plugin that allows users to build Qt applications with web-page integrations.

To run a custom QML application, a user has to define the main QML file and the import paths with a .qmlproject file. The project folder then has to be compressed as a ZIP file and uploaded to the design viewer. Users can also generate a resource file out of their project and upload the package.

Image source: Qt blog

Read More: Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

The Qt team has tested the design viewer with the latest versions of Chrome, Safari, Firefox, and Edge and has found that QML applications run well in all of them. “The startup and compilation time depends on your browser and configuration, but the actual performance of the application, once it is started, is indistinguishable from the same application running on the desktop,” states the official blog. The design viewer also runs on Android and iOS, ships with most QML modules, and is based on a snapshot of Qt 5.14.

Many users have liked the web-based design viewer. A user on Hacker News comments, “One of the most beautiful things I have seen in 2019. Brilliant!” Another comment reads, “This looks pretty cool! I am actually shopping for a GUI framework for a new project and WebAssembly support is a potential critical feature.”

Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices
GitLab retracts its privacy invasion policy after backlash from community
Are we entering the quantum computing era? Google’s Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim
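The packaging step described above — compressing the project folder into a ZIP for upload — can be sketched in a few lines of Python. The folder layout and file names used here (a myproject/ folder containing main.qml and myproject.qmlproject) are illustrative assumptions, not part of the Qt announcement:

```python
import zipfile
from pathlib import Path

def pack_qml_project(project_dir: str, zip_path: str) -> str:
    """Compress a QML project folder into a ZIP archive for upload.

    Paths inside the archive are kept relative to the project folder,
    so the .qmlproject file ends up at the archive root, where a
    viewer would look for it.
    """
    root = Path(project_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in sorted(root.rglob("*")):
            if file.is_file():
                zf.write(file, file.relative_to(root).as_posix())
    return zip_path
```

For example, pack_qml_project("myproject", "myproject.zip") produces an archive with myproject.qmlproject at its root, ready to upload to the design viewer.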


Electron 7.0 releases in beta with Windows on Arm 64 bit, faster IPC methods, nativetheme API and more

Fatema Patrawala
24 Oct 2019
3 min read
Last week, the team at Electron announced the release of Electron 7.0 in beta. It includes upgrades to Chromium 78, V8 7.8, and Node.js 12.8.1. The team has also added a Windows on Arm (64-bit) release, faster IPC methods, a new nativeTheme API, and much more. This release is published to npm under the beta tag and can be installed via npm install electron@beta, or npm i electron@7.0.0-beta.7. It is packed with upgrades, fixes, and new features.

Notable changes in Electron 7.0

- Stack upgrades: Electron 7.0 is built on Chromium 78, V8 7.8, and Node.js 12.8.1.
- Windows on Arm (64-bit) support has been added.
- The team has added ipcRenderer.invoke() and ipcMain.handle() for asynchronous request/response-style IPC. These are strongly recommended over the remote module.
- A nativeTheme API has been added to read and respond to changes in the OS’s theme and color scheme.
- The team has switched to a new TypeScript definitions generator, which generates more precise definition files (d.ts). Electron previously used Doc Linter and Doc Parser, but they had a few issues, hence the shift to the new generator to produce better definition files without losing any information from the docs.

Other breaking changes

The team has removed deprecated APIs in this release:

- Callback-based versions of functions that now use Promises
- Tray.setHighlightMode() (macOS)
- app.enableMixedSandbox()
- app.getApplicationMenu(), app.setApplicationMenu(), powerMonitor.querySystemIdleState(), powerMonitor.querySystemIdleTime(), webFrame.setIsolatedWorldContentSecurityPolicy(), webFrame.setIsolatedWorldHumanReadableName(), webFrame.setIsolatedWorldSecurityOrigin()

In addition:

- Session.clearAuthCache() no longer allows filtering the cleared cache entries.
- Native interfaces on macOS (menus, dialogs, etc.) now automatically match the dark mode setting on the user’s machine.
- The electron module has been updated to use @electron/get. Node 8 is the minimum supported Node version in this release.
- The electron.asar file no longer exists; any packaging scripts that depend on its existence should be updated by developers.

Additionally, the team announced that Electron 4.x.y has reached end-of-support as per the project’s support policy. Developers and applications are encouraged to upgrade to a newer version of Electron. To know more about this release, check out the Electron 7.0 GitHub page and the official blog post.

Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more
Electron 5.0 ships with new versions of Chromium, V8, and Node.js
The Electron team publicly shares the release timeline for Electron 5.0

OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more

Bhagyashree R
18 Oct 2019
3 min read
Yesterday, the team behind OpenBSD, a Unix-like operating system, announced the release of OpenBSD 6.6. This release disables the GNU Compiler Collection (GCC) in the base packages for i386 and ARMv7 and expands LLVM Clang platform support. OpenBSD 6.6 also features various SMP improvements, improved Linux compatibility with ACPI interfaces, a number of new hardware drivers, and more. It ships with OpenSSH 8.1, LibreSSL 3.0.2, OpenSMTPD 6.6, and other updated packages.

Read also: OpenSSH code gets an update to protect against side-channel attacks

Key updates in OpenBSD 6.6

Unlocked system calls

OpenBSD 6.6 comes with unlocked ‘getrlimit’ and ‘setrlimit’ system calls, which are used for controlling maximum system resource consumption. There are also unlocked read and write system calls for reading input and writing output, respectively.

Improved hardware support

OpenBSD 6.6 comes with Linux-compatible ACPI interfaces, and ACPI support is enabled in ‘radeon’ and ‘amdgpu’. The Time Stamp Counter (TSC) is re-enabled as the default AMD64 time source, and TSC synchronization is added for multiprocessor machines. This release also supports the cryptographic coprocessor found on newer AMD Ryzen CPUs/APUs.

IEEE 802.11 wireless stack improvements

The ifconfig ‘nwflag’ is now repaired. A new ‘stayauth’ nwflag is added, which you can set to ignore deauth frames to protect your system from a spoofing attack. Support for 802.11n Tx aggregation is added to net80211 and the ‘iwn’ driver. Starting with OpenBSD 6.6, all wireless drivers submit a batch of received packets to the network stack during one interrupt, instead of submitting them individually.

Security improvements

The unveil command is updated to improve application behavior when encountering hidden filesystem paths. OpenBSD 6.6 has improved mitigations against a number of vulnerabilities, including the Spectre side-channel vulnerability in Intel CPUs and Intel’s Microarchitectural Data Sampling vulnerability. This release also introduces ‘malloc_conceal’ and ‘calloc_conceal’, which return memory in pages marked ‘MAP_CONCEAL’ and call ‘freezero’ on ‘free’.

Read also: Seven new Spectre and Meltdown attacks found

In a discussion on Hacker News, many users expressed their excitement. A user commented, “Just keeps getting better and better every release. I wish they would add an easy encryption option in the installer. You can enable full-disk encryption, but you have to mess with the bioctl settings, which potentially scares off new users.”

A few users also wondered why this release has U2F support and Bluetooth disabled for security. A user explained, “I'm not sure why U2F would be "disabled for security". I guess it's just that nobody has implemented all the required things. For the USB tokens, you need userspace USB HID access and hotplug notifications. I did that in Firefox for FreeBSD.”

These were some of the updates in OpenBSD 6.6. Check out the official announcement to know more.

OpenBSD 6.4 released
OpenSSH code gets an update to protect against side-channel attacks
OpenSSH 8.0 released; addresses SCP vulnerability and new SSH additions

Vincy Davis
18 Oct 2019
4 min read

Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!

Yesterday, Canonical announced the release of Ubuntu 19.10, the fastest Ubuntu release yet, with significant performance improvements to accelerate developer productivity in AI/ML. This release brings enhanced edge computing capabilities with the addition of strict confinement to MicroK8s, which guarantees complete isolation and presents a secure, production-grade Kubernetes environment. This allows MicroK8s add-ons like Istio, Knative, CoreDNS, Prometheus, and Jaeger to be deployed securely at the edge with a single command. Ubuntu 19.10 also delivers other features such as NVIDIA drivers embedded in the ISO image to improve the performance of gamers and AI/ML users.

The CEO of Canonical, Mark Shuttleworth, says, “With the 19.10 release, Ubuntu continues to deliver strong support, security and superior economics to enterprises, developers and the wider community.” The Ubuntu team has notified users that Ubuntu 19.10 will only be supported for 9 months, until July 2020. Users who need Long Term Support are advised to use Ubuntu 18.04.

What’s new in Ubuntu 19.10?

Updated packages

Linux kernel: The new release is based on the Linux 5.3 series and supports AMD Navi GPUs, ARM SoCs, ARM Komeda display, and Intel speed select on Xeon servers. To improve boot speed, the default kernel compression algorithm has moved to lz4 on most architectures, and the default initramfs compression algorithm has changed to lz4 on all architectures.

Toolchain upgrades: It also brings new upstream releases of glibc 2.30, OpenJDK 11, Rust 1.37, GCC 9.2, updated Python 3.7.5, Python 3.8.0 (interpreter only), ruby 2.5.5, php 7.3.8, perl 5.28.1, and golang 1.12.10.

Read More: Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Security improvements

Ubuntu 19.10 enables additional default hardening options in GCC, including support for both stack clash protection and control-flow integrity protection.
Ubuntu Desktop

GNOME 3.34 desktop: Ubuntu 19.10 includes GNOME 3.34, which brings a lot of bug fixes, some new features, and a significant improvement in responsiveness and speed. It allows grouping icons in the Activities overview and has improved wallpaper and Wi-Fi settings.

ZFS on root: This is included as an experimental feature in this release. Users can create the ZFS file system and also partition the layout directly from the installer.

Read More: Ubuntu 19.10 will now support experimental ZFS root file-system install option

NVIDIA-specific improvements: The driver is now included in the ISO, with improved startup reliability when the NVIDIA driver is in use. Ubuntu 19.10 also brings improved smoothness and frame rates for NVIDIA.

Other new features in Ubuntu 19.10

A USB drive can be plugged in and accessed directly from the dock.
New themes like Yaru light and dark variants are now available.
Support for DLNA sharing is now available by default.

Ubuntu Server

Images: Ubuntu 19.10 prefers the production-ready ppc64el and arm64 live-server ISO images to install Ubuntu Server on bare metal on those two architectures.

Raspberry Pi: The Raspberry Pi 32-bit and 64-bit preinstalled images (raspi3) are supported in this release. Ubuntu images now support almost all devices of the Raspberry Pi family: Pi 2, Pi 3B, Pi 3B+, CM3, CM3+, and Pi 4.

Users have appreciated the new features in Ubuntu 19.10.
https://twitter.com/dont39350/status/1184902506238926850
https://twitter.com/ImpWarfare/status/1184844081576456193
https://twitter.com/robinjuste/status/1183891524242857986

These are some of the selected updates in Ubuntu 19.10; read the release notes for more information. You can also check out the Ubuntu blog for more details on the release.
Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
What to expect from D programming language in the near future
Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more