
How-To Tutorials - Languages

135 Articles

Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the World Wide Developers Conference (WWDC) in 2015, it also declared that Swift was the world's first protocol-oriented programming (POP) language. From the name, we might assume that POP is all about protocols; however, that would be a wrong assumption. POP is about so much more than just protocols; it is actually a new way of not only writing applications but also thinking about programming. This article is an excerpt from the book Mastering Swift 5.3, 6th Edition by Jon Hoffman. In this article, we will discuss protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way.

Requirements

When we develop applications, we usually have a set of requirements to develop against. With that in mind, let's define the requirements for the animal types that we will be creating in this article:

- We will have three categories of animals: land, sea, and air.
- Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories.
- Animals may attack and/or move when they are on a tile that matches the categories they are in.
- Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, they will be considered dead.

POP design

We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design.

[Figure 1: Protocol-oriented design]

In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions.

Protocol inheritance

Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. Unlike a class in Swift, which can have only one superclass, a protocol can inherit requirements from multiple protocols. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix and match them to create larger ones. You will want to be careful not to create protocols that are too granular, because they will become hard to maintain and manage.

Protocol composition

Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements. Protocol inheritance and composition are really powerful features, but they can also cause problems if used wrongly. They may not seem that powerful on their own; however, when we combine them with protocol extensions, we have a very powerful programming paradigm. Let's look at how powerful this paradigm is.

Protocol-oriented design: putting it all together

We will begin by writing the Animal superclass as a protocol:

```swift
protocol Animal {
    var hitPoints: Int { get set }
}
```

In the Animal protocol, the only item we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements that would be common to every animal. For our example, we only need to add the hitPoints property to this protocol. Next, we need to add an Animal protocol extension, which will contain the functionality that is common to all types that conform to the protocol.
Our Animal protocol extension would contain the following code:

```swift
extension Animal {
    mutating func takeHit(amount: Int) {
        hitPoints -= amount
    }
    func hitPointsRemaining() -> Int {
        return hitPoints
    }
    func isAlive() -> Bool {
        return hitPoints > 0
    }
}
```

The Animal protocol extension contains the takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically receive these three methods. Now let's define our LandAnimal, SeaAnimal, and AirAnimal protocols, which will define the requirements for the land, sea, and air animals respectively:

```swift
protocol LandAnimal: Animal {
    var landAttack: Bool { get }
    var landMovement: Bool { get }
    func doLandAttack()
    func doLandMovement()
}

protocol SeaAnimal: Animal {
    var seaAttack: Bool { get }
    var seaMovement: Bool { get }
    func doSeaAttack()
    func doSeaMovement()
}

protocol AirAnimal: Animal {
    var airAttack: Bool { get }
    var airMovement: Bool { get }
    func doAirAttack()
    func doAirMovement()
}
```

These three protocols contain only the functionality needed for their particular type of animal; each contains only four requirements. This makes our protocol design much easier to read and manage. The design is also much safer because the functionality for the various animal types is isolated in its own protocol rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category and, instead, define the category of an animal by the protocols it conforms to. In a full design, we would probably need to add some protocol extensions for each of the animal types, but we do not need them for our example here.

Now, let's look at how we would create our Lion and Alligator types using protocol-oriented design:

```swift
struct Lion: LandAnimal {
    var hitPoints = 20
    let landAttack = true
    let landMovement = true

    func doLandAttack() { print("Lion Attack") }
    func doLandMovement() { print("Lion Move") }
}

struct Alligator: LandAnimal, SeaAnimal {
    var hitPoints = 35
    let landAttack = true
    let landMovement = true
    let seaAttack = true
    let seaMovement = true

    func doLandAttack() { print("Alligator Land Attack") }
    func doLandMovement() { print("Alligator Land Move") }
    func doSeaAttack() { print("Alligator Sea Attack") }
    func doSeaMovement() { print("Alligator Sea Move") }
}
```

Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type conform to multiple protocols is called protocol composition, and it is what allows us to use smaller protocols rather than one giant monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they will inherit the functionality added with the Animal protocol extension. If our animal type protocols also had extensions, they would inherit the functions added by those extensions too. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types that they conform to.

Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let's look at how this works:
```swift
var animals = [Animal]()

animals.append(Alligator())
animals.append(Alligator())
animals.append(Lion())

for (index, animal) in animals.enumerated() {
    if let _ = animal as? AirAnimal {
        print("Animal at \(index) is Air")
    }
    if let _ = animal as? LandAnimal {
        print("Animal at \(index) is Land")
    }
    if let _ = animal as? SeaAnimal {
        print("Animal at \(index) is Sea")
    }
}
```

In this example, we create an array named animals that will contain Animal types. We then create two instances of the Alligator type and one instance of the Lion type, which are added to the animals array. Finally, we use a for-in loop to loop through the array and print out the animal type based on the protocols that each instance conforms to.
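Running this loop produces the following output: the two alligators match both the LandAnimal and SeaAnimal casts, while the lion matches only LandAnimal.

```
Animal at 0 is Land
Animal at 0 is Sea
Animal at 1 is Land
Animal at 1 is Sea
Animal at 2 is Land
```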
Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman.

About the author

Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.


PowerShell Basics for IT Professionals

Savia Lobo
16 Dec 2019
6 min read
PowerShell is Microsoft's automation platform for IT pros. Of late, there have been a lot of questions around the complexity of this automation tool. At Microsoft Ignite 2018, Jason Himmelstein, Director of Technical Strategy and Strategic Partnerships and Office Apps & Services MVP, explained the basics of PowerShell and how to truly optimize your SharePoint implementation using this powerful IT pro toolset. While in this post we look at the big picture, you can check out the complete video here: 'Introduction to PowerShell for the anxious IT pro'.

Want to do more with PowerShell? After learning the basics, you can learn how to use PowerShell to automate complex Windows server tasks. You can also improve PowerShell's usability, and control and manage Windows-based environments, by working through the recipes in Windows Server 2019 Automation with PowerShell Cookbook - Third Edition, written by Thomas Lee.

Himmelstein starts off by saying that PowerShell isn't a packaged executable, nor is it developer-centric in a way that requires you to understand code; it is easy for an IT pro to understand.

What is PowerShell?

Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and an associated scripting language built on the .NET Framework. It provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems. In simple words, PowerShell is an object-based, not a text-based, command-line interface for Microsoft technologies. This means results in PowerShell can be acted upon and not just read. One can cause huge damage to an environment using PowerShell, as there is no back button: you can check the logs to see what went wrong, but you cannot undo actions.

Why PowerShell matters

Regardless of the platform a person uses, such as Office 365 or Azure, PowerShell can be easily implemented due to its cross-platform capability. Himmelstein also highlights that one can get started with Azure PowerShell by trying it out in Azure Cloud Shell, an interactive, authenticated, browser-accessible shell for managing Azure resources.

Azure Cloud Shell comes equipped with commonly used CLI tools, including Linux shell interpreters, PowerShell modules, Azure tools, text editors, source control, build tools, container tools, database tools, and more. Cloud Shell also includes language support for several popular programming languages such as Node.js, .NET, and Python, and it securely authenticates automatically for instant access to your resources through the Azure CLI or Azure PowerShell cmdlets. One can also develop applications using PowerShell, or use PowerShell via source control management (SCM).

Basics of PowerShell

PowerShell hardware

There are two ways one can use PowerShell: one is via the PowerShell Console, which is similar to a command line; the other is the PowerShell ISE (Integrated Scripting Environment). One thing Himmelstein encourages is, "we run PowerShell in the Console and we write PowerShell in the ISE." The reason is that certain functionality does not work in the ISE when one hits the 'Run' command. In such cases, the user will have to take that PowerShell out, copy it, save the file, and run it in a command window.

Cmdlets

Cmdlets are the main building blocks of PowerShell. These are mini commands that perform one action.
Cmdlets can pipe their output into further cmdlets, and they can perform equality tests with expressions such as -eq, -lt, and -match; one can diff easily within PowerShell.

Modules

There are four types of modules in PowerShell:

- Script: a script module is a file (.psm1) that contains any valid Windows PowerShell code.
- Binary: a binary module is a .NET Framework assembly (.dll) that contains compiled code.
- Manifest: a module manifest is a Windows PowerShell data file (.psd1) that describes the contents of a module and determines how a module is processed.
- Dynamic: a dynamic module does not persist to disk. It is created using New-Module, is intended to be short-lived, and cannot be accessed by Get-Module.

Himmelstein prefers not to use dynamic modules, as they persist for just one session.

Objects and Members

Objects are instances of classes and have properties and methods. Members are the properties and methods of an object: properties define what an object is, and methods define what you can do with the object. Himmelstein puts all these terms together in a simple way:

- Objects = stuff
- Cmdlets = things you can do with the stuff
- Modules = lists of things you can do with the stuff
- Properties = details about the stuff
- Methods = instructions for things you can do with the stuff

Pipeline

Using pipelines, one can chain cmdlets together for processing: the output of one pipelined cmdlet becomes the input of the next.

Functional explanation

- Get-Command: gets all the cmdlets installed on your computer.
- Get-Help: displays additional information about a cmdlet.
- Get-Member: lists the properties and methods of a command or object.
- Get-Verb: gets approved Windows PowerShell verbs.
- Start-Transcript: logs everything you do in that PowerShell window to a file.
- Get-History: if you didn't start a transcript, you can still review your history before closing your Shell or ISE window.

Tips for PowerShell beginners

- Use variables. You can use any variables except the ones reserved by the system; you will be prompted if you try to use a reserved variable.
- Call one thing at a time.
- Comment your scripts, as this may save you a lot of time.
- Create scripts using an ISE/IDE; you can also use Visual Studio Code and then execute in the shell.
- Dispose of your objects. Close the command window by typing Exit.
- Test before using in production.
- Write reusable scripts.

What PowerShell beginners should avoid

- Rewriting your variables.
- Hard-coding values such as passwords into your scripts.
- Taking code from the internet or a vendor and just running it in your environment (you should read every piece of code before you run it in your environment). Assuming the code is not harmful; it is. There is no back button in PowerShell, and you cannot undo things.
- Running your code in an IDE/ISE and expecting everything to work.

PowerShell syntax and bracketology

Syntax:

- '#' is for a comment
- '+' is for add
- '=' and '-eq' are for equal
- '!', '-ne', and '-not' are for not equal

Brackets:

- '()' curved brackets, also known as parentheses, are used for required options, compulsory arguments, and control structures.
- '{}' curly brackets are used for block expressions within a command block and are also used to open a code block.
- '[]' square brackets are used to denote optional elements or parameters and are also used for match functions.

Now that you know the basics of PowerShell, you can start performing key admin tasks on Windows Server 2019.
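To tie a few of these pieces together, here is a minimal sketch of cmdlets chained through the pipeline. It uses only built-in cmdlets (Get-Process, Where-Object, Sort-Object, Select-Object); the 10-second threshold is an arbitrary value chosen for illustration:

```powershell
# List the five busiest processes by CPU time.
# Each stage passes objects, not text, to the next stage.
Get-Process |
    Where-Object { $_.CPU -gt 10 } |     # filter: more than 10 seconds of CPU time
    Sort-Object CPU -Descending |        # sort the surviving objects by the CPU property
    Select-Object -First 5 Name, CPU     # keep just the properties we care about
```

Because the pipeline carries objects, each stage can act on properties like CPU directly, with no text parsing in between.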
To further learn how to employ best practices for writing PowerShell scripts, configure Windows Server 2019, and leverage PowerShell to automate complex Windows server tasks, check out our book, Windows Server 2019 Automation with PowerShell Cookbook - Third Edition, written by Thomas Lee.

Read next:

- Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]
- Scripting with Windows PowerShell Desired State Configuration [Video]
- Automate tasks using Azure PowerShell and Azure CLI [Tutorial]


Understanding Result Type in Swift 5 with Daniel Steinberg

Sugandha Lahoti
16 Dec 2019
4 min read
One of the first things many programmers add to their Swift projects is a Result type. From Swift 5 onwards, Swift includes an official Result type. In his talk at iOS Conf SG 2019, Daniel Steinberg explained why developers need a Result type, how and when to use it, and what map and flatMap bring to Result.

Swift 5, released in March this year, hosts a number of key features. If you want to learn and master Swift 5, you may like to go through Mastering Swift 5, a book by Packt Publishing. Inside this book, you'll find the key features of Swift 5 explained with complete sets of examples.

Handle errors in Swift 5 easily with the Result type

The Result type gives a simple, clear way of handling errors in complex code, such as asynchronous APIs. Daniel describes the Result type as a hybrid of optionals and errors. He says, "We've used it like optionals but we've got the power of errors: we know what went wrong and we can pull that error out at any time that we need it. The idea was we have one return type whether we succeeded or failed. We get a record of our first error and we are able to keep going if there are no errors."

In Swift 5, the Result type is implemented as an enum that has two cases: success and failure. Both are implemented using generics, so they can have an associated value of your choosing, but failure must be something that conforms to Swift's Error type. Because the Error protocol now conforms to itself in the standard library, working with errors is easier.

[Image taken from Daniel's presentation]

The Result type has four other methods, namely map(), flatMap(), mapError(), and flatMapError(). These methods enable many other kinds of transformations using inline closures and functions. The map() method looks inside the Result and transforms the success value into a different kind of value using the closure specified. However, if it finds failure instead, it just uses that directly and ignores the transformation. Basically, it enables the automatic transformation of a value (error) through a closure, but only in the case of success (failure); otherwise, the Result is left unmodified. flatMap() returns a new result, mapping any success value using the given transformation and unwrapping the produced result. Daniel says, "If I need recursion I'm often reaching for flatMap," adding, "Things that can't fail use map() and things that can fail use flatMap()."

mapError(_:) returns a new result, mapping any failure value using the given transformation, and flatMapError(_:) does the same while also unwrapping the produced result. flatMap() (flatMapError()) is useful when you want to transform your value (error) using a closure that itself returns a Result, to handle the case where the transformation can fail.

Using a Result type can be a great way to reduce ambiguity when dealing with values and results of asynchronous operations. By adding convenience APIs using extensions, we can also reduce boilerplate and make it easier to perform common operations when working with results, all while retaining full type safety.
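To make this concrete, here is a minimal sketch of Result and map() in action; the FetchError type and makeURL function are hypothetical names invented for illustration:

```swift
import Foundation

// A hypothetical error type for illustration.
enum FetchError: Error {
    case badURL
}

// A hypothetical function that validates a URL string,
// returning Result instead of throwing.
func makeURL(from string: String) -> Result<URL, FetchError> {
    guard let url = URL(string: string), url.scheme != nil else {
        return .failure(.badURL)
    }
    return .success(url)
}

// map() transforms the success value; failures pass through untouched.
let hostResult = makeURL(from: "https://example.com")
    .map { $0.host ?? "unknown" }

switch hostResult {
case .success(let host):
    print("Host: \(host)")      // prints "Host: example.com"
case .failure(let error):
    print("Failed: \(error)")
}
```

Because map() only touches the success case, a .failure(.badURL) result would flow through the transformation unchanged and land in the failure branch of the switch.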
You can watch Daniel Steinberg's full video on YouTube, where he explains the Result type with detailed code examples and points out common mistakes. If you want to learn more about all the new features of the Swift 5 programming language, check out our book, Mastering Swift 5 by Jon Hoffman.

Read next:

- Swift 5 for Xcode 10.2 is here!
- Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
- Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta


OpenJDK Project Valhalla’s head shares how they plan to enhance the Java language and JVM with value types, and more

Bhagyashree R
10 Dec 2019
4 min read
Announced in 2014, Project Valhalla is an experimental OpenJDK project to bring major new language features to Java 10 and beyond. It primarily focuses on enabling developers to create and utilize value types, or non-reference values. Last week, the project's head, Brian Goetz, shared the goals, motivation, current status, and other details of the project in a set of documents called "State of Valhalla".

Goetz shared that in the span of five years, the team has come up with five distinct prototypes of the project. Sharing its current state, he wrote, "We believe we are now at the point where we have a clear and coherent path to enhance the Java language and virtual machine with value types, have them interoperate cleanly with existing generics, and have a compatible path for migrating our existing value-based classes to inline classes and our existing generic classes to specialized generics."

The motivation behind Project Valhalla

One of the main motivations behind Project Valhalla was adapting the Java language and runtime to modern hardware. It has been almost 25 years since Java was introduced, and a lot has changed since then. At that time, the cost of a memory fetch and an arithmetic operation was roughly the same, but this is not the case now: memory fetch operations have become 200 to 1,000 times more expensive than arithmetic operations.

Java is often considered a pointer-heavy language, as most Java data structures in an application are objects, or reference types. This is why Project Valhalla aims to introduce value types, to get rid of that overhead both in memory and in computation. Goetz wrote, "We aim to give developers the control to match data layouts with the performance model of today's hardware, providing Java developers with an easier path to flat (cache-efficient) and dense (memory-efficient) data layouts without compromising abstraction or type safety."

The language model for incorporating inline types

Goetz went on to talk about how the team is accommodating inline classes in the language type system. He wrote, "The motto for inline classes is: codes like a class, works like an int; the latter part of this motto means that inline types should align with the behaviors of primitive types outlined so far." This means that inline classes will enable developers to write types that behave more like Java's built-in primitive types. Inline classes are similar to current classes in the sense that they can have properties, methods, constructors, and so on. The difference Project Valhalla brings is that instances of inline classes, or inline objects, do not have identity, the property that distinguishes them from other objects. This is why identity-sensitive operations, such as synchronization, are not possible with inline objects.

There are a bunch of other differences between inline and identity classes. Goetz wrote, "Object identity serves, among other things, to enable mutability and layout polymorphism; by giving up identity, inline classes must give up these things. Accordingly, inline classes are implicitly final, cannot extend any other class besides Object...and their fields are implicitly final." In Project Valhalla, types are divided into inline and reference types, where inline types include primitives, and reference types are those that are not inline types, such as declared identity classes, declared interfaces, array types, and so on.
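As a rough illustration, the sketch below uses the experimental inline class syntax from the Valhalla early-access prototypes; the Point class is a made-up example, and the syntax is provisional and has varied across prototypes, so treat it as a flavor of the idea rather than final Java:

```java
// An inline class: instances have no identity, so the JVM is free to
// flatten them into containing objects or arrays instead of placing
// each one on the heap behind a pointer.
inline class Point {
    // Fields of an inline class are implicitly final.
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int x() { return x; }
    int y() { return y; }
}

// An array of Points can then be laid out as a dense block of ints
// (flat and cache-efficient) rather than as an array of references.
```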
He further listed a few migration scenarios, including value-based classes, primitives, and specialized generics. Check out Goetz's posts to learn more about Project Valhalla.

Read next:

- OpenJDK Project Valhalla is ready for developers working on building data structures or compiler runtime libraries
- OpenJDK Project Valhalla's LW2 early access builds are now available for you to test
- OpenJDK Project Valhalla is now in Phase III


Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more

Vincy Davis
08 Nov 2019
4 min read
Less than two months after announcing Rust 1.38, the Rust team announced the release of Rust 1.39 yesterday. The new release brings the stable version of the async-await syntax, which allows users not only to define async functions and blocks, but also to .await them. The other improvements in Rust 1.39 include shared references to by-move bindings in match guards and attributes on function parameters.

The stable version of the async-await syntax

A stable async function (written async fn instead of fn) returns a Future when called. A Future is a suspended computation that is driven to completion by .awaiting it. Along with async fn, the async { ... } and async move { ... } blocks can be used to define async literals.

According to Nicholas D. Matsakis, a member of the release team, the first stable support of async-await marks the start of a "Minimum Viable Product (MVP)", as the Rust team will now try to improve the syntax by polishing and extending it for future operations. "With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async/.await, which we'll tell you more about in the future," states the official Rust blog.

Some of the major developments in the async ecosystem:

- The tokio runtime will be releasing a number of scheduler improvements with support for the async-await syntax this month.
- The async-std runtime library will be publishing its first stable release in a few days.
- async-await support has already started to become available in higher-level web frameworks and other applications, such as the futures_intrusive crate.

Other improvements in Rust 1.39

Better ergonomics for match guards

Earlier versions of Rust disallowed taking shared references to by-move bindings in the if guards of match expressions. Starting from Rust 1.39, the compiler allows binding in the following two ways:

- by-reference: either immutably or mutably, achieved through ref my_var or ref mut my_var respectively.
- by-value: either by-copy, if the bound variable's type implements Copy, or otherwise by-move.

The Rust team hopes that this feature will give developers a smoother and more consistent experience with expressions.

Attributes on function parameters

Unlike previous versions, Rust 1.39 enables three types of attributes on the parameters of functions, closures, and function pointers:

- Conditional compilation: cfg and cfg_attr
- Controlling lints: allow, warn, deny, and forbid
- Helper attributes used for procedural macro attributes

Many users are happy with the Rust 1.39 features and are especially excited about the stable version of the async-await syntax. A user on Hacker News comments, "Async/await lets you write non-blocking, single-threaded but highly interweaved firmware/apps in allocation-free, single-threaded environments (bare-metal programming without an OS). The abstractions around stack snapshots allow seamless coroutines and I believe will make rust pretty much the easiest low-level platform to develop for." Another comment read, "This is big! Turns out that syntactic support for asynchronous programming in Rust isn't just syntactic: it enables the compiler to reason about the lifetimes in asynchronous code in a way that wasn't possible to implement in libraries. The end result of having async/await syntax is that async code reads just like normal Rust, which definitely wasn't the case before. This is a huge improvement in usability."
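To show the shape of the newly stabilized syntax, here is a minimal sketch; the function names are made up, and it assumes the futures crate (version 0.3) purely to provide a simple executor:

```rust
use futures::executor::block_on;

// Calling an async fn does no work by itself; it returns a Future.
async fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}

async fn run() {
    // .await suspends `run` until `greet`'s future completes.
    let message = greet("Rust 1.39").await;
    println!("{}", message);
}

fn main() {
    // Drive the top-level future to completion on a simple executor.
    block_on(run());
}
```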
A few developers have already upgraded to Rust 1.39 and shared their feedback on Twitter: https://twitter.com/snoyberg/status/1192496806317481985

Check out the official announcement for more details. You can also read the blog post on async-await for more information.

Read next:

- AWS will be sponsoring the Rust Project
- A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency
- Fastly announces the next-gen edge computing services available in private beta
- Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database
- Yubico reveals Biometric YubiKey at Microsoft Ignite


Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more

Savia Lobo
06 Nov 2019
3 min read
Yesterday, Microsoft announced the release of TypeScript 3.7 with new tooling features, optional chaining, nullish coalescing, assertion functions, and much more. This release also includes breaking changes: there are a few changes in the DOM, where the types in lib.dom.d.ts have been updated, and the typeArguments property has been removed from the TypeReference interface. Also, TypeScript 3.7 emits get/set accessors in .d.ts files, which can cause breaking changes for consumers on older versions of TypeScript, such as 3.5 and prior. TypeScript 3.6 users will not be impacted, as that version was future-proofed for this feature.

Let us have a look at the other new features in TypeScript 3.7.

What's new in TypeScript 3.7?

Optional chaining

TypeScript 3.7 implements optional chaining, one of the most highly demanded ECMAScript features, first filed as a suggestion five years ago. Optional chaining lets one write code that immediately stops running an expression if it runs into a null or undefined. The star of the show is the new ?. operator for optional property accesses. Optional chaining also includes two other operations: optional element access, which acts similarly to optional property access but allows access to non-identifier properties (e.g., arbitrary strings, numbers, and symbols), and optional call, which allows one to conditionally call an expression if it is not null or undefined.

Assertion functions

Assertion functions are a specific set of functions that throw an error if something unexpected happens. Assertions in JavaScript are often used to guard against improper types being passed in. Unfortunately, in TypeScript these checks could never be properly encoded. For loosely typed code, this meant TypeScript was checking less, and for slightly conservative code it often forced users to use type assertions. Another alternative was to rewrite the code so that the language could analyze it, but this was not convenient. To solve this, TypeScript 3.7 introduces a new concept called "assertion signatures", which models these assertion functions. The first type of assertion signature ensures that whatever condition is being checked must be true for the remainder of the containing scope. The other type doesn't check a condition, but instead tells TypeScript that a specific variable or property has a different type.

Build-free editing with project references

In TypeScript 3.7, when opening a project with dependencies, TypeScript will automatically use the source .ts/.tsx files instead. This means projects using project references will now see an improved editing experience where semantic operations are up to date.

Website and playground updates

The TypeScript playground now includes new features like quick fixes for errors, dark/high-contrast mode, and automatic type acquisition so you can import other packages. Each feature is explained through interactive code snippets under the "what's new" menu.

Many users and developers are excited to try out TypeScript 3.7: https://twitter.com/kmsaldana1/status/1191768934648729600 https://twitter.com/mgechev/status/1191769805952438272
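Here is a minimal sketch of optional chaining, nullish coalescing, and an assertion function together; the User shape and the assertIsString helper are hypothetical names used only for illustration:

```typescript
interface User {
    name: string;
    address?: {
        city?: string;
    };
}

// ?. short-circuits to undefined when address is null or undefined.
// ?? falls back only on null/undefined (unlike ||, which also
// rejects falsy values such as "" and 0).
const user: User = { name: "Ada" };
const city = user.address?.city ?? "city unknown";
console.log(city); // "city unknown"

// An assertion signature: after this call succeeds, TypeScript
// narrows `value` to string for the rest of the containing scope.
function assertIsString(value: unknown): asserts value is string {
    if (typeof value !== "string") {
        throw new Error("expected a string");
    }
}

const input: unknown = "hello";
assertIsString(input);
console.log(input.toUpperCase()); // input is now typed as string
```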
To know more about the other new features in TypeScript 3.7, read the official release notes.

Read next:

- Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript
- Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
- TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

3 programming languages some people think are dead but definitely aren’t

Richard Gall
24 Oct 2019
11 min read
Recently I looked closely at what it really means when a certain programming language, tool, or trend is declared to be 'dead'. It seems, I argued, that talking about death in respect of different aspects of the tech industry is as much a signal about one's identity and values as a developer as it is an accurate description of a particular 'thing's' reality. To focus on how these debates and conversations play out in practice, I decided to take a look at three programming languages, each of which has been described as dead or dying at some point. What I found might not surprise you, but it nevertheless highlights that the different opinions a certain person or community has about a language reflect their needs and challenges as software engineers.

Is Java dead?

One of the biggest areas of debate in terms of living, thriving or dying is Java. There are a number of reasons for this. The biggest is the simple fact that it's so widely used. With so many developers using the language for a huge range of reasons, it's not surprising to find such a diversity of opinion across its developer community. Another reason is that Java is so well established as a programming language. Although it's a matter of debate whether it's declining or dying, it certainly can't be said to be emerging or growing at any significant pace. Java is part of the industry mainstream now. You'd think that might mean it's holding up. But when you consider that this is an industry that doesn't just embrace change and innovation but depends on it for its value, you can begin to see that Java has occupied a slightly odd space for some time.

Why do people think Java is dead?

Java has been on the decline for a number of years. If you look at the TIOBE index from the mid to late part of this decade, it has been losing percentage points. From May 2016 to May 2017, for example, the language declined 6% - an indication that it's losing mindshare to other languages. A further reason for its decline is the rise of Kotlin. Although Java has long been the defining language of Android development, in recent years its reputation has taken a hit as Kotlin has become more widely adopted. As this Medium article from 2018 argues, it's not necessarily a great idea to start a new Android project with Java. The threat to Java isn't only coming from Kotlin - it's coming from Scala too. Scala is another language based on the JVM (Java Virtual Machine). It supports both object-oriented and functional programming, offers many performance advantages over Java, and is being used for a wide range of use cases - from machine learning to application development.

Reasons why Java isn't dead

Although the TIOBE index has shown Java to be a language in decline, it nevertheless remains comfortably at the top of the table. It might have dropped significantly between 2016 and 2017, but more recently its decline has slowed: it dropped only 0.92% between October 2018 and October 2019. From this perspective, it's simply bizarre to suggest that Java is 'dead' or 'dying': it's de facto the most widely used programming language on the planet. Then factor in everything else that entails: the massive community means more support, and there's an extensive ecosystem of frameworks, libraries, and other tools (note Spring Boot's growth as a response to the microservice revolution). So, while Java's age might seem like a mark against it, it's also a reason why there's still a lot of life in it.
At a more basic level, Java is ubiquitous; it's used inside a massive range of applications. Insofar as it's inside live apps, it's alive. That means Java developers will be in demand for a long time yet.

The verdict: is Java dead or alive?

Java is very much alive and well. But there are caveats: ultimately, it's not a language that's going to help you solve problems in creative or innovative ways. It will allow you to build things and get projects off the ground, but it's arguably a solid foundation on which you will need to build more niche expertise and specialisation to be a really successful engineer.

Is JavaScript dead?

Although Java might be the most widely used programming language in the world, JavaScript is another ubiquitous language that incites a diverse range of opinions and debate. One of the reasons for this is that some people seriously hate JavaScript. The consensus on Java is a low-level murmur of 'it's fine', but with JavaScript things are far more erratic. This is largely because of JavaScript's evolution. For a long time it was playing second fiddle to PHP in the web development arena because it was so unstable - it was treated with a kind of stigma, as if it weren't a 'real language.' Over time that changed, thanks largely to HTML5 and the improved ES6 standard, but there are still many quirks that developers don't like. In particular, JavaScript isn't a nice thing to grapple with if you're used to, say, Java or C. Unlike those languages, it's an interpreted, not a compiled, programming language. So, why do people think it's dead?

Why do people think JavaScript is dead?

There are a number of very different reasons why people argue that JavaScript is dead. On the one hand, the rise of templates and out-of-the-box CMS and eCommerce solutions means the use of JavaScript for 'traditional' web development will become less important. Essentially, the thinking goes, the barrier to entry is lower, which means there will be fewer people using JavaScript for web development. On the other hand, people look at the emergence of Web Assembly as the death knell for JavaScript. Web Assembly (or Wasm) is "a binary instruction format for a stack-based virtual machine" (that's from the project's website), which means that code can be compiled into a binary format that can be read by a browser. This means you can bring high-level languages such as Rust to the browser. To a certain extent, then, you'd think that Web Assembly would lead to the growth of languages that at the moment feel quite niche.

Read next: Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Reasons why JavaScript isn't dead

First, let's counter the arguments above. In the first instance, out-of-the-box solutions are never going to replace web developers. Someone needs to build those products, and even if organizations choose to use them, JavaScript is still a valuable language for customizing and reshaping purpose-built solutions. While the barrier to entry for getting a web project up and running might be getting lower, it's certainly not going to kill JavaScript. Indeed, you could even argue that the pool is growing, as more people start to pick up some of the basic elements of the web. On the Web Assembly issue: this is a slightly more serious threat to JavaScript, but it's important to remember that Web Assembly was never designed to simply ape the existing JavaScript use case.
As this useful article explains: "...They solve two different issues: JavaScript adds basic interactivity to the web and DOM while WebAssembly adds the ability to have a robust graphical engine on the web. WebAssembly doesn't solve the same issues that JavaScript does because it has no knowledge of the DOM. Until it does, there's no way it could replace JavaScript." Web Assembly might even renew faith in JavaScript: by tackling some of the problems that many developers complain about, it means the language can be used for the problems it is better suited to solve. But aside from all that, there is a wealth of other reasons that JavaScript is far from dead. React continues to grow in popularity, as does Node.js - the latter in particular is influential in how it has expanded what's possible with the language, moving it from the browser to the server.

The verdict: is JavaScript dead or alive?

JavaScript is very much alive and well, however much people hate it. With such a wide ecosystem of tools surrounding it, the way that it's used might change, but the language is here to stay and has a bright future.

Is C dead?

C is one of the oldest programming languages around (it's approaching its 50th birthday). It's a language that has helped build the foundations of the software world as we know it today, including just about every operating system. But although it's a fundamental part of the technology landscape, there are murmurs that it's just not up to the job any more...

Why do people think that C is dead?

If you want to get a sense of the division of opinion around C, you could do a lot worse than this article on TechCrunch. "C is no longer suitable for this world which C has built," explains engineer Jon Evans. "C has become a monster. It gives its users far too much artillery with which to shoot their feet off. Copious experience has taught us all, the hard way, that it is very difficult, verging on 'basically impossible,' to write extensive amounts of C code that is not riddled with security holes." The security concerns are reflected elsewhere, with one writer arguing that "no one is creating new unsafe languages. It's not plausible to say that this is because C and C++ are perfect; even the staunchest proponent knows that they have many flaws. The reason that people are not creating new unsafe languages is that there is no demand. The future is safe languages." Added to these concerns is the rise of Rust - it could, some argue, be an alternative to C (and C++) for lower-level systems programming that is more modern, safer, and easier to use.

Reasons why C isn't dead

Perhaps the most obvious reason why C isn't dead is the fact that it's so integral to so much of the software we use today. We're not just talking about your standard legacy systems; C is inside the operating systems that allow us to interface with software and machines. One of the arguments often made against C is that 'the web is taking over', as if software in general were moving up levels of abstraction that make languages at the machine level all but redundant. Aside from that argument being plain stupid (i.e., what's the web built on?), with IoT and embedded computing growing at a rapid rate, these trends are only going to make C more important. To return to our good friend the TIOBE index: C is in second place, the same position it held in October 2018. Like Java, then, it's holding its own in spite of the rumors. Unlike Java, moreover, C's rating has actually increased over the course of a year.
Not a massive amount, admittedly - 0.82% - but a solid performance that suggests it's a long way from dead.

Read next: Why does the C programming language refuse to die?

The verdict: is C dead or alive?

C is very much alive and well. It's old, sure, but it's buried inside too much of our existing software infrastructure to simply be cast aside. This isn't to say it's without flaws. From a security and accessibility perspective, we're likely to see languages like Rust gradually grow in popularity to tackle some of the challenges that C poses. But an equally important point to consider is just how fundamental C is for people who want to really understand programming in depth. Even if it doesn't necessarily have a wide range of use cases, the fact that it can give developers and engineers an insight into how code works at various levels of the software stack means it will always remain a language that demands attention.

Conclusion: listen to multiple perspectives on programming languages before making a judgement

The obvious conclusion to draw from all this is that people should just stop being so damn opinionated. But I don't actually think that's correct: people should keep being opinionated and argumentative. There's no place for snobbery or exclusion, but anyone who has a view on something's value should certainly express it. It helps other people understand the language in a way that's not possible through documentation or more typical learning content. What's important is that we read opinions with a critical eye: what's this person's agenda? What's their background? What are they trying to do? After all, there are things far more important than whether something is dead or alive: building great software we can be proud of is one of them.


Node.js 13 releases with an upgraded V8, full ICU support, stable Worker Threads API and more

Fatema Patrawala
23 Oct 2019
4 min read
Yesterday was a super exciting day for Node.js developers, as the Node.js foundation announced that Node.js 12 transitions to Long Term Support (LTS) alongside the release of Node.js 13. As per the team, Node.js 12 becomes the newest LTS release along with versions 10 and 8. This release marks the transition of Node.js 12.x into LTS with the codename 'Erbium'. The 12.x release line now moves into "Active LTS" and will remain so until October 2020; it will then move into "Maintenance" until its end of life in April 2022.

The new Node.js 13 release delivers faster startup and better default heap limits. It includes updates to V8, TLS, and llhttp, and new features like a diagnostic report, a bundled heap dump capability, and updates to Worker Threads, N-API, and more.

Key features in Node.js 13

V8 gets an upgrade to V8 7.8

This release is compatible with the new version V8 7.8. This new version of the V8 JavaScript engine brings performance tweaks and improvements that keep Node.js up with the ongoing improvements in the language and runtime.

Full ICU enabled by default in Node.js 13

As of Node.js 13, full-icu is now the default, which means hundreds of other locales are supported out of the box. This will simplify development and deployment of applications for non-English deployments.

Stable Worker Threads API

The Worker Threads API is now a stable feature in both Node.js 12 and Node.js 13. While Node.js already performs well with its single-threaded event loop, there are some use cases where additional threads can be leveraged for better results.

New compiler and platform support

Node.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 13, the codebase now requires a minimum of version 10 of the OS X development tools and version 7.2 of the AIX operating system. In addition, there has been progress on supporting Python 3 for building Node.js applications. Systems that have both Python 2 and Python 3 installed will still be able to use Python 2; however, systems with only Python 3 should now be able to build using Python 3.

Developers discuss pain points in Node.js 13

On Hacker News, users discussed various pain points in Node.js 13 and some of the functionality missing from this release. One user commented, "To save you the clicks: Node.js 13 doesn't support top-level await. Node includes V8 7.8, released Sep 27. Top-level await merged into V8 on Sep 24, but didn't make it in time for the 7.8 release." A response came in from the V8 team: "TLA is only in modules. Once node supports modules, it will also have TLA. We're also pushing out a version with 7.9 fairly soonish."

Other users discussed how Node.js performs with TypeScript: "I've been using node with typescript and it's amazing. VERY productive. The key thing is you can do a large refactoring without breaking anything. The biggest challenge I have right now is actually the tooling. Intellij tends to break sometimes. I'm using lerna for a monorepo with sub-modules and it's buggy with regular npm. For example 'npm audit' doesn't work. I might have to migrate to yarn…"
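Returning to the now-stable Worker Threads API, here is a minimal sketch of spawning a worker and passing a message back to the main thread; for simplicity, the file re-runs itself as the worker:

```javascript
// worker-demo.js
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // Spawn this same file as a worker and listen for its message.
  const worker = new Worker(__filename);
  worker.on('message', (msg) => {
    console.log(`main received: ${msg}`);
  });
} else {
  // This branch runs on the worker thread.
  parentPort.postMessage('hello from a separate thread');
}
```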
If you are interested to know more about this release, check out the official Node.js blog post as well as the GitHub page for the release notes.

Read next:

- The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
- 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
- 5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
- Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
- Google is planning to bring Node.js support to Fuchsia


Python 3.8 is now available with walrus operator, positional-only parameters support for Vectorcall, and more

Sugandha Lahoti
15 Oct 2019
6 min read
Yesterday, the latest version of the Python programming language, Python 3.8, was made available with multiple new improvements and features. The features include the new walrus operator and positional-only parameters, runtime audit hooks, Vectorcall (a fast calling protocol for CPython), and more. Earlier this month, the team behind Python announced the release of Python 3.8b2, the second of four planned beta releases.

What's new in Python 3.8

PEP 572: New walrus operator in assignment expressions

Python 3.8 has a new walrus operator := that assigns values to variables as part of a larger expression. It is useful when matching regular expressions where match objects are needed twice. It can also be used with while loops that compute a value to test loop termination and then need that same value again in the body of the loop, and in list comprehensions where a value computed in a filtering condition is also needed in the expression body.

The walrus operator was proposed in PEP 572 (Assignment Expressions) by Chris Angelico, Tim Peters, and Guido van Rossum last year. Since then, it has been heavily discussed in the Python community, with many questioning whether it is a needed improvement. Others are excited, as the operator does make the code more readable. One user commented on HN, "The 'walrus operator' will occasionally be useful, but I doubt I will find many effective uses for it. Same with the forced positional/keyword arguments and the 'self-documenting' f-string expressions. Even when they have a use, it's usually just to save one line of code or a few extra characters." https://twitter.com/reynoldsnlp/status/1183498971425042433 https://twitter.com/jakevdp/status/1140071525870997504

PEP 570: New function parameter syntax for positional-only parameters

Python 3.8 has a new function parameter syntax / to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments. This notation allows pure Python functions to fully emulate the behaviors of existing C-coded functions. It can be used to preclude keyword arguments when the parameter name is not helpful, and it allows the parameter name to be changed in the future without the risk of breaking client code.

As with PEP 572, this proposal got mixed reactions from Python developers. In support, one developer said, "Position-only parameters already exist in cpython builtins like range and min. Making their support at the language level would make their existence less confusing and documented." Others think this will allow authors to "dictate" how their methods can be used: "Not the biggest fan of this one because it allows library authors to overly dictate how their functions can be used, as in, mark an argument as positional merely because they want to. But cool all the same," a Redditor commented.

PEP 578: Python audit hooks and verified open hook

Python 3.8 now has an audit hook and a verified open hook. These hooks allow applications and frameworks written in pure Python code to take advantage of extra notifications. They also allow embedders or system administrators to deploy builds of Python where auditing is always enabled. These are available from Python and native code.

PEP 587: New C API to configure the Python initialization

Though Python is highly configurable, its configuration is scattered all around the code. Python 3.8 adds a new C API to configure the Python initialization, providing finer control over the whole configuration and better error reporting.
This PEP also adds _PyRuntimeState.preconfig (PyPreConfig type) and PyInterpreterState.config (PyConfig type) fields to internal structures. PyInterpreterState.config becomes the new reference configuration, replacing global configuration variables and other private variables.

PEP 590: Provisional support for Vectorcall, a fast calling protocol for CPython

A currently provisional Vectorcall protocol is added to the Python/C API. It is meant to formalize existing optimizations that were already done for various classes. Any extension type implementing a callable can use this protocol. It will be made fully public in Python 3.9.

PEP 574: Pickle protocol 5 supports out-of-band data buffers

Pickle protocol 5 introduces support for out-of-band buffers. This means PEP 3118-compatible data can be transmitted separately from the main pickle stream, at the discretion of the communication layer.

Parallel filesystem cache for compiled bytecode files

There is a new PYTHONPYCACHEPREFIX setting that configures the implicit bytecode cache to use a separate parallel filesystem tree, rather than the default __pycache__ subdirectories within each source directory.

Python uses the same ABI whether it's built in release or debug mode

With Python 3.8, Python uses the same ABI whether it's built in release or debug mode. On Unix, when Python is built in debug mode, it is now possible to load C extensions built in release mode and C extensions built using the stable ABI; import now also looks for such extensions. Also on Unix, C extensions are no longer linked to libpython, except on Android and Cygwin.

f-strings now have a = specifier

Formatted strings (f-strings) were introduced in Python 3.6 with PEP 498. They enable you to evaluate an expression as part of a string, insert the results of function calls, and so on. Python 3.8 adds a = specifier to f-strings for self-documenting expressions and debugging. An f-string such as f'{expr=}' will expand to the text of the expression, an equal sign, then the representation of the evaluated expression.

One developer expressed their delight on Hacker News, "F strings are pretty awesome. I'm coming from JavaScript and partially java background. JavaScript's String concatenation can become too complex and I have difficulty with large strings." Another developer said, "The expansion of f-strings is a welcome addition. The more I use them, the happier I am that they exist." Someone added, "This makes clean string interpolation so much easier to do, especially for print statements. It's almost hard to use python < 3.6 now because of them."

New metadata module

Python 3.8 has a new importlib.metadata module that provides (provisional) support for reading metadata from third-party packages. It can, for instance, extract an installed package's version number, list of entry points, and more.

You can go through the other improved modules, language changes, build and C API changes, and API and feature removals in Python 3.8 on the Python docs. For full details, see the changelog.
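To see a few of these additions side by side, here is a minimal sketch; the data and function names are made up for illustration:

```python
import re

# PEP 572: the walrus operator binds and tests in one expression.
text = "Python 3.8"
if (match := re.search(r"(\d+)\.(\d+)", text)):
    print(f"major version: {match.group(1)}")   # major version: 3

# PEP 570: parameters before the / are positional-only.
def greet(name, /, greeting="hello"):
    return f"{greeting}, {name}"

print(greet("world"))        # OK: "hello, world"
# greet(name="world")        # TypeError: name is positional-only

# The new f-string '=' specifier echoes the expression and its value.
value = 21 * 2
print(f"{value=}")           # prints: value=42
```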
How Quarkus brings Java into the modern world of enterprise tech

Guest Contributor
22 Sep 2019
6 min read
What is old is new again, even - and maybe especially - in the world of technology. To name a few milestones being celebrated this year: Java is roughly 25 years old, it is the 10th anniversary of Minecraft, and Nintendo is back in vogue. None of these three examples is showing signs of slowing down anytime soon. I would argue that they continue to lead in innovation because of the simple fact that there are still people behind them creatively breathing new life into what otherwise could have been "been there, done that" technologies.

Java, in particular, is so widely used that from an enterprise efficiency perspective it simply does not make sense NOT to have Java be a key language in the development of emerging tech apps. In fact, more and more apps are being developed with a Java-first approach. But how can this be done, especially when apps are being built using new architectures like serverless and microservices? One technology that shows promise is Quarkus, a newly introduced Kubernetes-native platform that addresses many of the barriers hindering Java's ability to shine in the modern world of emerging tech.

Why does Java still matter

Even though its continued relevance has been questioned for years, I believe Java still matters and is not likely to go away anytime soon, for two reasons. First, there is a whole list of programming languages that are based on Java and the Java Virtual Machine (JVM), such as Kotlin, Groovy, Clojure, and JRuby. Java also continues to be one of the most popular programming languages for Android apps, as well as for the development of edge devices and the internet of things. In fact, according to SlashData's State of the Developer Nation Q4 2018 report, there are 7.6 million active Java developers worldwide.

Other factors contributing to Java's continued popularity include network portability, the fact that it is object-oriented, that it converts data to bytecode which can be read and run on any platform with a JVM installed, and, maybe most importantly, a syntax similar to C++, making it a relatively easy language for developers to learn.

Additionally, SlashData's research suggested that newer and niche languages do not seem to be adding many new developers, if any, per year, begging the question of whether it is easy for newer languages to scale beyond their niche and become the next big thing. It also makes it clear that while there is value in newer programming languages that serve a narrower purpose, they may not be able to, or need to, overtake languages like Java. In fact, the success of Java relies on the entire ecosystem surrounding it, including the editors, third-party libraries, CI/CD pipelines, and systems. Each aspect of that ecosystem is easy to take for granted in established languages but has to be created from scratch in new languages if they want to catch up to or overtake Java.

How Quarkus brings Java into modern enterprise tech

Quarkus is more than just a cool name. It is a Kubernetes-native Java framework tailored for GraalVM and HotSpot, crafted from best-of-breed Java libraries and standards. The overall goal of Quarkus is to make Java one of the leading platforms in Kubernetes and serverless environments, while also enabling developers to work within what they know, in both reactive and imperative programming models.
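For a sense of what that looks like in practice, a minimal Quarkus HTTP endpoint is plain JAX-RS code. This sketch assumes the quarkus-resteasy extension and the javax namespace Quarkus used at the time; GreetingResource is our own illustrative name, not from the article.

package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class GreetingResource {

    // Served by Quarkus on startup; the same code can be compiled
    // to a native executable via GraalVM (./mvnw package -Pnative).
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello from Quarkus";
    }
}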
Put simply, Quarkus works to bring Java into the modern microservices and serverless modes of developing. This is important because Java continues to be a top programming language for back-end enterprise developers. Many organizations have tied both time and money into Java, which has been a dominant force in the development landscape for a number of years. As enterprises increasingly shift toward cloud computing, it is important for Java to carry over into these new programming methods.

Why a "Java first" approach

Java has been a top programming language for enterprises for over a decade. We should not lose sight of that fact, nor of the many developers with excellent Java skills and the existing applications that run on Java. Furthermore, because Java has been around so long, it has matured not only as a language but also as an ecosystem. There are editors, logging systems, debuggers, build systems, unit testing environments, QA testing environments, and more, all tuned for Java, if not also implemented in Java. Therefore, when starting a new Java application it can be easier to find third-party components or entire systems that help the developer gain productivity advancements over other languages that have not yet grown to the breadth and depth of the Java ecosystem.

Using a full-stack framework such as Quarkus, and taking advantage of libraries that use Java, such as Eclipse MicroProfile and Eclipse Vert.x, makes this easier, and also encourages the use of different combinations of tools and dependencies. Quarkus in particular includes an extension framework that third-party authors can use to build native executables and expand the functionality of Java in the enterprise. Quarkus not only brings Java into the modern world of containers, it does so quickly, with short start-up times.

Java is not looking like it will go away anytime soon. Between the number of developers who still use Java as their first language and the number of apps that run almost entirely on it, Java's stake in the game is as solid as ever. Through new tools like Quarkus, it can continue to evolve in the modern app dev world.

Author Bio

Mark Little works at Red Hat, where he leads JBoss technical direction and research & development. Prior to this, he was SOA Technical Development Manager and Director of Standards. He also has experience with two successful startup companies.

Other interesting news in Tech

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data and Society report

Open AI researchers advance multi-agent competition by training AI agents in a hide and seek environment

France and Germany reaffirm blocking Facebook's Libra cryptocurrency
Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says “yes and no”

Bhagyashree R
10 Sep 2019
6 min read
At Scala Days Lausanne 2019 in July, Martin Odersky, the lead designer of Scala, gave a tour of the upcoming major version, Scala 3.0. He talked about the roadmap to Scala 3.0, its new features, how its situation differs from Python 2 vs 3, and much more.

Roadmap to Scala 3.0

Odersky announced that "Scala 3.0 has almost arrived," since all the features are fleshed out, with implementations in the latest releases of Dotty, the next-generation compiler for Scala. The team plans to go into feature freeze and release Scala 3.0 M1 in fall this year. Following that, the team will focus on stabilization, completing the SIP process, and writing specs and user docs. They will also work on community build, compatibility, and migration tasks. All these tasks will take about a year, so we can expect the Scala 3.0 release in fall 2020.

Scala 2.13 was released in June this year. It shipped with redesigned collections, an updated futures implementation, and language changes including literal types, partial unification on by default, by-name implicits, and macro annotations, among others. The team is also working simultaneously on its next release, Scala 2.14, whose main focus will be easing the migration from Scala 2 to 3 by defining migration tools, shim libraries, targeted deprecations, and more.

What's new in this major release

There is a whole lot of improvement coming in Scala 3.0, some of which Odersky discussed in his talk (a short sketch of several of these features follows this list):

- Scala will drop the 'new' keyword: Starting with Scala 3.0, you will be able to omit 'new' from almost all instance creations. With this change, developers will no longer have to define a case class just to get nice constructor calls. It also prevents accidental infinite loops in cases where 'apply' has the same arguments as the constructor.
- Top-level definitions: In Scala 3.0, top-level definitions will be added as a replacement for package objects. This is because only one package object definition is allowed per package. Also, a trait or class defined in the package object is different from one defined in the package, which can lead to unexpected behavior.
- Redesigned enumeration support: Previously, Scala did not provide a very straightforward way to define enums. With this update, developers get a simple way to define new types with a finite number of values or constructions. They will also be able to add parameters and define fields and methods.
- Union types: In previous versions, union types could be emulated with constructs such as Either or subtyping hierarchies, but these constructs are bulkier. Adding union types to the language fixes Scala's least-upper-bound problem and provides added modeling power.
- Extension methods: With extension methods, you can define methods that can be used infix without any boilerplate. These will essentially replace implicit classes.
- Delegates: Implicits are a "bedrock of programming" in Scala. However, they suffer from several limitations. Odersky calls implicit conversions a "recipe for disaster" because they tend to interact very badly with each other and add too much implicitness. Delegates will be their simpler and safer alternative.
- Functions everywhere: In Scala, functions and methods are two different things. While methods are members of classes and objects, functions are objects themselves. Until now, methods were more powerful than functions: they can be dependent, polymorphic, and implicit. With Scala 3.0, these properties will be associated with functions as well.
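A compact sketch of several of these features, written in the syntax that eventually shipped in Scala 3; the names Color, describe, and shout are our own illustrations, not from the talk.

// Redesigned enums: parameterized, with a finite set of values.
enum Color(val rgb: Int):
  case Red   extends Color(0xFF0000)
  case Green extends Color(0x00FF00)
  case Blue  extends Color(0x0000FF)

// Union types: x is either an Int or a String, no Either needed.
def describe(x: Int | String): String = x match
  case i: Int    => s"number $i"
  case s: String => s"text $s"

// Extension methods replace implicit classes.
extension (s: String) def shout: String = s.toUpperCase + "!"

@main def demo(): Unit =
  println(describe(42))   // number 42
  println("scala".shout)  // SCALA!
  // 'new' can be omitted thanks to creator applications:
  val sb = StringBuilder("no 'new' needed")
  println(sb.result())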
Recent discussions regarding the updates in Scala 3.0

A minimal alternative for scala-reflect and TypeTag

Scala 3.0 will drop support for 'scala-reflect' and 'TypeTag', and there has not been much discussion about an alternative. However, some developers believe it is an important feature, currently in use by many projects including Spark and doobie. Explaining the reason behind dropping the support, a SIP committee member wrote on the discussion forum, "The goal in Scala 3 is what we had before scala-reflect. For use-cases where you only need an 80% solution, you should be able to accomplish that with straight-up Java reflection. If you need more, TASTY can provide you the basics. However, we don't think a 100% solution is something folks need, and it's unclear if there should be a "core" implementation that is not 100%."

Odersky shared that Scala 3.0 has quoted.Type as an alternative to TypeTag. He commented, "Scala 3 has the quoted package, with quoted.Expr as a representation of expressions and quoted.Type as a representation of types. quoted.Type essentially replaces TypeTag. It does not have the same API but has similar functionality. It should be easier to use since it integrates well with quoted terms and pattern matching." Follow the discussion on Scala Contributors.

Python-style indentation (for now experimental)

Last month, Odersky proposed bringing indentation-based syntax to Scala while continuing to support the brace-based one. When Scala was first created, most languages used braces; since then, indentation-based syntax has become conventional. Listing the reasons behind this change, Odersky wrote:

- The most widely taught language is now (or will be soon, in any case) Python, which is indentation based.
- Other popular functional languages are also indentation based (e.g. Haskell, F#, Elm, Agda, Idris).
- Documentation and configuration files have shifted from HTML and XML to markdown and yaml, which are both indentation based.

So by now indentation is very natural, even obvious, to developers. There's a chance that anything else will increasingly be considered "crufty".

Odersky on whether Scala 3.0 is a new language

Odersky answers this with both yes and no. Yes, because Scala 3.0 will include several language changes, including feature removals; the new constructs will improve user experience and on-boarding dramatically; and current Scala books will need rewriting to reflect the recent developments. No, because it will still be Scala and all core constructs will still be the same. He concludes, "Between yes and no, I think the fairest answer is to say it is really a process. Scala 3 keeps most constructs of Scala 2.13, alongside the new ones. Some constructs like old implicits and so on will be phased out in the 3.x release train. So, that requires some temporary duplication in the language, but the end result should be a more compact and regular language."

Comparing with Python 2 and 3, Odersky believes that Scala's situation is better because of static typing and binary compatibility. The current version of Dotty can be linked with Scala 2.12 or 2.13 files. He shared that in the future, it will be possible to have a Dotty library module that can be used by both Scala 2 and 3 modules.

Read also: Core Python team confirms sunsetting Python 2 on January 1, 2020

Watch Odersky's talk to know more in detail.
https://www.youtube.com/watch?v=_Rnrx2lo9cw&list=PLLMLOC3WM2r460iOm_Hx1lk6NkZb8Pj6A

Other news in programming

Implementing memory management with Golang's garbage collector

Golang 1.13 module mirror, index, and Checksum database are now production-ready

Why Perl 6 is considering a name change?
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift

Bhagyashree R
09 Sep 2019
5 min read
After working for over 1.5 years on the Differentiable Programming Mega-Proposal, Richard Wei, a developer at Google Brain, and his team submitted the proposal to the Swift Evolution forum on Thursday last week. The proposal aims to "push Swift's capabilities to the next level in numerics and machine learning" by introducing differentiable programming as a new language feature in Swift. It is part of the Swift for TensorFlow project, under which the team is integrating TensorFlow directly into the language to offer developers a next-generation platform for machine learning.

What is differentiable programming

With the increasing sophistication of deep learning models and the introduction of modern deep learning frameworks, many researchers have started to realize that building neural networks is very similar to programming. Yann LeCun, VP and Chief AI Scientist at Facebook, calls differentiable programming "a little more than a rebranding of the modern collection Deep Learning techniques, the same way Deep Learning was a rebranding of the modern incarnations of neural nets with more than two layers." He compares it with regular programming, with the only difference being that the resulting programs are "parameterized, automatically differentiated, and trainable/optimizable."

Many also say that differentiable programming is another name for automatic differentiation, a collection of techniques to numerically evaluate the derivative of a function. It can be seen as a new programming paradigm in which programs can be differentiated throughout. Check out the paper "Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagation" to get a better understanding of differentiable programming.

Why differentiable programming is proposed in Swift

Swift is an expressive, high-performance language, which makes it a perfect candidate for numerical applications. According to the proposal authors, first-class support for differentiable programming in Swift will allow safe and powerful machine learning development. The authors also believe this is a "big step towards high-level numerical computing support" and aim to make Swift a "real contender in the numerical computing and machine learning landscape."

Here are some of the advantages of adding first-class support for differentiable programming in Swift:

- Better language coverage: First-class differentiable programming support will enable differentiation to work smoothly with other Swift features. This will allow developers to code normally without being restricted to a subset of Swift.
- Enable extensibility: This will provide developers an "extensible differentiable programming system." They will be able to create custom differentiation APIs by leveraging primitive operators defined in the standard library and supported by the type system.
- Static warnings and errors: This will enable the compiler to statically identify functions that cannot be differentiated or that will give a zero derivative, and then emit a non-differentiability error or warning. This improves productivity by making common runtime errors in machine learning directly debuggable without library boundaries.

Some of the components that will be added to Swift under this proposal are:

- The Differentiable protocol: a standard library protocol that generalizes all data structures that can be a parameter or result of a differentiable function.
- The @differentiable declaration attribute: used to mark function-like declarations as differentiable.
- The @differentiable function types: a subtype of normal function types, with a different runtime representation and calling convention. Differentiable function types have differentiable parameters and results.
- Differential operators: the core differentiation APIs that take @differentiable functions as inputs and return derivative functions or compute derivative values.
- @differentiating and @transposing attributes: attributes for declaring a custom derivative function for some other function declaration.

A sketch of what this surface syntax could look like appears at the end of this article.

This proposal sparked a discussion on Hacker News. Many developers were excited about bringing differentiable programming support into the Swift core. A user commented, "This is actually huge. I saw a proof of concept of something like this in Haskell a few years back, but it's amazing to see it (probably) making it into the core of a mainstream language. This may let them capture a large chunk of the ML market from Python - and hopefully, greatly improve ML APIs while they're at it."

Some felt that a library could have served the purpose. "I don't see why a well-written library could not serve the same purpose. It seems like a lot of cruft. I doubt, for example, Python would ever consider adding this and it's the de facto language that would benefit the most from something like this - due to the existing tools and communities. It just seems so narrow and not at the same level of abstraction that languages typically sit at. I could see the language supporting higher-level functionality so a library could do this without a bunch of extra work (such as by some reflection)," a user added.

Users also discussed another effort along the same lines: Julia Zygote, a working prototype for source-to-source automatic differentiation. A user commented, "Yup, work is continuing apace with Julia's next-gen Zygote project. Also, from the GP's thought about applications beyond DL, my favorite examples so far are for model-based RL and Neural ODEs."

To know more in detail, check out the proposal: Differentiable Programming Mega-Proposal.

Other news in programming

Why Perl 6 is considering a name change?

The Julia team shares its finalized release process with the community

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more
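The sketch promised above, based on the Mega-Proposal and the experimental Swift for TensorFlow toolchains. The function square is our own example, and the module name and the exact gradient(at:in:) signature vary across toolchain snapshots, so treat this as illustrative rather than shipping API.

import _Differentiation  // experimental; module name varies by toolchain

// @differentiable asks the compiler to synthesize a derivative.
@differentiable
func square(_ x: Float) -> Float {
    return x * x
}

// A differential operator evaluates the derivative at a point.
let dydx = gradient(at: 3) { x in square(x) }
print(dydx)  // 6.0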
Go 1.13 releases with error wrapping, TLS 1.3 enabled by default, improved number literals, and more

Bhagyashree R
04 Sep 2019
5 min read
Yesterday, the Go team announced the release of Go 1.13. This version comes with uniform and modernized number literals, support for error wrapping, TLS 1.3 on by default, improved modules support, and much more. The go command now also uses the module mirror and checksum database by default.

Want to learn Go? Get started or get up to date with our new edition of Mastering Go. Learn more.

https://twitter.com/golang/status/1168966214108033029

Key updates in Go 1.13

TLS 1.3 enabled by default

Go 1.13 comes with Transport Layer Security (TLS) 1.3 support in the crypto/tls package enabled by default. If you wish to disable it, add tls13=0 to the GODEBUG environment variable. The team shared that the option to opt out will be removed in Go 1.14.

Uniform and modernized number literal prefixes

Go 1.13 introduces optional prefixes for binary and octal integer literals, hexadecimal floating-point literals, and imaginary literals (a runnable sketch appears at the end of this section):

- The 0b or 0B prefix indicates a binary integer literal, for instance 0b1011.
- The 0o or 0O prefix indicates an octal integer literal, such as 0o660. The leading-0 octal notation from previous versions remains valid.
- The 0x or 0X prefix can now also represent the mantissa of a floating-point number in hexadecimal format, such as 0x1.0p-1021.
- The imaginary suffix (i) can be used with any integer or floating-point literal.

To improve readability, the digits of any number literal can now be separated with underscores, for instance 1_000_000 or 0b_1010_0110. An underscore can also appear between the literal prefix and the first digit.

Read also: The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Go 1.13 features module mirror to download modules

Starting with Go 1.13, the go command downloads and authenticates modules using the module mirror and checksum database by default. A module mirror is a module proxy that fetches modules from origin servers and then caches them for use in future requests. This enables the mirror to serve the source code even when the origin servers are down. The go command will use the module mirror maintained by the Go team, which is served at https://proxy.golang.org. In case you are using an earlier version of the go command, you can use this service by setting GOPROXY=https://proxy.golang.org in your local environment.

Checksum database to authenticate modules

The go command maintains two Go module files called go.mod and go.sum. Introduced in version 1.11, Go modules are an alternative to GOPATH with support for versioning and package distribution. They are basically a collection of Go packages stored in a file tree with a go.mod file at its root. The go.sum file consists of SHA-256 hashes of the source code that the go command can use to detect misbehavior by an origin server or proxy.

However, the drawback of this method is that it "works entirely by the trust on your first use." When a version of a dependency is added for the first time, the go command fetches the code and adds lines to the go.sum file on the fly. "The problem is that those go.sum lines aren't being checked against anyone else's: they might be different from the go.sum lines that the go command just generated for someone else, perhaps because a proxy intentionally served malicious code targeted to you," the team explains. To solve this, the Go team has come up with a global source of go.sum lines, which they call a checksum database. This ensures that the go command always adds the same lines to everyone's go.sum file.
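The number literal sketch promised above; the variable names are our own:

package main

import "fmt"

func main() {
	b := 0b1011      // binary literal: 11
	o := 0o660       // octal literal: 432
	h := 0x1.0p-2    // hexadecimal float: 1.0 * 2^-2 = 0.25
	big := 1_000_000 // underscores improve readability
	fmt.Println(b, o, h, big)
}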
So, whenever the go command receives new source code, it can verify its hash against the global database, which is served by sum.golang.org.

Read also: Golang 1.13 module mirror, index, and Checksum database are now production-ready

Support for error wrapping

As per the error values proposal, Go 1.13 comes with support for error wrapping, providing a standard way to wrap errors in the standard library. An error can now wrap another error by defining an Unwrap method that returns the wrapped error. Explaining how this works, the team wrote, "An error e can wrap another error w by providing an Unwrap method that returns w. Both e and w are available to programs, allowing e to provide additional context to w or to reinterpret it while still allowing programs to make decisions based on w."

To support this behavior, the fmt.Errorf function has a new %w verb for creating wrapped errors, and there are three new functions in the errors package, namely errors.Unwrap, errors.Is, and errors.As, to simplify unwrapping and inspecting wrapped errors (a short sketch appears at the end of this piece).

These were some of the updates in Go 1.13. To read the entire list of features, check out the official release notes.

What's new in programming languages

The Julia team shares its finalized release process with the community

Introducing Nushell: A Rust-based shell

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more
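The error-wrapping sketch promised above; loadConfig is our own illustrative function, not from the release notes:

package main

import (
	"errors"
	"fmt"
	"os"
)

func loadConfig(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// %w wraps err so that callers can still inspect it.
		return fmt.Errorf("loading config %q: %w", path, err)
	}
	defer f.Close()
	return nil
}

func main() {
	err := loadConfig("/no/such/file")
	fmt.Println(err)                            // the wrapped message
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true
	var pathErr *os.PathError
	if errors.As(err, &pathErr) {
		fmt.Println("failed op:", pathErr.Op) // open
	}
}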
Implementing memory management with Golang's garbage collector

Packt Editorial Staff
03 Sep 2019
10 min read
Have you ever wondered how bulk messages are pushed in real time so fast? How is that possible? A low-latency garbage collector (GC) plays an important role here. In this article, we present ways to look at certain parameters to implement memory management with the Golang GC. Garbage collection is the process of freeing up memory space that is not being used. In other words, the GC sees which objects are out of scope and cannot be referenced anymore and frees the memory space they consume. This process happens concurrently while a Go program is running, not before or after the execution of the program.

This article is an excerpt from the book Mastering Go - Third Edition by Mihalis Tsoukalos. Mihalis runs through the nuances of Go, with deep guides to types and structures, packages, concurrency, network programming, compiler design, optimization, and more.

Implementing the Golang GC

The Go standard library offers functions that allow you to study the operation of the GC and learn more about what the GC does secretly. These functions are illustrated in the gColl.go utility. The source code of gColl.go is presented here in chunks.

package main

import (
    "fmt"
    "runtime"
    "time"
)

You need the runtime package because it allows you to obtain information about the Go runtime system, which, among other things, includes the operation of the GC.

func printStats(mem runtime.MemStats) {
    runtime.ReadMemStats(&mem)
    fmt.Println("mem.Alloc:", mem.Alloc)
    fmt.Println("mem.TotalAlloc:", mem.TotalAlloc)
    fmt.Println("mem.HeapAlloc:", mem.HeapAlloc)
    fmt.Println("mem.NumGC:", mem.NumGC, "\n")
}

The purpose of the printStats() function is to avoid writing the same Go code all the time. The runtime.ReadMemStats() call gets the latest garbage collection statistics for you.

func main() {
    var mem runtime.MemStats
    printStats(mem)

    for i := 0; i < 10; i++ {
        // Allocating 50,000,000 bytes
        s := make([]byte, 50000000)
        if s == nil {
            fmt.Println("Operation failed!")
        }
    }
    printStats(mem)

In this part, we have a for loop that creates 10 byte slices of 50,000,000 bytes each. The reason for this is that by allocating large amounts of memory, we can trigger the GC.

    for i := 0; i < 10; i++ {
        // Allocating 100,000,000 bytes
        s := make([]byte, 100000000)
        if s == nil {
            fmt.Println("Operation failed!")
        }
        time.Sleep(5 * time.Second)
    }
    printStats(mem)
}

The last part of the program makes even bigger memory allocations; this time, each byte slice has 100,000,000 bytes. Running gColl.go on a macOS Big Sur machine with 24 GB of RAM produces the following kind of output:

$ go run gColl.go
mem.Alloc: 124616
mem.TotalAlloc: 124616
mem.HeapAlloc: 124616
mem.NumGC: 0

mem.Alloc: 50124368
mem.TotalAlloc: 500175120
mem.HeapAlloc: 50124368
mem.NumGC: 9

mem.Alloc: 122536
mem.TotalAlloc: 1500257968
mem.HeapAlloc: 122536
mem.NumGC: 19

The value of mem.Alloc is the bytes of allocated heap objects; "allocated" covers all the objects that the GC has not yet freed. mem.TotalAlloc shows the cumulative bytes allocated for heap objects; this number does not decrease when objects are freed, which means that it keeps increasing. It therefore shows the total number of bytes allocated for heap objects during program execution. mem.HeapAlloc is the same as mem.Alloc. Last, mem.NumGC shows the total number of completed garbage collection cycles.
The bigger that value is, the more you have to consider how you allocate memory in your code and whether there is a way to optimize it.

If you want even more verbose output regarding the operation of the GC, you can combine go run gColl.go with GODEBUG=gctrace=1. Apart from the regular program output, you get some extra metrics, as illustrated in the following output:

$ GODEBUG=gctrace=1 go run gColl.go
gc 1 @0.021s 0%: 0.020+0.32+0.015 ms clock, 0.16+0.17/0.33/0.22+0.12 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 2 @0.041s 0%: 0.074+0.32+0.003 ms clock, 0.59+0.087/0.37/0.45+0.030 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
. . .
gc 18 @40.152s 0%: 0.065+0.14+0.013 ms clock, 0.52+0/0.12/0.042+0.10 ms cpu, 95->95->0 MB, 96 MB goal, 8 P
gc 19 @45.160s 0%: 0.028+0.12+0.003 ms clock, 0.22+0/0.13/0.081+0.028 ms cpu, 95->95->0 MB, 96 MB goal, 8 P
mem.Alloc: 120672
mem.TotalAlloc: 1500256376
mem.HeapAlloc: 120672
mem.NumGC: 19

Now, let us explain the 95->95->0 MB triplet in the previous line of output. The first value (95) is the heap size when the GC is about to run. The second value (95) is the heap size when the GC ends its operation. The last value (0) is the size of the live heap.

Go garbage collection is based on the tricolor algorithm

The operation of the Go GC is based on the tricolor algorithm, which is the subject of this subsection. Note that the tricolor algorithm is not unique to Go and can be used in other programming languages as well. Strictly speaking, the official name for the algorithm used in Go is the tricolor mark-and-sweep algorithm. It can work concurrently with the program and uses a write barrier. This means that when a Go program runs, the Go scheduler is responsible for scheduling both the application and the GC, as if the Go scheduler had to deal with a regular application with multiple goroutines.

The core idea behind this algorithm came from Edsger W. Dijkstra, Leslie Lamport, A. J. Martin, C. S. Scholten, and E. F. M. Steffens and was first illustrated in a paper named On-the-Fly Garbage Collection: An Exercise in Cooperation. The primary principle behind the tricolor mark-and-sweep algorithm is that it divides the objects of the heap into three different sets according to their color, which is assigned by the algorithm. Now for the meaning of each color set. The objects of the black set are guaranteed to have no pointers to any object of the white set. However, an object of the white set can have a pointer to an object of the black set, because this has no effect on the operation of the GC. The objects of the gray set might have pointers to some objects of the white set. Finally, the objects of the white set are the candidates for garbage collection.

So, when the garbage collection begins, all objects are white, and the GC visits all the root objects and colors them gray. The roots are the objects that can be directly accessed by the application, which includes global variables and other things on the stack. These objects mostly depend on the Go code of a program. After that, the GC picks a gray object, makes it black, and starts looking at whether that object has pointers to other objects of the white set. When an object of the gray set is scanned for pointers to other objects, it is colored black. If that scan discovers that this particular object has one or more pointers to a white object, it puts that white object in the gray set. This process keeps going for as long as objects exist in the gray set.
After that, the objects in the white set are unreachable and their memory space can be reused. Therefore, at this point, the elements of the white set are said to be garbage collected.

Please note that no object can go directly from the black set to the white set, which allows the algorithm to operate and be able to clear the objects of the white set. As mentioned before, no object of the black set can directly point to an object of the white set. Additionally, if an object of the gray set becomes unreachable at some point in a garbage collection cycle, it will not be collected in this garbage collection cycle but in the next one. Although this is not an optimal situation, it is not that bad.

During this process, the running application is called the mutator. The mutator runs a small function named the write barrier that is executed each time a pointer in the heap is modified. If the pointer of an object in the heap is modified, which means that this object is now reachable, the write barrier colors it gray and puts it in the gray set. The mutator is responsible for maintaining the invariant that no element of the black set has a pointer to an element of the white set. This is accomplished with the help of the write barrier function. Failing to maintain this invariant will ruin the garbage collection process and will most likely crash your program in a pretty bad and undesirable way!

So, there are three different colors: black, white, and gray. When the algorithm begins, all objects are colored white. As the algorithm keeps going, white objects are moved into one of the other two sets. The objects that are left in the white set are the ones that are going to be cleared at some point. The next figure displays the three color sets with objects in them.

Figure 1: The Go GC represents the heap of a program as a graph

In the presented graph, you can see that while object E, which is in the white set, can access object F, it cannot be accessed by any other object, because no other object points to object E, which makes it a perfect candidate for garbage collection! Additionally, objects A, B, and C are root objects and are always reachable; therefore, they cannot be garbage collected.

Graph comprehended

Can you guess what will happen next in that graph? Well, it is not that difficult to realize that the algorithm will have to process the remaining elements of the gray set, which means that both objects A and F will go to the black set. Object A will go to the black set because it is a root element, and F will go to the black set because it does not point to any other object while it is in the gray set. After object E is garbage collected, object F will become unreachable and will be garbage collected in the next cycle of the GC, because an unreachable object cannot magically become reachable in the next iteration of the garbage collection cycle.

Note: Go garbage collection can also be applied to variables such as channels. When the GC finds out that a channel is unreachable, that is, when the channel variable cannot be accessed anymore, it will free its resources even if the channel has not been closed.

Go allows you to manually initiate a garbage collection by putting a runtime.GC() statement in your Go code. However, keep in mind that runtime.GC() will block the caller, and it might block the entire program, especially if you are running a very busy Go program with many objects.
This mainly happens because you cannot perform garbage collection while everything else is rapidly changing, as this would not give the GC the opportunity to clearly identify the members of the white, black, and gray sets. This garbage collection state is also called a garbage collection safe-point.

You can find the long and relatively advanced Go code of the GC at https://github.com/golang/go/blob/master/src/runtime/mgc.go, which you can study if you want to learn even more about the garbage collection operation. You can even make changes to that code if you are brave enough!

Understanding Go Internals: defer, panic() and recover() functions [Tutorial]

Implementing hashing algorithms in Golang [Tutorial]

Is Golang truly community driven and does it really matter?
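As a small companion to the discussion of runtime.GC() above, here is a minimal sketch of our own (not from the book) that forces a collection and observes the effect through runtime.MemStats:

package main

import (
	"fmt"
	"runtime"
)

var sink []byte // package-level, so the allocation cannot be optimized away

func main() {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)

	sink = make([]byte, 100_000_000) // allocate 100 MB on the heap
	sink = nil                       // drop the only reference

	runtime.GC() // blocks the caller until the collection completes
	runtime.ReadMemStats(&after)
	fmt.Printf("completed GC cycles: %d -> %d\n", before.NumGC, after.NumGC)
}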
“Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett

Bhagyashree R
27 Aug 2019
10 min read
At Open Source Technology Summit (OSTS) 2019, Josh Triplett, a Principal Engineer at Intel, gave an insight into what Intel is contributing to bring the most loved language, Rust, to full parity with C. In his talk titled Intel and Rust: the Future of Systems Programming, he also spoke about the history of systems programming, how C became the "default" systems programming language, what features give Rust an edge over C, and much more.

Until now, OSTS was Intel's closed event, where the company's business and tech leaders came together to discuss the trends, technologies, and innovations that will help shape the open-source ecosystem. This year was different, as the company welcomed non-Intel attendees including media, partners, and developers for the first time. The event hosts keynotes, more than 50 technical sessions, panels, and demos covering all the open-source technologies Intel is involved in: integrated software stacks (edge, AI, infrastructure), firmware, embedded and IoT projects, and cloud system software. This year the event took place from May 14-16 at Stevenson, Washington.

What is systems programming

Systems programming is the development and management of software that serves as a platform for other software to be built upon. System software also directly or closely interfaces with computer hardware in order to gain necessary performance and expose abstractions. Unlike application programming, where software is created to provide services to the user, systems programming aims to produce software that provides services to the computer hardware. Triplett broadly defines systems programming as "anything that isn't an app." It includes things like BIOS, firmware, boot loaders, operating system kernels, embedded and similar types of low-level code, and virtual machine implementations. Triplett also counts a web browser as system software, as it is more than "just an app": browsers are actually "platforms for websites and web apps," he says.

How C became the "default" systems programming language

Previously, most system software, including BIOS, boot loaders, and firmware, was written in Assembly. In the 1960s, experiments to bring hardware support to high-level languages started, resulting in the creation of languages such as PL/S, BLISS, BCPL, and extended ALGOL. Then, in the 1970s, Dennis Ritchie created the C programming language for the Unix operating system. Derived from the typeless B programming language, C was packed with powerful high-level functionality and detailed features that were well suited for writing an operating system. Several UNIX components, including its kernel, were eventually rewritten in C. Many other system software projects, including the Oracle database, a large portion of the Windows source code, and the Linux operating system, were also written in C.

C saw huge adoption at this point. But what exactly made developers comfortable moving to C? Triplett believes that in order to make this move from one language to another, developers have to be comfortable in terms of two things: features and parity. First, the language should offer "sufficiently compelling" features. "It can't just be a little bit better. It has to be substantially better to warrant the effort and engineering time needed to move," he adds. As compared to Assembly, C had a lot to offer. It had some degree of type safety, provided portability, and enabled better productivity with high-level constructs and much more readable code.
Second, the language has to provide parity, which means developers had to be confident that it was no less capable than Assembly. He states, "It can't just be better, it also has to be no worse." In addition to being fast and able to express any type of data that Assembly could, C also had what Triplett calls an "escape hatch": it allowed developers to make the move incrementally, and to combine Assembly where required. Triplett believes that C is now becoming what Assembly was years ago. "C is the new Assembly," he concludes. Developers are looking for a high-level language that not only addresses the problems in C that cannot be fixed, but also offers other exciting features. Such a language, one compelling enough to make developers move away from C, should be memory safe, provide automatic memory management, security, and much more.

"Any language that wants to be better than C has to offer a lot more than just protection from buffer overflows if it's actually going to be a compelling alternative. People care about usability and productivity. They care about writing code that is self-explanatory, which accomplishes more work in less code. It also needs to address security issues. Usability and productivity go hand in hand with security. The less code you need to write to accomplish something, the less chance you have of introducing bugs, security bugs or otherwise," he explains.

Comparing Rust with C

Back in 2006, Graydon Hoare, a Mozilla employee, started writing Rust as a personal project. In 2009, Mozilla started sponsoring the project and expanded the team to drive further development of the language. One of the reasons Mozilla got interested is that Firefox was written in more than 4 million lines of C++ code and had quite a few highly critical vulnerabilities. Rust was built with safety and concurrency in mind, making it the perfect choice for rewriting many components of Firefox under Project Quantum. Mozilla is also using Rust to develop Servo, an HTML rendering engine that will eventually replace Firefox's rendering engine. Many other companies have started using Rust for their projects as well, including Microsoft, Google, Facebook, Amazon, Dropbox, Fastly, Chef, and Baidu.

Rust addresses the memory management problem in C. It offers automatic memory management, so developers do not have to manually call free on every object. What sets it apart from other modern languages is that it does not have a garbage collector or runtime system of any kind. Rust instead has the concepts of ownership, borrowing, references, and lifetimes. "Rust has a system of declaring whether any given use of an object is the owner of that object or whether it's just borrowing that object temporarily. If you're just borrowing an object the compiler will keep track of that. It'll make sure that the original sticks around as long as you reference it. Rust makes sure that the owner of the object frees it when it's done and it inserts the call to free at compile time with no extra runtime overhead," Triplett explains.

Not having a runtime is also a plus for Rust. Triplett believes that languages with a runtime are difficult to use as systems programming languages. He adds, "You have to initialize that runtime before you can call any code, you have to use that runtime to call functions, and the runtime itself might run extra code behind your back at unexpected times." Rust also aims to provide safe concurrent programming.
The same features that make it memory safe also keep track of things like which thread owns which object, which objects can be passed between threads, and which objects require acquiring locks. These features make Rust compelling enough for developers to choose it for systems programming. However, on the second criterion, Rust does not yet have full parity with C. "Achieving parity with C is exactly what got me involved in Rust," says Triplett.

Teaching Rust about C compatible unions

Triplett's first contribution to the Rust programming language came in the form of RFC 1444, which was started in 2015 and accepted in 2016. This RFC proposed native support for C-compatible unions in Rust, defined via a new "contextual keyword" union (a short sketch of such a union appears at the end of this article). Triplett understood the need for this proposal when he wanted to build a virtual machine in Rust and the Linux kernel interface for that, /dev/kvm, required unions. "I worked with the Rust community and with the language team to get unions into Rust and because of that work I'm actually now part of the Rust language governance team helping to evaluate and guide other changes into the language," he adds. He talked about this RFC in much detail at the very first RustConf in 2016:

https://www.youtube.com/watch?v=U8Gl3RTXf88

Support for unnamed struct and union types

Another feature that Triplett worked on is support for unnamed struct and union types in Rust. This has been a widespread C compiler extension for decades and was also included in the C11 standard. It allows developers to group and lay out fields in arbitrary ways to match C data structures used in the Foreign Function Interface (FFI). With this proposal implemented, Rust will be able to represent such types using the same names as the structures, without interposing artificial field names that would confuse users of well-established interfaces on existing platforms.

A stabilized support for inline Assembly in Rust

Systems programming often involves low-level manipulations and requires low-level details of the processors, such as privileged instructions. For this, Rust supports inline Assembly via the asm! macro. However, it is only present in the nightly compiler and is not yet stabilized. Triplett, in collaboration with other Rust developers, is writing a proposal to introduce a more robust syntax for inline Assembly. To know more about support for inline Assembly, check out this pre-RFC.

BFLOAT16 support into Rust

Many Intel processors, including Xeon Scalable 'Cooper Lake-SP', now support BFLOAT16, a new floating-point format. This truncated 16-bit version of the 32-bit IEEE 754 single-precision floating-point format was mainly designed for deep learning. The format is also used in machine learning libraries like TensorFlow that work with huge datasets, and it makes interoperating with existing systems, functions, and storage much easier. This is why Triplett is working on adding support for BFLOAT16 in Rust, so that developers can use the full capabilities of their hardware.

FFI/C Parity Working Group

This was one of the important announcements that Triplett made. He is starting a working group that will focus on achieving full parity with C. Under this group, he aims to collaborate with both the Rust community and other Intel developers to develop the specifications for the remaining features that need to be implemented in Rust for systems programming.
This group will also focus on bringing support for systems programming using the stable releases of Rust, not just experimental nightly releases of the compiler. In last week's Reddit discussion, Triplett shared the current status of the working group: "To pre-answer one question: the FFI / C Parity working group is in the process of being launched, and hasn't quite kicked off yet. I'll be posting about it here and elsewhere when it is, along with the initial goals."

Watch Josh Triplett's full OSTS talk to know more about Intel's contribution to Rust:

https://www.youtube.com/watch?v=l9hM0h6IQDo

Update: We have made the following corrections based on feedback from Josh Triplett:

- This year OSTS was open to Intel's partners and press.
- Previously, the article read 'escape patch', but it is 'escape hatch.'
- RFC 1444 wasn't last year; it was started in 2015 and accepted in 2016.
- 'dev KVM' is now corrected to '/dev/kvm'.

AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google

Hot Chips 31: IBM Power10, AMD's AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more

Intel's 10th gen 10nm 'Ice Lake' processor offers AI apps, new graphics and best connectivity
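The C-compatible union sketch promised above; the type Value and the bit pattern are our own illustration of the feature that RFC 1444 introduced:

// A C-compatible union: both fields share the same storage.
#[repr(C)]
union Value {
    int: i32,
    float: f32,
}

fn main() {
    let v = Value { int: 0x3F80_0000 }; // the bit pattern of 1.0f32
    // Reading a union field is unsafe because the compiler cannot
    // know which field currently holds a valid value.
    let f = unsafe { v.float };
    println!("{}", f); // 1
}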