
Tech News - Languages


Why Perl 6 is considering a name change

Bhagyashree R
30 Aug 2019
4 min read
There have been several discussions about renaming Perl 6. Earlier this month, another such discussion started when Elizabeth Mattijsen, one of the Perl 6 core developers, submitted an issue titled "Perl" in the name "Perl 6" is confusing and irritating. She suggested changing the language's name to Camelia, which is also the name of Perl 6's butterfly mascot.

In 2000, the Perl team decided to break everything and came up with a whole new set of design principles. Their goal was to remove the "historical warts" from the language, including the confusion surrounding sigil usage for containers, the ambiguity between the select functions, and more. Based on these principles, Perl was redesigned into Perl 6. Larry Wall, Perl's creator, and his team envisioned making it both a better object-oriented and a better functional programming language.

There are many differences between Perl 5 and Perl 6. For instance, in Perl 5 you need to choose things like a concurrency system and processing utilities, whereas in Perl 6 these features are part of the language itself. In an interview with the I Programmer website, when asked how the two languages differ, Moritz Lenz, a Perl and Python developer, said, "They are distinct languages from the same family of languages. On the surface, they look quite similar and they are designed using the same principles."

Why developers want to rename Perl 6

Because of these differences, many developers find the "Perl 6" name confusing. The name does not convey that it is a brand new language. Developers may instead think that it is the next version of Perl, or believe that it is faster, more stable, or otherwise better than the earlier Perl language. Many search engines will also show results for Perl 5 when users search for Perl 6.

"Having two programming languages that are sufficiently different to not be source compatible, but only differ in what many perceive to be a version number, is hurting the image of both Perl 5 and Perl 6 in the world. Since the word "Perl" is still perceived as "Perl 5" in the world, it only seems fair that "Perl 6" changes its name," Mattijsen wrote in the submitted issue.

To avoid this confusion, Mattijsen suggests an alternative name: Camelia. Many developers agreed with her suggestion. A developer commented on the issue, "The choice of Camelia is simple: search for camelia and language already takes us to Perl 6 pages. We can also keep the logo. And it's 7 characters long, 6-ish. So while ofun and all the others have their merits, I prefer Camelia."

In addition to Camelia, Raku, suggested by Wall himself, is also a strong contender for the new name. A developer supporting Raku said, "In particular, I think we need to discuss whether "Raku", the alternative name Larry proposed, is a viable possibility. It is substantially shorter than "Camelia" (and hits the 4-character sweet spot), it's slightly more searchable, has pleasant associations of "comfort" or "ease" in its original Japanese, in which language it even looks a little like our butterfly mascot."

Some developers were not convinced by the idea of renaming the language and think that it would only add to the confusion. A developer added, "I don't see how Perl 5 is going to benefit from this. We're freeing the name, yes. They're free to reuse the versions now in however way they like, yes. Are they going to name the successor to 5.30 "Perl 6"? Of course not – that would cause more confusion, make them look stupid and make whatever spiritual successor of Perl 6 we could think of look obsolete. Would they go up to Perl 7 with the next major change? Perhaps, but they can do that anyway: they're another grown-up language that can make its own decisions :) I'm not convinced it would do anything to improve Perl 6's image either. Being Perl 6 is "standing on the shoulders of giants". Perl is a strong brand. Many people have left it because of the version confusion, yes. But I don't imagine these people coming back to check out some new Camelia language that came out. They might, however, decide to give Perl 6 a shot if they start seeing some news about it – "oh, I was using Perl 15 years ago... is this still a thing? Is that new famous version finally being out and useful? I should check it out!""

You can read the submitted issue and the discussion on GitHub for more details.

What's new in programming this week
• Introducing Nushell: A Rust-based shell
• React.js: why you should learn the front end JavaScript library and how to get started
• Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

The Julia team shares its finalized release process with the community

Bhagyashree R
29 Aug 2019
4 min read
The discussions regarding the Julia release process started last year when Julia hit 1.0. Yesterday, Stefan Karpinski, one of Julia's core developers, shared the finalized release process, giving details on the kinds of releases, the stages of the release process, the phases of a release, and more. "This information is collected from a small set of posts on discourse and conversations on Slack, so the information exists "out there", but this blog post brings it all together in a single place. We may turn this post into an official document if it's well-received," Stefan wrote.

Types of Julia releases

As with most programming languages that follow Semantic Versioning (SemVer), Julia has three types of releases: patch, minor, and major.

A patch release is represented by the last digit of Julia's version number. It includes things like bug fixes, low-risk performance improvements, and documentation updates. The team plans to release a patch every month for the currently active release branches; however, this will depend on the number of bug fixes. The team also plans to run PackageEvaluator (PkgEval) on the backports five days prior to a patch release. PkgEval is used to run tests for every registered package, update the web pages of Julia packages, and create status badges.

A minor release is represented by the middle digit of Julia's version number. Along with some bug fixes and new features, it includes changes that are unlikely to break your code or the package ecosystem. Any significant refactoring of the internals is also included in a minor release. Since minor releases are branched every four months, developers can expect three minor releases every year.

A major release is represented by the first digit of Julia's version number. Typically, major releases consist of breaking changes, but the team intends to introduce them only when there is an absolute need, for instance to fix API design mistakes. A major release may also include low-level changes that can end up breaking some libraries but are essential for fundamental improvements to the language.
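As a quick illustration of how these three digits map onto an actual Julia build, the built-in VERSION constant exposes them as fields. This is a minimal REPL sketch; the version printed will of course depend on your installation:

```julia
julia> VERSION                     # a VersionNumber such as v"1.2.0"
v"1.2.0"

julia> Int.((VERSION.major, VERSION.minor, VERSION.patch))   # major / minor / patch digits
(1, 2, 0)
```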
Julia's release process

There are three phases in the Julia release process. The development phase takes 1-4 months, during which new features are introduced, bugs are fixed, and more. Before the feature freeze, alpha (early preview) and beta (later preview) versions are released for developers to test and give feedback on. After the feature freeze, a new unstable release branch is created. In the development phase, new features are merged onto the master branch, while bug fixes go onto the release branch.

The second phase, stabilization, also takes 1-4 months, during which all known release-blocking bugs are fixed and release candidates are built. Each candidate is then checked for further release-blocking bugs for one week, and if none are found, a final release is announced. After this, the maintenance phase starts, in which bug fixes are backported to the release branch. This continues until a particular release branch is declared unmaintained.

To ensure the quality of releases and maintain a predictable release rate, the Julia team overlaps the development and stabilization phases. "The development phase of each release is time-boxed at four months and the development phase of x.(y+1) starts as soon as the development phase for x.y is over. Come rain or shine we have a new feature freeze every four months: we pick a day and you've got to get your features merged by that day. If new features aren't merged, they're not going in the release. But that's ok, they'll go in the next one," explains Karpinski.

Talking about long-term support, Karpinski wrote that there will be four active branches. The master branch is where all new features, bug fixes, and breaking changes go. The unstable release branch includes all the active bug fixing and performance work that happens prior to the next minor release. The stable release branch is where the most recently released minor or major version lives. The fourth one is the long-term support (LTS) branch, which is currently Julia 1.0; it continues to get applicable bug fixes until it is announced to be unmaintained.

Karpinski also shared the different fault tolerance personas in Julia. Check out his post on the Julia blog to get a better understanding of the Julia release process.

Also read:
• Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
• Julia Angwin fired as Editor-in-Chief of The Markup prompting mass resignations in protest
• Creating a basic Julia project for loading and saving data [Tutorial]

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

Vincy Davis
29 Aug 2019
4 min read
Yesterday, the Program Manager at TypeScript, Daniel Rosenwasser, announced the release of TypeScript 3.6. This is a major release of TypeScript, as it contains many new language and compiler features such as stricter generators, more accurate array spread, improved UX around Promises, better Unicode support for identifiers, and more. TypeScript 3.6 also introduces a new TypeScript playground, new editor features, and a number of breaking changes. TypeScript 3.6 beta was released last month.

Language and Compiler improvements

Stricter checking for iterators and generators

Previously, generator users in TypeScript could not differentiate whether a value was yielded or returned from a generator. In TypeScript 3.6, thanks to changes in the Iterator and IteratorResult type declarations, a new Generator type has been introduced. It is an Iterator that always has both the return and throw methods present, which allows the stricter generator checker to easily tell apart the values coming from their iterators. TypeScript 3.6 also infers certain uses of yield within the body of a generator function, and the types of values that are returned, yielded, and passed back in through yield expressions can now be declared explicitly.
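A small sketch of what this stricter checking enables, assuming TypeScript 3.6's three-parameter Generator<Yield, Return, Next> type described in the announcement; the identifiers below are illustrative:

```typescript
function* counter(): Generator<number, string, boolean> {
  let i = 0;
  while (true) {
    // The type of a `yield` expression is the third parameter (boolean here),
    // i.e. the value passed back in via next().
    const reset = yield i++;      // yielded values must be numbers
    if (reset) {
      return "done";              // returned values must be strings
    }
  }
}

const it = counter();
it.next();      // { value: 0, done: false }       -- a yielded number
it.next(true);  // { value: "done", done: true }   -- the returned string
```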
More accurate array spread

In pre-ES2015 targets, TypeScript relies on the --downlevelIteration flag for iterative constructs over arrays. However, many users found it undesirable that the default emit had no defined property slots. To address this problem, TypeScript 3.6 introduces a new __spreadArrays helper. It will "accurately model what happens in ECMAScript 2015 in older targets outside of --downlevelIteration."

Improved UX around Promises

TypeScript 3.6 brings new improvements to working with the Promise API, one of the most common ways to work with asynchronous data. TypeScript's error messages will now tell the user when the contents of a Promise need to be awaited or unwrapped with then() before being passed to another function. The Promise API will also provide quick fixes in some cases.

Better Unicode support for identifiers

TypeScript 3.6 contains better support for Unicode characters in identifiers when emitting to ES2015 and later targets.

import.meta support in SystemJS: The new version supports transforming import.meta to context.meta when the module target is set to system.

get and set accessors are allowed in ambient contexts: Previous versions of TypeScript did not allow the use of get and set accessors in ambient contexts. This has changed in TypeScript 3.6, since the ECMAScript class fields proposal has differing behavior from existing versions of TypeScript. The official post also adds, "In TypeScript 3.7, the compiler itself will take advantage of this feature so that generated .d.ts files will also emit get/set accessors."

Read Also: Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript

New functions in the TypeScript playground

The TypeScript playground allows users to compile TypeScript and check the JavaScript output. It has more compiler options than typescriptlang, and all the strict options are turned on by default in the playground. The following new options have been added to the TypeScript playground:
• The target option, which allows users to switch from es5 to es3, es2015, esnext, etc.
• All the strictness flags
• Support for plain JavaScript files

The post also states that future versions can be expected to bring more playground features, like JSX support and polished automatic type acquisition.

Breaking changes
• Class members named "constructor" are now simply constructor functions.
• DOM updates: the global window will no longer be defined as type Window. Instead, it is defined as type Window & typeof globalThis.
• In JavaScript files, TypeScript will only consult immediately preceding JSDoc comments to figure out declared types.
• TypeScript 3.6 no longer allows some previously accepted escape sequences.

Developers have liked the new features in TypeScript 3.6.
https://twitter.com/zachcodes/status/1166840093849473024
https://twitter.com/joshghent/status/1167005999204638722
https://twitter.com/FlorianRappl/status/1166842492718899200

Interested users can check out TypeScript's 6-month roadmap. Visit the Microsoft blog for full details on TypeScript 3.6.

Also read:
• Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more
• Babel 7.5.0 releases with F# pipeline operator, experimental TypeScript namespaces support, and more
• TypeScript 3.5 releases with 'omit' helper, improved speed, excess property checks and more

'Npm install funding', an experiment to sustain open-source projects with ads on the CLI terminal, faces community backlash

Fatema Patrawala
28 Aug 2019
5 min read
Last week, software developer Feross, one of the npm open source authors and maintainers, announced an "npm install funding" experiment. Essentially, this enabled sponsors to advertise on npm package install terminals. In turn, the money raised from these ads would ensure npm maintainers are paid for their important contributions, keeping packages up to date, reliable, and secure.

Feross wrote on the GitHub page, "I think that the current model of sustaining open source is not working and we need more experimentation. This is one such experiment." He further wrote that if this experiment works, it could help make all of open source healthier, too. For complex reasons, companies are generally hesitant or unwilling to fund OSS directly. When it does happen, it's never enough and it never reaches packages that are only transitive dependencies (i.e. packages that no one installs explicitly and therefore no one knows exist). Feross believes that npm is essentially a public good: it is consumed by huge numbers of users, but no one pays for it. In his view, a funding model that usually works for public goods like this is advertising. But how does it work?

Read Also: Surprise NPM layoffs raise questions about the company culture

How was 'npm install funding' planned to work?

Feross's idea was that when developers install a participating library via the npm JavaScript package manager, they get a banner advertisement in their terminal (a screenshot was shared in the GitHub thread). Feross asked companies to promote ads on the installation terminals of JavaScript packages that had expressed interest in participating in the funding experiment. The idea is that companies buy ad space in people's terminals, and the funding project then shares its profits with the open-source projects that signed up to show the ads, as per ZDNet. Linode and LogRocket agreed to participate in this funding experiment. The experiment ran on a few of the open source projects that Feross maintains; one of them was StandardJS 14.

Feross raised $2,000 in npm install funds

Feross had so far earned $2,000 for his time spent releasing Standard 14, which took him five days. If he was able to raise additional funds, his next focus was TypeScript support in StandardJS (one of the most common feature requests) and modernizing the various text editor plugins (many of which are currently unmaintained).

The community did not support ads on the CLI, and the experiment came to a halt

As per ZDNet reports, the developer community has been debating the idea. There are arguments from both sides: some see it as a good way to finance their projects, while others are completely against seeing ads in their terminals. Most of the negative comments about this new funding scheme came from developers who are dissatisfied that these post-install ad banners will now make their way into logs, making app debugging unnecessarily complicated.

Robert Hafner, a developer from California, commented on a GitHub thread, "I don't want to have to view advertisements in my CI logs, and I hate what this would mean if other packages started doing this. Some JS packages have dozens, hundreds, or even more dependencies- can you imagine what it would look like if every package did this?"

Some developers took it a step further and created the world's first ad blocker for a command line interface.
https://twitter.com/dawnerd/status/1165330723923849216

They also put pressure on Linode and LogRocket to stop showing the ads, and Linode eventually decided to drop out.
https://twitter.com/linode/status/1165421512633016322

Additionally, on Hacker News, users were confused about the initiative and curious to know how it would actually work. One of them commented, "The sponsorship pays directly for maintainer time. That is, writing new features, fixing bugs, answering user questions, and improving documentation. As far as I can tell, this project is literally just a 200 line configuration file for a linter. Not even editor integrations for the linter, just a configuration file for it. Is it truly something that requires funding to 'add new features'? How much time does it take out of your day to add a new line of JSON to a configuration file, or is the sponsorship there to pay for all the bikeshedding that's probably happening in the issues and comments on the project? What sort of bugs are there in a linter configuration file? I'm really confused by all of this. > The funds raised so far ($2,000) have paid for Feross's time to release Standard 14 which has taken around five days. Five days to do what? Five full 8 hour days? Does it take 5 days to cut a GitHub release and push it to NPM? What about the other contributors that give up their time for free, are their contributions worthless? Rather than feeling like a way to support FOSS developers or FOSS projects, it feels like a rather backhanded attempt at monetization by the maintainer where Standard was picked out because it was his most popular project, and therefore would return the greatest advertising revenue. Do JavaScript developers, or people that use this project, have a more nuanced opinion than me? I do zero web development, is this type of stuff normal?"

After continuous backlash from the developer community, the project has come to a halt and no messages are promoted on the CLI anymore. It is clear that while open-source funding remains a major pain point for developers and maintainers, people don't really like ads in their CLI terminals.

What's new in tech this week!
• "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
• Kotlin 1.3.50 released with 'duration and time Measurement' API preview, Dukat for npm dependencies, and much more!
• React.js: why you should learn the front end JavaScript library and how to get started

Kotlin 1.3.50 released with ‘duration and time Measurement’ API preview, Dukat for npm dependencies, and much more!

Savia Lobo
27 Aug 2019
6 min read
On August 22, the JetBrains team announced the release of Kotlin 1.3.50. Some of the major improvements in this version include a preview of the new duration and time measurement API in the standard library, experimental generation of external declarations for npm dependencies in Gradle Kotlin/JS projects using Dukat, a separate plugin for debugging Kotlin/Native code in IntelliJ IDEA Ultimate, and much more. The team has also worked on improving the Java-to-Kotlin converter and on Java compilation support in multiplatform projects. Let us have a look at these improvements in brief.

Major improvements in Kotlin 1.3.50

Changes in the standard library

Experimental preview of the duration and time measurement API

A new duration and time measurement API is available for preview. The team notes that if an API expects a duration stored as a primitive value like Long, one can erroneously pass the value in the wrong unit, and the type system doesn't help prevent that. Creating a regular class to store durations solves this problem, but brings another one: additional allocations. Now the API can use the Duration type, and all clients will need to specify the time in the desired units explicitly.

This release also brings support for MonoClock, which represents a monotonic clock that doesn't depend on the system time. A monotonic clock can only measure the time difference between given time points; it doesn't know the "current time." The Clock interface provides a general API for measuring time intervals, and MonoClock is an object implementing Clock that provides the default source of monotonic time on different platforms. When using the Clock interface, the user explicitly marks the time when an action starts and later reads the time elapsed from that start point. This is especially convenient if one wants to start and finish measuring time from different functions. To know more about this feature in detail, read the Kotlin/KEEP on GitHub.
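A minimal sketch of the marking-and-elapsed pattern described above, assuming the experimental kotlin.time names shipped with 1.3.50 (MonoClock, markNow(), elapsedNow()); since the API is experimental, the opt-in requirements and exact names may differ in later versions:

```kotlin
import kotlin.time.ExperimentalTime
import kotlin.time.MonoClock

@ExperimentalTime
fun main() {
    // Mark the start of the measurement on the monotonic clock.
    val mark = MonoClock.markNow()

    // ... the work being measured ...
    Thread.sleep(100)

    // elapsedNow() returns a kotlin.time.Duration measured from the mark.
    println("Took ${mark.elapsedNow()}")
}
```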
Experimental API for bit manipulation

The standard library now contains an experimental API for bit manipulation. Similar extension functions for Int, Long, Short, Byte, and their unsigned counterparts have also been added.

IntelliJ IDEA support in Kotlin 1.3.50

Improvements in the Java-to-Kotlin converter

This release includes a preview of a new Java-to-Kotlin converter that minimizes the amount of "red code" one has to fix manually after the conversion. The improved converter tries to infer nullability more correctly based on the Java type usages in the code. The goal is to decrease the number of compilation errors and to make the produced Kotlin code more convenient to use. The new converter also fixes many other known bugs; for instance, it now correctly handles implicit Java type casts. It may become the default converter in the future. To turn it on, specify the Use New J2K (experimental) flag in the settings.

Debugging improvements

In Kotlin 1.3.50, the team has improved how the Kotlin "Variables" view chooses variables to display. As there is a lot of additional technical information in the bytecode, the Kotlin "Variables" view highlights only the relevant variables. Local variables inside a lambda, as well as captured variables from the outer context and parameters of the outer function, are correctly displayed. Kotlin 1.3.50 also adds improved support for the "Evaluate expression" functionality in the debugger for many non-trivial language features, such as local extension functions or accessors of member extension properties, and users can now modify variables via "Evaluate expression".

New intentions and inspections

This release adds new intentions and inspections. One of the goals of intentions is to help users learn how to write idiomatic Kotlin code. One such intention, for instance, suggests using the indices property rather than building a range of indices manually (see the examples on jetbrains.com).

Updates to Kotlin/JS

Kotlin 1.3.50 adds support for building and running Kotlin/JS Gradle projects using the org.jetbrains.kotlin.js plugin on Windows. Users can now build and run projects using Gradle tasks; dependencies from npm required in the Gradle configuration are resolved and included. Users can also try out their applications using webpack-dev-server, and much more. The team has also improved Kotlin/JS performance by reducing incremental compilation time, with expected speedups of up to 30% compared to 1.3.41. This version also has improved integration with npm, which means that projects are now resolved lazily and in parallel, and support has been added for projects with transitive dependencies between compilations in the same project.

Kotlin 1.3.50 also changes the structure and naming of generated artifacts. Generated artifacts are now bundled in the distributions folder, and they include the version number of the project and the archiveBaseName (which defaults to the project name), e.g. projectName-1.0-SNAPSHOT.js.

Using Dukat for automatic conversion of TypeScript declaration files

Dukat allows the automatic conversion of TypeScript declaration files (.d.ts) into Kotlin external declarations. This makes it more comfortable to use libraries from the JavaScript ecosystem in a type-safe manner in Kotlin, reducing the need to manually write wrappers for JS libraries. Kotlin/JS now ships with experimental support for Dukat integration in Gradle projects. With this integration, running the build task in Gradle automatically generates typesafe wrappers for npm dependencies, which can then be used from Kotlin. As Dukat is still at a very early stage, its integration is disabled by default. The team has prepared an example project that demonstrates the use of Dukat in Kotlin/JS projects.

Updates to Kotlin/Native

Previously, the version of Kotlin/Native differed from the version of Kotlin. In this release, the versioning schemes for Kotlin and Kotlin/Native are aligned: version 1.3.50 is used for both Kotlin and Kotlin/Native binaries, reducing complexity. This release brings more pre-imported Apple frameworks for all platforms, including macOS and iOS. The Kotlin/Native compiler now includes actual bitcode in produced frameworks. Several performance improvements have also been made in the interop tool.

The team also announced that null-check optimizations are planned for Kotlin 1.4. Starting from Kotlin 1.4, "all runtime null checks will throw a java.lang.NullPointerException instead of a KotlinNullPointerException, IllegalStateException, IllegalArgumentException, and TypeCastException. This applies to: the !! operator, parameter null checks in the method preamble, platform-typed expression null checks, and the as operator with a non-null type. This doesn't apply to lateinit null checks and explicit library function calls like checkNotNull or requireNotNull."

Apart from the changes mentioned, Java compilation can now be included in Kotlin/JVM targets of a multiplatform project by calling the newly added withJava() function of the DSL. This release also adds multiple features and improvements in scripting and REPL support. To know more about these and other changes in detail, read the official Kotlin 1.3.50 blog post.

Also read:
• Introducing Coil, an open-source Android image loading library backed by Kotlin Coroutines
• Introducing Kweb: A Kotlin library for building rich web applications
• How to avoid NullPointerExceptions in Kotlin [Video]

Introducing Nushell: A Rust-based shell

Savia Lobo
26 Aug 2019
3 min read
On August 23, Jonathan Turner, an Azure SDK developer, introduced a new shell written in Rust, called Nushell or 'Nu'. This Rust-based shell is inspired by the "classic Unix philosophy of pipelines, the structured data approach of PowerShell, functional programming, systems programming, and more," Turner writes in his official blog.

The idea of Nushell struck when Turner's friend Yehuda Katz demonstrated the workings of PowerShell. Katz asked Turner to join him on the project: what if "we could take the ideas of a structured shell and make it more functional (as opposed to object-oriented)? What if, like PowerShell, it worked on Windows, Linux, and macOS? What if it had great error messages?"

Turner highlights the fact that "everything in Nu is data." This means that as users try other commands, they realize they are using the same commands to filter, to sort, and so on. Rather than having to remember all the parameters to all the commands, they can just use the same verbs to act over their data, regardless of where the data came from (see the short example session at the end of this article). Nu also understands structured text files like JSON, TOML, and YAML, and allows users to manipulate their data, and much more. "You get used to using the verbs, and then you can use them on anything. When you're ready, you can write it back to disk," Turner writes.

Nu also supports opening and inspecting text and binary data. On opening a source file, users can scroll around in a syntax-highlighted view. On opening an XML file, they can look at its data, and they can even open a binary file and look at what's inside.

Turner mentions that there is a lot one might want to explore with Nushell. Hence, the team has released Nu with the ability to extend it with plugins: Nu will look for these plugins in your path and load them on startup. The Rust language is the major backbone of the project; Nushell would not have been possible without Rust, Turner exclaims. Nu internally uses async/await and async streams, and employs liberal use of serde to manage serializing and deserializing into the common data format and to communicate with plugins.

The Nushell GitHub page reads, "This project has reached a minimum-viable product level of quality. While contributors dogfood it as their daily driver, it may be unstable for some commands. Future releases will work to fill out missing features and improve stability. Its design is also subject to change as it matures." The team will further work towards stability, the ability to use Nu as a main shell, the ability to write functions and scripts in Nu, and much more. Users can also read the book on Nu, available in both English and Spanish. To know more, head over to Jonathan Turner's official blog post or visit Nushell's GitHub page.

Also read:
• Announcing 'async-std' beta release, an async port of Rust's standard library
• Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more
• Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]
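To make the "same verbs over any data" idea more concrete, here is a hypothetical Nu session. The command names (ls, sort-by, open, get) follow the early Nushell releases, but the exact flags and output shapes may differ between versions:

```
# List files as a table and sort the rows by the structured 'size' column
> ls | sort-by size

# Parse a TOML file into structured data and pull out one section
> open Cargo.toml | get package
```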

Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

Vincy Davis
21 Aug 2019
3 min read
Yesterday, the team behind Julia announced the release of Julia v1.2. It is the second minor release in the 1.x series and brings new features such as argument splatting, support for Unicode 12, and a new ⋆ (star) unary operator. Julia v1.2 also has many performance improvements with marginal and undisruptive changes. The post states that Julia v1.2 will not be a long-term support release: "As of this release, 1.1 has been effectively superseded by 1.2, which means there will not likely be any further 1.1.x releases. Our good friend 1.0 is still currently the only long-term support version."

What's new in Julia v1.2
• Argument splatting (x...) can now be used in calls to the new pseudo-function in constructors.
• Support for Unicode 12 has been added.
• A new unary operator ⋆ (star) has been added.

New library functions
• New partially-applied comparison functions !=(x), >(x), >=(x), <(x), and <=(x) have been added, returning the corresponding one-argument predicates (see the short REPL example at the end of this article).
• A new getipaddrs() function has been added to return all the IP addresses of the local machine, including the IPv4 addresses.
• New library functions Base.hasproperty and Base.hasfield have been added.

Other improvements in Julia v1.2

Multi-threading changes
• It is now possible to schedule and switch tasks during @threads loops, and to perform limited I/O.
• A new thread-safe replacement for the Condition type has been added; it can be accessed as Threads.Condition.

Standard library changes
• The extrema function now accepts a function argument, in the same way as minimum and maximum.
• The hasmethod method can now check for matching keyword argument names.
• The mapreduce function now accepts multiple iterators.
• Functions that invoke commands, like run(::Cmd), now raise a ProcessFailedException rather than an ErrorException.
• A new no-argument constructor for Ptr{T} has been added to construct a null pointer.

Jeff Bezanson, Julia co-creator, says, "If you maintain any packages, this is a good time to add CI for 1.2, check compatibility, and tag new versions as needed."

Users are happy with the Julia v1.2 release and are full of praise for the language. A user on Hacker News comments, "Julia has very well thought syntax and runtime I hope to see it succeed in the server-side web development area." Another user says, "I've recently switched to Julia for all my side projects and I'm loving it so far! For me the killer feature is the seamless GPUs integration." For more information on Julia v1.2, head over to its release notes.

Also read:
• Julia co-creator, Jeff Bezanson, on what's wrong with Julialang and how to tackle issues like modularity and extension
• Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
• Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
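Here is a short REPL sketch of two of the additions listed above, the partially-applied comparison functions and the function-accepting extrema; the output is shown roughly as Julia 1.2 would print it:

```julia
julia> filter(>(2), [1, 2, 3, 4])   # >(2) is "is it greater than 2?" as a function
2-element Array{Int64,1}:
 3
 4

julia> extrema(abs, [-5, 1, 3])     # extrema can now apply a function before comparing
(1, 5)
```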

Announcing ‘async-std’ beta release, an async port of Rust's standard library

Bhagyashree R
20 Aug 2019
3 min read
Last week, Stjepan Glavina, a Rust programmer at Ferrous Systems, announced that the 'async-std' library has reached its beta phase. The library has the look and feel of Rust's standard library, but replaces its components with async counterparts.

The current state of asynchronous programming in Rust

Asynchronous code facilitates the execution of multiple tasks concurrently on the same OS thread. It can potentially make your application much faster while using fewer resources compared to a corresponding threaded implementation. Rust's asynchronous ecosystem is still in its early days: the standard library's Future trait was only recently stabilized, and the async/await feature will land in an upcoming version.

Why async-std was introduced

Rust's Future trait is often considered difficult to understand, not because it is complex but because it is something people are not used to. Explaining what makes futures confusing, the book accompanying the 'async-std' library states, "Futures have three concepts at their base that seem to be a constant source of confusion: deferred computation, asynchronicity and independence of execution strategy."

The 'async-std' library, together with its supporting libraries, aims to make asynchronous programming easier in Rust. It is based on Future and supports a set of traits from the futures library. It is also designed to support the new async programming model that is slated to be stabilized in Rust 1.39. The async-std library serves as an interface to all the important primitives, including filesystem operations, network operations, and concurrency basics like timers. In addition to the async variations of the I/O primitives found in std, it comes with async versions of concurrency primitives like Mutex and RwLock. It also ships with a task module that performs a single allocation per spawned task and lets you await the result of a task without needing an extra channel.

Speaking about the learning curve of async-std, Glavina said, "By mimicking standard library's well-understood APIs as closely as possible, we hope users will have an easy time learning how to use async-std and switching from thread-based blocking APIs to asynchronous ones. If you're familiar with Rust's standard library, very little should come as a surprise."

The library received a mixed reaction from the community. One user said, "In fact, Rust does have a great solution for non-blocking code: just use threads! Threads work great, they are very fast on Linux, and solutions such as goroutines are just implementations of threads in userland anyway... People tell me that Rust services scale up to thousands of requests per second on Linux by just using 1:1 threads." A Rust developer on Reddit commented, "Looks good. I'm hoping we can soon see this project, the futures crate, async WGs crates and Tokio converge to build unified async foundations, reduce duplicated efforts (and avoid seeing dependencies explode when using several crates using async together). It's unclear to me why apparently similar crates are popping up, but I hope this is just temporary explorations of async that will merge together."

Check out the official announcement to know more about the async-std library. Also, check out its book: Async programming in Rust with async-std.
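A minimal sketch of the "std-like, but async" idea described above, assuming async-std's fs and task modules and a toolchain with async/await support (which was still being stabilized for Rust 1.39 at the time):

```rust
use async_std::{fs, task};

async fn show_len(path: &str) -> std::io::Result<()> {
    // Mirrors std::fs::read_to_string, but yields to other tasks instead of blocking the thread.
    let contents = fs::read_to_string(path).await?;
    println!("read {} bytes", contents.len());
    Ok(())
}

fn main() -> std::io::Result<()> {
    // block_on drives the future to completion on the current thread.
    task::block_on(show_len("Cargo.toml"))
}
```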
Also read:
• Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more
• Introducing Abscissa, a security-oriented Rust application framework by iqlusion
• Introducing Ballista, a distributed compute platform based on Kubernetes and Rust

Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more

Bhagyashree R
16 Aug 2019
4 min read
After releasing version 1.36.0 last month, the team behind Rust announced the release of Rust 1.37.0 yesterday. Among the highlights of this version are support for referring to enum variants via type aliases, built-in cargo vendor, unnamed const items, profile-guided optimization, and more.

Key updates in Rust 1.37.0

Referring to enum variants through type aliases

Starting with this release, you can refer to enum variants through type aliases in expression and pattern contexts. Since Self behaves like a type alias in implementations, you can also refer to enum variants with Self::Variant.

Built-in Cargo support for vendored dependencies

Until now, the cargo vendor command was available to developers only as a separate crate. Starting with Rust 1.37.0, it is integrated directly into Cargo, the Rust package manager and crate host. This Cargo subcommand fetches all the crates.io and git dependencies for a project into the vendor/ directory. It also shows the configuration necessary to use the vendored code during builds.

Using unnamed const items for macros

Rust 1.37.0 allows you to create unnamed const items: instead of giving an explicit name to a constant, you can name it '_'. This makes it easier to create ergonomic and reusable declarative and procedural macros for static analysis purposes.

Support for profile-guided optimization

Rust's compiler, rustc, now supports profile-guided optimization (PGO) through the -C profile-generate and -C profile-use flags. PGO allows the compiler to optimize your code based on feedback from real workloads. It optimizes a program in two steps. First, the program is built with instrumentation inserted by the compiler, by passing the -C profile-generate flag to rustc; the instrumented program is then run on sample data, and the profiling data is written to a file. Second, the program is built again, this time feeding the collected profiling data back into rustc with the -C profile-use flag. This build uses the collected data to let the compiler make better decisions about code placement, inlining, and other optimizations.

Choosing a default binary in Cargo projects

The cargo run command lets you run a binary or example of the local package, enabling you to quickly test CLI applications. It often happens that multiple binaries are present in the same package; in such cases, developers need to explicitly mention the name of the binary they want to run with the --bin flag. This makes cargo run less ergonomic, especially when one binary is called more often than the others. To solve this issue, Rust 1.37.0 introduces a new key in Cargo.toml called default-run. Declaring this key in the [package] section makes cargo run default to the chosen binary when the --bin flag is not passed.
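A small sketch of two of the language-level additions above, enum variants reached through a type alias and an unnamed const item; the names are illustrative:

```rust
enum Direction {
    Up,
    Down,
}

// New in 1.37.0: variants can be referred to through a type alias.
type Dir = Direction;

// Also new: an unnamed const item, useful for macro-generated compile-time checks.
const _: usize = std::mem::size_of::<Direction>();

fn main() {
    let d = Dir::Up;
    match d {
        Dir::Up => println!("up"),
        Dir::Down => println!("down"),
    }
}
```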
Developers have already started testing the new release. A developer who used profile-guided optimization shared their experience on Hacker News: "The effect is very dependent on program structure and actual code running, but for a suitable application it's reasonable to expect anything from 5-15%, and sometimes much more (see e.g. Firefox reporting 18% here)." Others noted that async/await now looks set to arrive in Rust 1.39: "Seems like async/await is going to slip into Rust 1.39 instead." Another user said, "Congrats! Like many I was looking forward to async/await in this release but I'm happy they've taken some extra time to work through any existing issues before releasing it."

Check out the official announcement by the Rust team to know more in detail.

Also read:
• Rust 1.36.0 releases with a stabilized 'Future' trait, NLL for Rust 2015, and more
• Introducing Vector, a high-performance data router, written in Rust
• "Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices

Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency

Vincy Davis
14 Aug 2019
4 min read
Last week, Poetry, a dependency management and packaging tool for Python, released version 1 beta 1. Before going into the details of this release, let's have a brief look at Python, its issues with dependency management, and the Pipenv and Poetry tools in general.

There's no doubt that Python is loved by many developers. It is considered one of the top-rated programming languages, with benefits like an extensive support library, less complex syntax, high productivity, excellent integration features, and many more. Though it has been rated one of the fastest growing programming languages in 2019, there are some problems with Python which, if rectified, could make it even more powerful and accessible to users. Python's poor dependency management is one such issue.

Dependency management helps in managing all the libraries required to make an application work. It becomes extremely necessary when working on a complex project or across multiple environments. An ideal dependency management tool assists in tracking and updating libraries more easily and quickly, as well as in solving package dependency issues. Python's default workflow requires users to create a virtual environment to keep dependencies separate and to manually add version numbers in every file, offers no way to parallelize dependency installation, and more. To combat these issues, Python now has two maturing dependency management tools, Pipenv and Poetry. Each of these tools simplifies the process of creating a virtual environment and sorting out dependencies.

The PyPA-endorsed Pipenv automatically creates and manages a virtualenv for user projects. It also adds and removes packages from the Pipfile as a user installs or uninstalls them. Its main features include automatically generating a Pipfile and a Pipfile.lock if one doesn't exist, creating a virtualenv, and adding packages to a Pipfile when installed, to name a few. The Poetry dependency management tool, on the other hand, uses a single pyproject.toml file to manage all the dependencies. Poetry lets users declare the libraries their project depends on and automatically installs or updates them. It allows projects to be published directly to PyPI, easy tracking of the state of dependencies, and more.

New features in Poetry v1 beta 1

The major highlight in Poetry v1 beta 1 is the newly added support for URL dependencies. This is a significant feature for Python users, as a URL dependency can be added to a project via the add command or by modifying the pyproject.toml file directly.

Other features in Poetry v1 beta 1:
• Support for publishing to PyPI using API tokens
• Licenses can be identified by their full name
• Settings can be specified with environment variables
• Settings no longer need to be prefixed by settings. when using the config command
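A hedged sketch of what declaring a URL dependency looks like; the package name and URL below are placeholders, and the key names follow Poetry's documented pyproject.toml format:

```toml
# pyproject.toml
[tool.poetry.dependencies]
python = "^3.7"
# A dependency fetched directly from a URL instead of PyPI:
my-package = { url = "https://example.com/my-package-0.1.0.tar.gz" }
```

The equivalent from the command line would be something like poetry add https://example.com/my-package-0.1.0.tar.gz, which writes the same entry into pyproject.toml for you.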
Users are generally quite happy with the Poetry dependency management tool for Python, as can be seen in the reactions on Hacker News. One comment reads, "I like how transparent poetry is about what's happening when you run it, and how well presented that information is. I've come to loathe pipenv's progress bar. Running it in verbose mode isn't much better. I can't be too mad at pipenv, but all in all poetry is a better experience." Another user says, "Poetry is very good. I think projects should use it. I hope the rest of the ecosystem can catch up quickly. Tox and pip and pex need full support for PEP 517/518." Another user comments, "When you run poetry, it activates the virtualenv before it runs whatever you wanted. So `poetry add` (it's version of pip install) doesn't require you to have the virtualenv active. It will activate it, run the install, and update your dependency specifications in pyproject.toml. You can also do `poetry run` and it will activate the virtualenv before it runs whatever shell command comes after. Or you can do `poetry shell` to run a shell inside the virtualenv. I like the seamless integration, personally."

Also read:
• Łukasz Langa at PyLondinium19: "If Python stays synonymous with CPython for too long, we'll be in big trouble"
• PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more
• NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption

PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

Fatema Patrawala
14 Aug 2019
4 min read
The switch from Python 2 to Python 3 has been rocky, and all signs point to Python 3 pulling firmly into the lead. Python 3 is broadly compatible with the major libraries, and there is an encouraging rate of adoption by cloud providers for application support as Python 2 approaches its end of life (EOL) in 2020. But there are still plenty of efforts to keep Python 2 alive in one form or another. The default implementation of Python is open source, so it can easily be forked and maintained separately. Currently, all major open source Python packages support both Python 3.x and Python 2.7.

Last year, the Python team reminded users that Python 2.7 maintenance will stop in 2020. Originally there was no official date, but in March 2018 the team announced it would be January 1, 2020.
https://twitter.com/ThePSF/status/1160839590967685121

This means that the maintainers of Python 2 will stop supporting it, even with security patches. Many institutions and codebases have not yet ported their code from Python 2 to Python 3. Python volunteers have created resources to help publicize and educate, but there is still more work to be done, which is why the Python Software Foundation has contracted Changeset Consulting to help communicate about the sunsetting of Python 2. The high-level goal of Changeset's involvement is to help users through the end of the transition, to help with communication so volunteers are not overwhelmed, and to help update public-facing assets so core developers are not overwhelmed. This will also require all the major Python projects to migrate to Python 3 and above.

However, PyPy confirmed last week that it does not plan to deprecate Python 2.7 support as long as PyPy exists, according to an official Twitter statement.
https://twitter.com/pypyproject/status/1160209907079176192

The PyPy runtime is popular among developers due to its built-in JIT, which provides major speed boosts to Python code, and PyPy has long favored Python 2 over Python 3. This favoritism isn't solely because the first versions of PyPy were Python 2 implementations and Python 3 only recently entered the picture. It's also due to a key part of PyPy's ecosystem, RPython, a dynamic language implementation framework that has its foundation in Python 2. This is not likely to change, according to PyPy's official FAQ, which states that "the Python 2 version of PyPy will be around 'forever', i.e. as long as PyPy itself is around." According to PyPy's official announcement, it will support Python 3 while continuing to support Python 2.7.

Last year, when the Python team announced that Python 2 will officially end in 2020, users on Hacker News discussed the fact that the most popular packages are compatible with Python 3 while millions of people in the industry still work with Python 2.7. One of the comments reads, "most popular packages are now compatible with Python 3 I often see this but I think it's a perception from the Internet/web world. I work for CGI, all (I'm not kidding) our software (we have many) are 2.7. You will never see them used "on the web/Internet/forum/network" place but the day-to-day job of millions of people in the industry is 2.7. And we are a tiny focused industry. So I'm sure there are many other industries like us which are 2.7 that you never heard of. That's why "most popular" mean nothing once you take how Python is used as a whole. We don't use any of this web/Internet/network "popular" packages. I'm not saying Python shouldn't move on. I'm just trying to argue against this "most popular packages" while millions of us, even if you don't know it, use none of those."

Also read:
• GNU Radio 3.8.0.0 releases with new dependencies, Python 2 and 3 compatibility, and much more!
• NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
• Python 3.8 new features: the walrus operator, positional-only parameters, and much more

Matthew Flatt's proposal to change Racket's s-expression-based syntax to an infix representation creates a stir in the community

Bhagyashree R
09 Aug 2019
4 min read
RacketCon 2019 happened last month, from July 13 to 14, bringing the Racket community together to discuss ideas and future plans for the Racket programming language. Matthew Flatt, one of the core developers, took the stage to give his talk, State of Racket, in which he spoke about the growing community, performance improvements, and much more. He also touched upon his recommendation to change the surface syntax of Racket2, which has sparked a lot of discussion in the Racket community.
https://www.youtube.com/watch?v=dnz6y5U0tFs&t=390

Later in July, Greg Hendershott, who has contributed Racket projects like Rackjure and Travis-Racket and has driven a lot of community participation, expressed his concern about this change in a blog post: "I'm concerned the change won't help grow the community; instead hurt it." He further shared that he will shift his focus towards working on other programming languages, which implies that he is stepping down as a Racket contributor.

Matthew Flatt recommends a surface syntax change to remove technical barriers to entry

There is no official proposal for this change yet, but Flatt has discussed it a couple of times. According to Flatt's recommendation, Racket2's 'lispy' s-expressions should be changed to something that is not a barrier to entry for new users. He suggests getting rid of, or reducing, the use of parentheses and bringing in infix operators, meaning the operator is written between the operands, for instance a + b. "More significantly, parentheses are certainly an obstacle for some potential users of Racket. Given the fact of that obstacle, it's my opinion that we should try to remove or reduce the obstacle," Flatt writes in a mailing list.

Racket is a general-purpose, multi-paradigm programming language based on the Scheme dialect of Lisp, and it is also an ecosystem for language-oriented programming. Flatt further explained that the current syntax is a hindrance not only to potential users of Racket as a programming language but also to those who want to use it as "a programming-language programming language". He adds, "The idea of language-oriented programming (LOP) doesn't apply only to languages with parentheses, and we need to demonstrate that." With this change, he hopes to make Racket2 more familiar and easier to accept for users outside the Racket community.
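For readers unfamiliar with the distinction being debated, here is a small illustration. The first form is ordinary Racket; the infix rendering is purely hypothetical, since no Racket2 surface syntax has actually been specified:

```racket
#lang racket

;; Today's prefix, s-expression syntax:
(define (area r)
  (* pi r r))

;; A purely illustrative infix-style rendering of the same definition
;; (not an actual Racket2 syntax; none has been specified):
;;   define area(r): pi * r * r
```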
Some Racket developers believe changing the s-expression-based syntax is not "desirable"

Many developers in the Racket community share a similar sentiment to Greg Hendershott. A user on Hacker News added, "Getting rid of s expressions without it being part of a more cohesive improvement (like better supporting a new type system or something) just for mainstream appeal seems like an odd choice to me."

Another user added, "A syntax without s-expressions is not an innovative feature. For me, it's not even desirable, not at all. When I'm using non-Lispy languages like Rust, Ada, Nim, and currently a lot of Go, that's despite their annoying syntactic idiosyncrasies. All of those quirky little curly braces and special symbols to save a few keystrokes. I'd much prefer if all of these languages used s-expressions. That syntax is so simple that it makes you focus on the semantics."

Others are more neutral about the suggested change. "To me, Flatt's proposal for Racket2 smells more like adding tools to better facilitate infix languages than deprecating S-expressions. Given Racket's pedagogical mission, it looks more like a move toward migrating the HtDP series of languages (Beginning Student, Intermediate Student, Intermediate Student with Lambda, and Advanced Student) to infix syntax than anything else. Not really the end of the world or a big change to the larger Racket community. Just another extension of an ecosystem that remains s-expression based despite Algol and Datalog shipping in the box," one user commented.

To know more about this change, check out the discussion on Racket's mailing list. You can also share your proposals through the Racket2 RFCs.

Racket 7.3 releases with improved Racket-on-Chez, refactored IO system, and more
Racket 7.2, a descendant of Scheme and Lisp, is now out!
Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others


Is Dark an AWS Lambda challenger?

Fatema Patrawala
01 Aug 2019
4 min read
On Monday, the CEO and co-founder of Dark, Ellen Chisa, announced in a Medium post that the project had raised $3.5 million in funding. Dark is a holistic project that includes a programming language (Darklang), an editor, and an infrastructure. The value of this, according to Chisa, is simple: "developers can code without thinking about infrastructure, and have near-instant deployment, which we're calling deployless."

Along with Chisa, Dark is led by CTO Paul Biggar, who is also the founder of CircleCI, the CI/CD pioneering company. The seed funding is led by Cervin Ventures, with participation from Boldstart, Data Collective, Harrison Metal, Xfactor, Backstage, Nextview, Promus, Correlation, 122 West and Yubari.

What are the key features of the Dark programming language?

One of the most interesting features of Dark is that deployments take a mere 50 milliseconds. Fast. Chisa says that currently the best teams manage deployments in around 5 to 10 minutes, but many take considerably longer, sometimes hours. Dark was designed to change this. It's purpose-built, Chisa seems to suggest, for continuous delivery. "In Dark, you're getting the benefit of your editor knowing how the language works. So you get really great autocomplete, and your infrastructure is set up for you as soon as you've written any code because we know exactly what is required."

She says there are three main benefits to Dark's approach:
- An automated infrastructure
- No need to worry about a deployment pipeline ("As soon as you write any piece of backend code in Dark, it is already hosted for you," she explains.)
- Tracing capabilities built into your code ("Because you're using our infrastructure, you have traces available in your editor as soon as you've written any code.")

There is undoubtedly a clear sense, whatever users think of the end result, that everything has been engineered with an incredibly clear vision.

Dark has already been used to ship a SaaS platform and project tracking tools

Chisa highlights that some customers have already shipped entire products on Dark. She cites Chase Olivieri, who built Altitude, a subscription SaaS providing personalized flight deals, on Dark: "as a bootstrapper, Dark has allowed me to move fast and build Altitude without having to worry about infrastructure, scaling, or server management."

The downside of Dark: programmers have to learn a new language

Speaking to TechCrunch, Chisa admitted there was a downside to Dark: you have to learn a new language. "I think the biggest downside of Dark is definitely that you're learning a new language, and using a different editor when you might be used to something else, but we think you get a lot more benefit out of having the three parts working together."

Chisa acknowledged that it will require evangelizing the methodology to programmers, who may be used to employing a particular set of tools to write their programs. But according to her, the biggest selling point is that it removes the complexity around deployment by bringing an integrated level of automation to the process.

Is Darklang basically like AWS Lambda?

The Hacker News community compares Dark with AWS Lambda, with many pessimistic about its prospects. In particular, they are skeptical about the efficiency gains Chisa describes. "It only sounds maybe 1 step removed from where aws [sic] lambda's are now," said one user. "You fiddle with the code in the lambda IDE, and submit for deployment. Is this really that much different?"
Dark's co-founder, Paul Biggar, responded in the thread: "Dark founder here. Yes, completely agree with this. To a certain extent, Dark is aimed at being what lambda/serverless should have been." He continues: "The thing that frustrates me about Lambda (and really all of AWS) is that we're just dealing with a bit of code and bit of data. Even in 1999 when I had just started coding I could write something that runs every 10 minutes. But now it's super challenging. Why is it so hard to take a request, munge it, send it somewhere, and then respond to it. That should be trivial! (and in Dark, it is)"

The team plans to roll out the product publicly in September. To find out more about Dark, read the team's blog posts, including What is Dark, How Dark is a functional language, and How Dark allows deploys in 50ms.

The V programming language is now open source – is it too good to be true?
"Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices
Rust's original creator, Graydon Hoare on the current state of system programming and safety

NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption

Vincy Davis
31 Jul 2019
5 min read
Last week, the NumPy team released NumPy version 1.17.0. This version brings many new features, improvements, and changes aimed at increasing NumPy's performance. The major highlights of this release include a new extensible numpy.random module, new radix sort and timsort sorting methods, and a pocketfft-based FFT implementation for more accurate transforms and better handling of datasets of prime length. Overriding of NumPy functions is now also enabled by default.

NumPy 1.17.0 supports Python versions 3.5 to 3.7. Python 3.8b2 will work with the new release's source packages, but may not be supported in future releases. The Python team had previously announced that Python 2.7 maintenance will stop on January 1, 2020. By officially dropping Python 2.7, NumPy 1.17.0 takes a step towards the adoption of Python 3. Developers who want to port their Python 2 code to Python 3 can check out the official porting guide released by Python.

Read More: NumPy drops Python 2 support. Now you need Python 3.5 or later.

What's new in NumPy 1.17.0?

New extensible numpy.random module with selectable random number generators
NumPy 1.17.0 has a new extensible numpy.random module. It includes four selectable random number generators and improved seeding designed for use in parallel processes. PCG64 is the new default bit generator, while MT19937 is retained for backwards compatibility.

Timsort and radix sort have replaced mergesort for stable sorting
Both radix sort and timsort have been implemented and can be used instead of mergesort. For backward compatibility, the sorting kind options 'stable' and 'mergesort' have been made aliases of each other, with the actual sort implementation chosen based on the array type. Radix sort is used for small integer types of 16 bits or less, and timsort for all remaining types.

empty_like and related functions now accept a shape argument
Functions like empty_like, full_like, ones_like and zeros_like now accept a shape keyword argument, which can be used to create a new array using an existing array as the prototype while overriding its shape. These functions become extremely useful when combined with the __array_function__ protocol, as they allow the creation of new arbitrary-shape arrays from NumPy-like libraries.

User-defined LAPACK detection order
numpy.distutils now reads a comma-separated, case-insensitive environment variable to determine the detection order for LAPACK libraries. This aims to help users with an MKL installation try different implementations.

.npy files support unicode field names
A new format version of .npy files has been introduced. It enables structured types with non-latin1 field names and is used automatically when needed.

New mode "empty" for pad
The new mode "empty" pads an array to a desired shape without initializing the new entries.

New deprecations in NumPy 1.17.0

numpy.polynomial functions warn when passed float in place of int
Previously, functions in the numpy.polynomial module accepted float values where integers were expected. With NumPy 1.17.0, passing float values is deprecated for consistency with the rest of NumPy; in future releases, it will cause a TypeError.

Deprecate numpy.distutils.exec_command and temp_file_name
The internal use of these functions has been refactored in favour of better alternatives: exec_command is replaced with subprocess.Popen, and temp_file_name with tempfile.mkstemp.
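As a rough illustration of the features described above, here is a minimal Python sketch. It assumes the Generator-based entry point (numpy.random.default_rng) that the new random module exposes, along with the shape keyword and the "stable" sort kind from the release notes; the seed and array values are arbitrary examples.

```python
import numpy as np
from numpy.random import default_rng

# New extensible random module: default_rng() returns a Generator backed by
# the PCG64 bit generator, the new default mentioned above.
rng = default_rng(12345)
normals = rng.standard_normal(5)        # samples from the standard normal
integers = rng.integers(0, 10, size=3)  # random ints in [0, 10)

# empty_like (and full_like, ones_like, zeros_like) now accept a shape keyword,
# reusing the prototype's dtype while overriding its shape.
proto = np.zeros(4, dtype=np.float32)
scratch = np.empty_like(proto, shape=(2, 3))

# "stable" and "mergesort" are now aliases; small integer types dispatch to
# radix sort, other types to timsort.
data = np.array([3, 1, 2], dtype=np.int16)
print(np.sort(data, kind="stable"))
```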
Writeable flag of C-API wrapped arrays
When an array is created from the C-API to wrap a pointer to data, the writeable flag set during creation indicates the read-write nature of the data. In future releases, it will no longer be possible to switch the writeable flag to True from Python, as this is considered dangerous.

Other improvements and changes

Replacement of the fftpack-based fft module by the pocketfft library
The pocketfft library contains additional modifications compared to fftpack that improve accuracy and performance. If an FFT length has large prime factors, pocketfft uses Bluestein's algorithm, which maintains O(N log N) runtime complexity instead of deteriorating towards O(N*N) for prime lengths.

Array comparison assertions include maximum differences
Error messages from array comparison tests such as testing.assert_allclose now include "max absolute difference" and "max relative difference" along with the previous "mismatch" percentage. This makes it easier to update absolute and relative error tolerances.

median and percentile family of functions no longer warn about nan
Functions like numpy.median, numpy.percentile, and numpy.quantile used to emit a RuntimeWarning when encountering a nan. Since these functions return the nan value, the warning is redundant and has been removed.

timedelta64 % 0 behavior adjusted to return NaT
The modulus operation with two np.timedelta64 operands now returns NaT in the case of division by zero, rather than returning zero.
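A short sketch of the behavioural changes just listed; the arrays, timestamps, and tolerances are arbitrary values chosen only to trigger each behaviour.

```python
import numpy as np
from numpy.testing import assert_allclose

# timedelta64 % 0 now yields NaT instead of a zero timedelta.
print(np.timedelta64(10, "s") % np.timedelta64(0, "s"))   # NaT

# median/percentile/quantile still propagate nan, but without the
# RuntimeWarning they previously emitted.
print(np.median(np.array([1.0, np.nan, 3.0])))            # nan

# assert_allclose failures now report max absolute/relative differences,
# which makes it easier to pick sensible tolerances.
try:
    assert_allclose([1.0, 2.0], [1.0, 2.1], rtol=1e-3)
except AssertionError as err:
    print(err)   # message includes "Max absolute difference"
```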
Though users are happy with the NumPy 1.17.0 features, some are upset that Python 2.7 has been officially dropped.

https://twitter.com/antocuni/status/1156236201625624576

For the complete list of updates, head over to the NumPy 1.17.0 release notes.

Plotly 4.0, popular python data visualization framework, releases with Offline Only, Express first, Displayable anywhere features
Python 3.8 new features: the walrus operator, positional-only parameters, and much more
Azure DevOps report: How a bug caused 'sqlite3 for Python' to go missing from Linux images


C++20 Committee Draft finalized with a new text formatting API, contracts unanimously deferred, and more

Bhagyashree R
30 Jul 2019
4 min read
The ISO C++ Committee met last week in Cologne, Germany, to complete and publish the Committee Draft (CD) of the next C++ standard, called C++20. This standard will bring some game-changing advancements to C++, including modules, concepts, coroutines, and ranges. Here are some of the changes made to the draft in this meeting:

Contracts moved out of C++20

A contract specifies a set of preconditions, postconditions, and assertions that a software component should adhere to. The committee unanimously decided to move contracts out of C++20 and defer them to a later standard, because the feature has recently gone through major design changes. The committee was unsure of the impact or implications of these changes, as it did not have much usage experience with contracts. "In short, contracts were just not ready. It's better for us to ship contracts in a form that better addresses the use cases of interest in a future standard instead of shipping something we are uncertain about in C++20. Notably, this decision was unanimous -- all of the contracts' co-authors agreed to this approach," wrote the committee.

To continue the work on contracts, a new study group named SG21 has been created. It will be chaired by John Spicer from Edison Design Group and includes the original authors as well as members interested in working on contracts.

std::format, a new text formatting API

One of the key advantages of the printf syntax is its familiarity among developers. However, it does suffer from a few drawbacks. The format specifiers it provides, like hh, h, l, and j, are redundant in type-safe formatting, and they can unnecessarily complicate specification and parsing. The printf syntax also does not provide a standard way to extend the syntax for user-defined types. C++20 will come with a new text formatting API called std::format that aims to offer a flexible, safe, and fast alternative to (s)printf and iostreams. Based on the syntax seen in Python, the .NET family of languages, and Rust, it uses '{' and '}' as replacement field delimiters instead of %.
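Since std::format borrows its replacement-field syntax from Python (among others), here is a small Python illustration of that style contrasted with printf-style % formatting. It is meant only to show the notation std::format adopts, not C++ code.

```python
# Python's brace-based replacement fields, the style std::format adopts,
# versus the printf-style "%" formatting it is meant to replace.
lang, year = "C++", 2020

print("{} {}: braces as replacement fields".format(lang, year))          # positional fields
print("{0} {1:>6}: index plus width/alignment spec".format(lang, year))  # format-spec mini-language
print("%s %d: printf-style equivalent" % (lang, year))                   # the older notation
```

In C++20, the analogous call is std::format, which uses the same '{}' fields and a similar format-spec mini-language.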
The C++20 synchronization library

The new standard will also bring improved synchronization and thread coordination facilities. It will support efficient atomic waiting, semaphores, latches, barriers, atomic_flag::test, lock-free integral types, and more.

The next step for the committee is to submit the draft to all the national standards bodies to gather their feedback. The committee plans to address that feedback in the next two meetings and then publish the C++20 standard at the February 2020 meeting in Prague.

Developers are excited about the new features coming in C++20. A Reddit user commented, "Wow, the C++ committee is really doing a great job. There are so many good features coming into the standard (std::format, constexpr features, better threading support, etc, etc). Thank you all for all of your hard work."

Others are not very impressed by the 'web_view' proposal, which introduces a facility that aims to enable natural, multimodal user interaction with the help of existing web standards and technologies. Another user added, "Very surprising, I didn't expect that because personally, I think that the proposal is not very good. If we use JS and other technologies to display stuff, why not directly use those languages? Why go through C++? But maybe I don't understand it; I'll make sure to go through the minutes."

You can read the full report posted by the ISO C++ Committee for more details.

ISO C++ Committee announces that C++20 design is now feature complete
GCC 9.1 releases with improved diagnostics, simpler C++ errors and much more
Code completion suggestions via IntelliCode comes to C++ in Visual Studio 2019