Tech News - Programming

573 Articles

Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks

Amrata Joshi
29 Aug 2019
3 min read
Today, the team at ActiveState, a software company known for building Perl, Python, and Tcl runtime environments, introduced the ActiveState Platform Command Line Interface (CLI), the State Tool. This new CLI tool aims to automate manual tasks such as the setup of development and test systems. With it, all the instructions in a project's README can be reduced to a single command.

How can the State Tool benefit developers?

Eases ad hoc tasks
The State Tool addresses the tasks that trouble developers at project setup, and environment setups that don't work the first time. It also helps developers manage dependencies, system libraries, and other such chores that affect productivity and usually eat into developers' coding time. The State Tool can be used to automate the ad hoc tasks that developers run into on a daily basis.

Deployment of runtime environments
With this tool, developers can deploy a consistent runtime environment into a virtual environment on their machine and across CI/CD systems with a single command.

Sharing secrets and cross-platform scripts
Developers can centrally create secrets that are securely shared among team members without resorting to a password manager, email, or Slack. They can also create and share cross-platform scripts that include those secrets for kicking off builds and running tests, as well as for simplifying and speeding up common development tasks. Secrets are incorporated into scripts simply by referencing their names.

Automation of workflows
All the workflows that developers handle can now be centrally automated with this tool.

Jeff Rouse, vice president of product management, said in a statement, “Developers are a hardy bunch. They suffer through a thousand annoyances at project startup/restart time, but soldier on anyway. It’s just the way things have always been done. With the State Tool, it doesn’t have to stay that way. The State Tool addresses all the hidden costs in a project that sap developer productivity. This includes automating environment setup to secrets sharing, and even automating the day to day scripts that everyone counts on to get their jobs done. Developers can finally stop solving the same annoying problems over and over again, and just rely on the State Tool so they can spend more time coding.”

To know more about this news, check out the official page.

The Julia team shares its finalized release process with the community

Bhagyashree R
29 Aug 2019
4 min read
The discussions around the Julia release process started last year when Julia hit 1.0. Yesterday, Stefan Karpinski, one of Julia's core developers, shared the finalized release process, giving details on the kinds of releases, the stages of the release process, the phases of a release, and more. “This information is collected from a small set of posts on discourse and conversations on Slack, so the information exists “out there”, but this blog post brings it all together in a single place. We may turn this post into an official document if it’s well-received,” Karpinski wrote.

Types of Julia releases
As with most programming languages that follow Semantic Versioning (SemVer), Julia has three types of releases: patch, minor, and major.

A patch release is represented by the last digit of Julia's version number. It includes things like bug fixes, low-risk performance improvements, and documentation updates. The team plans to release a patch every month for the currently active release branches, though this will depend on the number of bug fixes. The team also plans to run PackageEvaluator (PkgEval) on the backports five days prior to each patch release. PkgEval is used to run tests for every registered package, update the web pages of Julia packages, and create status badges.

A minor release is represented by the middle digit of Julia's version number. Along with some bug fixes and new features, it includes changes that are unlikely to break user code or the package ecosystem. Any significant refactoring of the internals is also included in a minor release. Since minor releases are branched every four months, developers can expect three minor releases every year.

A major release is represented by the first digit of Julia's version number. Typically, major releases consist of breaking changes, but the team assures they will be introduced only when there is an absolute need, for instance, fixing API design mistakes. A major release may also include low-level changes that can end up breaking some libraries but are essential for fundamental improvements to the language.

Julia's release process
There are three phases in the Julia release process. The development phase takes 1-4 months, during which new features are introduced and bugs are fixed. Before the feature freeze, alpha (early preview) and beta (later preview) versions are released for developers to test and give feedback on. After the feature freeze, a new unstable release branch is created; from then on, new features are merged onto the master branch, while bug fixes go onto the release branch.

The second phase, stabilization, also takes 1-4 months, during which all known release-blocking bugs are fixed and release candidates are built. Each candidate is then checked for further release-blocking bugs for one week, and if none are found, a final release is announced. After this starts the maintenance phase, in which bug fixes are backported to the release branch. This continues until that release branch is declared unmaintained.

To ensure the quality of releases while maintaining a predictable release rate, the Julia team overlaps the development and stabilization phases. “The development phase of each release is time-boxed at four months and the development phase of x.(y+1) starts as soon as the development phase for x.y is over. Come rain or shine we have a new feature freeze every four months: we pick a day and you’ve got to get your features merged by that day. If new features aren’t merged, they’re not going in the release. But that’s ok, they’ll go in the next one,” explains Karpinski.

Talking about long-term support, Karpinski wrote that there will be four active branches. The master branch is where all new features, bug fixes, and breaking changes go. The unstable release branch includes all the active bug fixing and performance work that happens prior to the next minor release. The stable release branch is where the most recently released minor or major version lives. The fourth is the long-term support (LTS) branch, currently Julia 1.0, which continues to get applicable bug fixes until it is announced to be unmaintained.

Karpinski also shared the different fault tolerance personas in Julia. Check out his post on the Julia blog to get a better understanding of the Julia release process.
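As an aside, the major/minor/patch scheme described above maps directly onto Julia's built-in VersionNumber literals, which makes it easy to guard code on a minimum release. A minimal sketch (plain Julia; nothing here is specific to the release-process post):

```julia
# VersionNumber literals expose the SemVer components discussed above.
v = v"1.2.3"
println(v.major)  # 1: breaking changes arrive only in major releases
println(v.minor)  # 2: a new minor release roughly every four months
println(v.patch)  # 3: monthly bug-fix patch releases

# Guarding code against a minimum Julia release:
if VERSION >= v"1.2.0"
    println("running on Julia 1.2 or later")
end
```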

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

Vincy Davis
29 Aug 2019
4 min read
Yesterday, the Program Manager at TypeScript, Daniel Rosenwasser, announced the release of TypeScript 3.6. This is a major release of TypeScript, containing many new language and compiler features such as stricter generators, more accurate array spread, improved UX around Promises, better Unicode support for identifiers, and more. TypeScript 3.6 also brings a new TypeScript playground, new editor features, and several breaking changes. TypeScript 3.6 beta was released last month.

Language and compiler improvements

Stricter checking for iterators and generators
Previously, users of generators in TypeScript could not differentiate whether a value was yielded or returned from a generator. In TypeScript 3.6, thanks to changes in the Iterator and IteratorResult type declarations, a new Generator type has been introduced: an Iterator that always has both the return and throw methods present. This allows the stricter generator checker to easily understand the difference between the values coming from an iterator. TypeScript 3.6 also infers certain uses of yield within the body of a generator function, and the yield expression can be typed explicitly to enforce the type of values that can be returned, yielded, and evaluated.

More accurate array spread
For pre-ES2015 targets, TypeScript offers the --downlevelIteration flag to emit iterative constructs for arrays; however, many users found it undesirable that the default emit produced arrays with no defined property slots. To address this problem, TypeScript 3.6 presents a new __spreadArrays helper. It will “accurately model what happens in ECMAScript 2015 in older targets outside of --downlevelIteration.”

Improved UX around Promises
TypeScript 3.6 brings new improvements around the Promise API, one of the most common ways to work with asynchronous data. TypeScript's error messages will now inform the user if the content of a Promise needs a then() or await before being passed to another function, and quick fixes will be provided in some cases.

Better Unicode support for identifiers
TypeScript 3.6 contains better support for Unicode characters in identifiers when emitting to ES2015 and later targets.

import.meta support in SystemJS: The new version supports the transformation of import.meta to context.meta when the module target is set to system.

get and set accessors are allowed in ambient contexts: Previous versions of TypeScript did not allow the use of get and set accessors in ambient contexts. This has been changed in TypeScript 3.6, since ECMAScript's class fields proposal has behavior that differs from existing versions of TypeScript. The official post also adds, “In TypeScript 3.7, the compiler itself will take advantage of this feature so that generated .d.ts files will also emit get/set accessors.”

New functions in the TypeScript playground
The TypeScript playground allows users to compile TypeScript and check the JavaScript output. It has more compiler options than typescriptlang, and all the strict options are turned on by default in the playground. The following new functions have been added to the TypeScript playground:
- The target option, which allows users to switch from es5 to es3, es2015, esnext, etc.
- All the strictness flags
- Support for plain JavaScript files

The post also states that future versions of TypeScript can be expected to bring more features, like JSX support and polished automatic type acquisition.

Breaking changes
- Class members named "constructor" are now simply constructor functions.
- DOM updates: the global window is no longer defined as type Window; instead, it is defined as type Window & typeof globalThis.
- In JavaScript files, TypeScript will only consult immediately preceding JSDoc comments to figure out declared types.
- TypeScript 3.6 no longer allows certain escape sequences.

Developers have liked the new features in TypeScript 3.6.
https://twitter.com/zachcodes/status/1166840093849473024
https://twitter.com/joshghent/status/1167005999204638722
https://twitter.com/FlorianRappl/status/1166842492718899200

Interested users can check out TypeScript's six-month roadmap. Visit the Microsoft blog for full details of TypeScript 3.6.
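To make the stricter generator checking concrete, here is a small illustrative sketch (not from the announcement) using the Generator<Yield, Return, Next> type introduced in 3.6; the function and variable names are invented:

```typescript
// Generator<Yield, Return, Next>: TS 3.6 distinguishes the yielded type,
// the returned type, and the type passed into next().
function* counter(limit: number): Generator<number, string, boolean> {
  let n = 0;
  while (n < limit) {
    const reset = yield n; // `yield` evaluates to the boolean given to next()
    n = reset ? 0 : n + 1;
  }
  return "done";
}

const it = counter(3);
let step = it.next();
while (!step.done) {
  const current: number = step.value; // narrowed to the yielded type
  console.log(current);
  step = it.next(false);
}
const finalMessage: string = step.value; // narrowed to the return type
console.log(finalMessage);
```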

‘Npm install funding’, an experiment to sustain open-source projects with ads on the CLI terminal faces community backlash

Fatema Patrawala
28 Aug 2019
5 min read
Last week, one of the npm open source authors and maintainers, software developer Feross, announced an “npm install funding” experiment. Essentially, this enabled sponsors to advertise on npm package install terminals. In turn, the money raised from these ads would ensure npm maintainers are paid for their important contributions, keeping packages up to date, reliable, and secure.

Feross wrote on the GitHub page, “I think that the current model of sustaining open source is not working and we need more experimentation. This is one such experiment.” He further wrote that if this experiment works, it could help make all open source healthier, too. For complex reasons, companies are generally hesitant or unwilling to fund OSS directly. When it does happen, it's never enough, and it never reaches packages with transitive dependencies (i.e. packages that no one installs explicitly and therefore no one knows exist). Feross believes that npm is essentially a public good, as it is consumed by huge numbers of users but no one pays for it, and in his view a funding model that usually works for public goods like this is advertising. But how does it work?

How was 'npm install funding' planned to work?
Feross's idea was that when developers install a library via the npm JavaScript package manager, they get a giant banner advertisement in their terminal. [Screenshot: GitHub thread] Feross asked companies to promote ads on the installation terminals of JavaScript packages that had expressed interest in participating in the funding experiment. The idea is that companies buy ad space in people's terminals, and the funding project then shares its profits with the open source projects that signed up to show the ads, as per ZDNet. Linode and LogRocket agreed to participate in the experiment, which ran on a few open source projects that Feross maintains; one of them was StandardJS 14.

Feross raised $2,000 in npm install funds
Feross had so far earned $2,000 for his time spent releasing Standard 14, which took him five days. If he was able to raise additional funds, his next focus was TypeScript support in StandardJS (one of the most common feature requests) and modernizing the various text editor plugins (many of which are currently unmaintained).

The community did not support ads in the CLI, and the experiment came to a halt
As per ZDNet reports, the developer community has been debating this idea, with arguments on both sides: some see it as a good way to finance projects, while others are completely against seeing ads in their terminals. Most of the negative comments on this funding scheme came from developers dissatisfied that these post-install ad banners would make their way into logs, complicating app debugging unnecessarily.

Robert Hafner, a developer from California, commented on a GitHub thread, "I don't want to have to view advertisements in my CI logs, and I hate what this would mean if other packages started doing this. Some JS packages have dozens, hundreds, or even more dependencies- can you imagine what it would look like if every package did this?"

Some developers took it a step further and created the world's first ad blocker for a command line interface.
https://twitter.com/dawnerd/status/1165330723923849216

They also put pressure on Linode and LogRocket to stop showing the ads, and Linode eventually decided to drop out.
https://twitter.com/linode/status/1165421512633016322

Additionally, users on Hacker News were confused about the initiative and curious about how it would actually work out. One of them commented, “The sponsorship pays directly for maintainer time. That is, writing new features, fixing bugs, answering user questions, and improving documentation. As far as I can tell, this project is literally just a 200 line configuration file for a linter. Not even editor integrations for the linter, just a configuration file for it. Is it truly something that requires funding to 'add new features'? How much time does it take out of your day to add a new line of JSON to a configuration file, or is the sponsorship there to pay for all the bikeshedding that's probably happening in the issues and comments on the project? What sort of bugs are there in a linter configuration file? I'm really confused by all of this. > The funds raised so far ($2,000) have paid for Feross's time to release Standard 14 which has taken around five days. Five days to do what? Five full 8 hour days? Does it take 5 days to cut a GitHub release and push it to NPM? What about the other contributors that give up their time for free, are their contributions worthless? Rather than feeling like a way to support FOSS developers or FOSS projects, it feels like a rather backhanded attempt at monetization by the maintainer where Standard was picked out because it was his most popular project, and therefore would return the greatest advertising revenue. Do JavaScript developers, or people that use this project, have a more nuanced opinion than me? I do zero web development, is this type of stuff normal?”

After continuous backlash from the developer community, the project has come to a halt, and no messages are promoted on the CLI. It is clear that while open source funding remains a major pain point for developers and maintainers, people don't really like ads in their CLI terminals.

Kotlin 1.3.50 released with ‘duration and time Measurement’ API preview, Dukat for npm dependencies, and much more!

Savia Lobo
27 Aug 2019
6 min read
On August 22, the JetBrains team announced the release of Kotlin 1.3.50. Some of the major improvements in this version include a preview of the new duration and time measurement API in the standard library, experimental generation of external declarations for npm dependencies via Dukat in Gradle Kotlin/JS projects, a separate plugin for debugging Kotlin/Native code in IntelliJ IDEA Ultimate, and much more. The team has also worked on improving the Java-to-Kotlin converter and on Java compilation support in multiplatform projects. Let us have a look at these improvements in brief.

Major improvements in Kotlin 1.3.50

Changes in the standard library

Experimental preview of the duration and time measurement API
A new duration and time measurement API is available for preview. The team explains that if an API expects a duration stored as a primitive value like Long, one can erroneously pass the value in the wrong unit, and the type system doesn't help prevent that. Creating a regular class to store durations solves this problem but brings another one: additional allocations. With the new API, functions can use the Duration type, and all clients need to specify the time in the desired units explicitly.

This release brings support for MonoClock, which represents a monotonic clock that doesn't depend on the system time. A monotonic clock can only measure the time difference between given time points; it doesn't know the “current time.” The Clock interface provides a general API for measuring time intervals, and MonoClock is an object implementing Clock that provides the default source of monotonic time on different platforms. When using the Clock interface, the user explicitly marks the time an action starts and later reads the time elapsed from that start point. This is especially convenient when starting and finishing a measurement from different functions. To know more about this feature in detail, read the Kotlin KEEP on GitHub.

Experimental API for bit manipulation
The standard library now contains an experimental API for bit manipulation. Similar extension functions for Int, Long, Short, Byte, and their unsigned counterparts have been added.

IntelliJ IDEA support in Kotlin 1.3.50

Improvements in the Java-to-Kotlin converter
This release includes a preview of a new Java-to-Kotlin converter that minimizes the amount of “red code” one has to fix manually after conversion. The improved converter tries to infer nullability more correctly, based on the Java type usages in the code. The goal is to decrease the number of compilation errors and make the produced Kotlin code more convenient to use. The new converter fixes many other known bugs, too; for instance, it now correctly handles implicit Java type casts. It may become the default converter in the future; to turn it on, specify the Use New J2K (experimental) flag in settings.

Debugging improvements
In Kotlin 1.3.50, the team has improved how the Kotlin “Variables” view chooses variables to display. As there's a lot of additional technical information in the bytecode, the Kotlin “Variables” view highlights only the relevant variables. Local variables inside a lambda, as well as captured variables from the outer context and parameters of the outer function, are correctly displayed. [Screenshot: jetbrains.com]

Kotlin 1.3.50 also adds improved support for the “Evaluate expression” functionality in the debugger for many non-trivial language features, such as local extension functions or accessors of member extension properties. Users can now modify variables via “Evaluate expression”. [Screenshot: jetbrains.com]

New intentions and inspections
This release adds new intentions and inspections. One of the goals of intentions is to help users learn how to write idiomatic Kotlin code. The following intention, for instance, suggests using the indices property rather than building a range of indices manually. [Screenshot: jetbrains.com]

Updates to Kotlin/JS
Kotlin 1.3.50 adds support for building and running Kotlin/JS Gradle projects using the org.jetbrains.kotlin.js plugin on Windows. Users can build and run projects using Gradle tasks; npm dependencies declared in the Gradle configuration are resolved and included; users can also try out their applications using webpack-dev-server, and much more. The team has also improved the incremental compilation time for Kotlin/JS projects: users can expect speedups of up to 30% compared to 1.3.41. This version also improves integration with npm: projects are now resolved lazily and in parallel, and support has been added for projects with transitive dependencies between compilations in the same project.

Kotlin 1.3.50 also changes the structure and naming of generated artifacts. They are now bundled in the distributions folder and include the version number of the project and archiveBaseName (which defaults to the project name), e.g. projectName-1.0-SNAPSHOT.js.

Using Dukat for automatic conversion of TypeScript declaration files
Dukat allows the automatic conversion of TypeScript declaration files (.d.ts) into Kotlin external declarations. This makes it more comfortable to use libraries from the JavaScript ecosystem in a type-safe manner in Kotlin, reducing the need to manually write wrappers for JS libraries. Kotlin/JS now ships with experimental support for Dukat integration in Gradle projects: when running the Gradle build task, typesafe wrappers are automatically generated for npm dependencies and can be used from Kotlin. As Dukat is still at a very early stage, its integration is disabled by default. The team has prepared an example project that demonstrates the use of Dukat in Kotlin/JS projects.

Updates to Kotlin/Native
Previously, the version of Kotlin/Native differed from the version of Kotlin. In this release, the version schemes for Kotlin and Kotlin/Native are aligned: version 1.3.50 covers both Kotlin and the Kotlin/Native binaries, reducing complexity. This release brings more pre-imported Apple frameworks for all platforms, including macOS and iOS. The Kotlin/Native compiler now includes actual bitcode in produced frameworks, and several performance improvements have been made in the interop tool.

The team has also announced that null-check optimizations are planned for Kotlin 1.4. Starting from Kotlin 1.4, "all runtime null checks will throw a java.lang.NullPointerException instead of a KotlinNullPointerException, IllegalStateException, IllegalArgumentException, and TypeCastException. This applies to: the !! operator, parameter null checks in the method preamble, platform-typed expression null checks, and the as operator with a non-null type. This doesn’t apply to lateinit null checks and explicit library function calls like checkNotNull or requireNotNull."

Apart from the changes mentioned, Java compilation can now be included in Kotlin/JVM targets of a multiplatform project by calling the newly added withJava() function of the DSL. This release also adds multiple features and improvements in scripting and REPL support. To know more about these and other changes in detail, read the Kotlin 1.3.50 official blog post.
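To illustrate the duration and clock types described above, here is a minimal sketch using the names from the 1.3.50 announcement; the API was experimental (and was reworked in later Kotlin versions), so treat the exact imports and signatures as assumptions:

```kotlin
import kotlin.time.ExperimentalTime
import kotlin.time.MonoClock
import kotlin.time.seconds

@ExperimentalTime
fun main() {
    // The unit lives in the type, not in a bare Long:
    val timeout = 5.seconds
    println("timeout = $timeout")

    // MonoClock is monotonic, so wall-clock adjustments cannot skew it.
    val mark = MonoClock.markNow()
    var acc = 0L
    repeat(1_000_000) { acc += it }   // some work to measure
    println("took ${mark.elapsedNow()} (acc=$acc)")
}
```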

Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Amrata Joshi
27 Aug 2019
3 min read
Last week, the team at Red Hat, a provider of enterprise open source solutions, announced the general availability of Red Hat OpenShift Service Mesh for connecting, managing, observing, and simplifying service-to-service communication of Kubernetes applications on Red Hat OpenShift 4. The OpenShift Service Mesh is based on the Istio, Kiali, and Jaeger projects and is designed to deliver an end-to-end developer experience around microservices-based application architectures. It manages the network connections between containerized and decentralized applications, and it eases the complex task of implementing bespoke networking services for applications and business logic.

Larry Carvalho, research director at IDC, said in a statement to Business Wire, “Service mesh is the next big area of disruption for containers in the enterprise because of the complexity and scale of managing interactions with interconnected microservices. Developers seeking to leverage Service Mesh to accelerate refactoring applications using microservices will find Red Hat’s experience in hybrid cloud and Kubernetes a reliable partner with the Service Mesh solution."

Developers can now improve the implementation of microservice architectures by natively integrating the service mesh into the OpenShift Kubernetes platform. The OpenShift Service Mesh improves traffic management by including service observability and visualization of the mesh topology.

Ashesh Badani, Red Hat's senior VP of Cloud Platforms, said in a statement, "The addition of Red Hat OpenShift Service Mesh allows us to further enable developers to be more productive on the industry's most comprehensive enterprise Kubernetes platform by helping to remove the burdens of network connectivity and management from their jobs and allowing them to focus on building the next-generation of business applications."

Features of Red Hat OpenShift Service Mesh

Tracing
OpenShift Service Mesh features tracing based on Jaeger, an open, distributed tracing system. Tracing helps developers track a request as it passes between services, providing insight into the request process from start to finish.

Visualization and observability
The Service Mesh also provides an easier way to view its topology and observe how services interact. Visualization helps in understanding how services are managed and how traffic flows in near-real time, which makes management and troubleshooting easier.

Service Mesh installation and configuration
OpenShift Service Mesh features “one-click” installation and configuration via the Service Mesh Operator and an Operator Lifecycle Management framework, so developers can deploy applications into a service mesh more easily. The Service Mesh Operator deploys Istio, Jaeger, and Kiali together, minimizing management burdens and automating tasks such as installation, service maintenance, and lifecycle management.

Developed with open projects
OpenShift Service Mesh is developed with open projects and built in collaboration with leading members of the Kubernetes community.

Increased developer productivity
The Service Mesh integrates communication policies without requiring changes to application code or the integration of language-specific libraries.

To know more about Red Hat OpenShift Service Mesh, check out the official website.

Introducing Nushell: A Rust-based shell

Savia Lobo
26 Aug 2019
3 min read
On August 23, Jonathan Turner, an Azure SDK developer, introduced a new shell written in Rust, called Nushell or ‘Nu’. This Rust-based shell is inspired by the “classic Unix philosophy of pipelines, the structured data approach of PowerShell, functional programming, systems programming, and more,” Turner writes in his official blog.

The idea for Nushell struck when Turner's friend Yehuda Katz demonstrated the workings of PowerShell and asked Turner to join his project: what if they could take the ideas of a structured shell and make it more functional (as opposed to object-oriented)? What if, like PowerShell, it worked on Windows, Linux, and macOS? What if it had great error messages?

Turner highlights the fact that “everything in Nu is data”: when users try other commands, they realize they are using the same commands to filter, to sort, and so on. Rather than having to remember all the parameters to all the commands, they can just use the same verbs to act over their data, regardless of where the data came from. Nu also understands structured text files like JSON, TOML, and YAML, and allows users to manipulate their data, and much more. “You get used to using the verbs, and then you can use them on anything. When you’re ready, you can write it back to disk,” Turner writes.

Nu also supports opening and inspecting text and binary data. On opening a source file, users can scroll around in a syntax-highlighted view; on opening an XML file, they can look at its data; they can even open a binary file and look at what's inside.

Turner mentions that there is a lot one might want to explore with Nushell. Hence, the team has released Nu with the ability to extend it with plugins: Nu looks for these plugins in your path and loads them up on startup. The Rust language is the backbone of this project, and Nushell would not have been possible without Rust, Turner exclaims. Nu internally uses async/await and async streams, and makes liberal use of serde to manage serializing and deserializing into the common data format and to communicate with plugins.

The Nushell GitHub page reads, “This project has reached a minimum-viable product level of quality. While contributors dogfood it as their daily driver, it may be unstable for some commands. Future releases will work to fill out missing features and improve stability. Its design is also subject to change as it matures.”

The team will further work towards stability, the ability to use Nu as a main shell, the ability to write functions and scripts in Nu, and much more. Users can also read the book on Nu, available in both English and Spanish. To know more about this news in detail, head over to Jonathan Turner's official blog post or visit Nushell's GitHub page.
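For flavor, here is a hypothetical Nu session in the spirit of the pipelines described above; the verbs shown (ls, where, sort-by, open, get) appear in early Nu documentation, but the exact preview-era syntax is an assumption:

```nu
# Filter and sort the structured output of ls, treating rows as data:
ls | where size > 1kb | sort-by modified

# Pull a single field out of a structured TOML file:
open Cargo.toml | get package.version
```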

Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

Vincy Davis
22 Aug 2019
2 min read
Yesterday, the Qt team introduced a new graphics toolkit called Qt for MCUs for creating fluid user interfaces (UIs) on cost-effective microcontrollers (MCUs). The toolkit will enable new and existing users to take advantage of the existing Qt tools and libraries used for device creation, thus enabling companies to provide a better user experience.

Petteri Holländer, the Senior Vice President of Product Management at Qt, said, “With the introduction of Qt for MCUs, customers can now use Qt for almost any software project they’re working on, regardless of target – with the added convenience of using just one technology framework and toolset.” He further added, “This means that both existing and new Qt customers can pursue the many business growth opportunities offered by connected devices – across a wide and diverse range of industries.”

Qt for MCUs utilizes the Qt Modeling Language (QML) and Qt's design and development tools for building fast, customized applications. “With the frontend defined in declarative QML and the business logic implemented in C/C++, the end result is a fluid graphical UI application running on microcontrollers,” says the Qt team. A minimal QML sketch of this split follows below.

Key benefits offered by Qt for MCUs:
- Existing skill sets can be reused for Qt on microcontrollers
- The same technology can be used in high-end and mass-market devices, yielding low maintenance costs
- No compromise on graphics performance, hence reduced hardware costs
- Users can upgrade from a legacy solution to a cross-platform graphical toolkit

Check out the Qt for MCUs website for more information.
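Here is that sketch: a generic QtQuick snippet showing the declarative-frontend style (Qt for MCUs supports its own QML subset, so treat this as illustrative rather than MCU-ready):

```qml
// A declarative UI: layout and state live in QML, while business logic
// would be implemented in C/C++ and exposed to this frontend.
import QtQuick 2.0

Rectangle {
    width: 320; height: 240
    color: "steelblue"

    Text {
        anchors.centerIn: parent
        text: "Hello from the MCU"
        color: "white"
    }
}
```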

Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

Vincy Davis
21 Aug 2019
3 min read
Yesterday, the team behind Julia announced the release of Julia v1.2. It is the second minor release in the 1.x series and has new features such as argument splatting, support for Unicode 12, and a new ⋆ (star) unary operator. Julia v1.2 also has many performance improvements, with marginal and undisruptive changes. The post states that Julia v1.2 will not have long-term support: “As of this release, 1.1 has been effectively superseded by 1.2, which means there will not likely be any further 1.1.x releases. Our good friend 1.0 is still currently the only long-term support version.”

What's new in Julia v1.2
- Argument splatting (x...) can now be used in calls to the new pseudo-function in constructors.
- Support for Unicode 12 has been added.
- A new unary operator ⋆ (star) has been added.

New library functions
- Partially-applied comparison functions !=(x), >(x), >=(x), <(x), and <=(x) have been added; each returns the corresponding one-argument predicate.
- A new getipaddrs() function has been added to return all the IP addresses of the local machine, including the IPv4 addresses.
- New library functions Base.hasproperty and Base.hasfield have been added.

Other improvements in Julia v1.2

Multi-threading changes
- It is now possible to schedule and switch tasks during @threads loops, and to perform limited I/O.
- A new thread-safe replacement for the Condition type has been added; it can be accessed as Threads.Condition.

Standard library changes
- The extrema function now accepts a function argument, in the same way as minimum and maximum.
- The hasmethod method can now check for matching keyword argument names.
- The mapreduce function now accepts multiple iterators.
- Functions that invoke commands, like run(::Cmd), now raise a ProcessFailedException rather than an ErrorException.
- A new no-argument constructor for Ptr{T} has been added, constructing a null pointer.

Jeff Bezanson, Julia co-creator, says, “If you maintain any packages, this is a good time to add CI for 1.2, check compatibility, and tag new versions as needed.”

Users are happy with the Julia v1.2 release and are full of praise for the Julia language. A user on Hacker News comments, “Julia has very well thought syntax and runtime I hope to see it succeed in the server-side web development area.” Another user says, “I’ve recently switched to Julia for all my side projects and I’m loving it so far! For me the killer feature is the seamless GPUs integration.”

For more information on Julia v1.2, head over to its release notes.
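To make two of the standard-library changes above concrete, a minimal sketch (runs on Julia 1.2 or later):

```julia
xs = [3, -1, 4, -1, 5]

# extrema now accepts a function argument, like minimum/maximum:
lo, hi = extrema(abs, xs)        # (1, 5)

# The partially-applied comparisons pair well with higher-order functions:
positives = filter(>(0), xs)     # [3, 4, 5]

println((lo, hi, positives))
```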

Announcing ‘async-std’ beta release, an async port of Rust's standard library

Bhagyashree R
20 Aug 2019
3 min read
Last week, Stjepan Glavina, a Rust programmer at Ferrous Systems, announced that the ‘async-std’ library has reached its beta phase. This library has the look and feel of Rust's standard library but replaces its components with their async counterparts.

The current state of asynchronous programming in Rust
Asynchronous code facilitates the execution of multiple tasks concurrently on the same OS thread. It can potentially make your application much faster while using fewer resources compared to a corresponding threaded implementation. Rust's asynchronous ecosystem is still in its early days: the standard library's Future trait was recently stabilized, and the async/await feature will be landing in a future version.

Why async-std was introduced
Rust's Future trait is often considered difficult to understand, not because it is complex but because it is something people are not used to. Describing what makes Futures confusing, the book accompanying the ‘async-std’ library states, “Futures have three concepts at their base that seem to be a constant source of confusion: deferred computation, asynchronicity and independence of execution strategy.”

The ‘async-std’ library, together with its supporting libraries, aims to make asynchronous programming easier in Rust. It is based on Future, supports a set of traits from the futures library, and is designed to support the new async programming model slated to be stabilized in Rust 1.39. The library serves as an interface to all important primitives, including filesystem operations, network operations, and concurrency basics like timers. In addition to async variations of the I/O primitives found in std, it comes with async versions of concurrency primitives like Mutex and RwLock. It also ships with a ‘tasks’ module that performs a single allocation per spawned task and can await the result of a task without the need for an extra channel.

Speaking about the learning curve of async-std, Glavina said, “By mimicking standard library's well-understood APIs as closely as possible, we hope users will have an easy time learning how to use async-std and switching from thread-based blocking APIs to asynchronous ones. If you're familiar with Rust's standard library, very little should come as a surprise.”

The library received a mixed reaction from the community. One user said, “In fact, Rust does have a great solution for non-blocking code: just use threads! Threads work great, they are very fast on Linux, and solutions such as goroutines are just implementations of threads in userland anyway... People tell me that Rust services scale up to thousands of requests per second on Linux by just using 1:1 threads.”

A Rust developer on Reddit commented, “Looks good. I'm hoping we can soon see this project, the futures crate, async WGs crates and Tokio converge to build unified async foundations, reduce duplicated efforts (and avoid seeing dependencies explode when using several crates using async together). It's unclear to me why apparently similar crates are popping up, but I hope this is just temporary explorations of async that will merge together.”

Check out the official announcement to know more about the async-std library. Also, check out its book: Async programming in Rust with async-std.
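As a taste of the std-like surface described above, here is a minimal sketch. It assumes the async_std crate as a dependency, and at the time of the beta it would have needed a nightly toolchain, since async/await only stabilized in Rust 1.39:

```rust
use async_std::fs::File;
use async_std::prelude::*; // brings in the async read/write extension traits
use async_std::task;

fn main() -> std::io::Result<()> {
    // block_on drives a future to completion, acting as a tiny runtime entry point.
    task::block_on(async {
        let mut file = File::open("Cargo.toml").await?;
        let mut contents = String::new();
        file.read_to_string(&mut contents).await?; // async, but reads like std
        println!("read {} bytes", contents.len());
        Ok(())
    })
}
```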

Git 2.23 released with two new commands ‘git switch’ and ‘git restore’, a new tutorial, and much more!

Amrata Joshi
19 Aug 2019
4 min read
Last week, the team behind Git released Git 2.23, which comes with experimental commands, backward-compatibility fixes, and much more. This release received contributions from over 77 contributors, 26 of them new.

What's new in Git 2.23?

Experimental commands
This release comes with a new pair of experimental commands, git switch and git restore, providing a better interface for git checkout. “Two new commands "git switch" and "git restore" are introduced to split "checking out a branch to work on advancing its history" and "checking out paths out of the index and/or a tree-ish to work on advancing the current history" out of the single "git checkout" command,” the official mail thread reads. git checkout can be used to change branches with git checkout <branch>, and if the user doesn't want to switch branches, it can change individual files too. The new commands separate the responsibilities of git checkout into two narrower categories: operations that change branches, and operations that change files.

Backward compatibility
The --base option of format-patch is now compatible with git patch-id --stable.

git fast-export/import pair
The git fast-export/import pair can now handle commits with log messages in encodings other than UTF-8.

git clone --recurse-submodules
git clone --recurse-submodules has learned to set up the submodules to ignore commit object names recorded in the superproject gitlink.

git diff/grep
A pattern for extracting function names and word boundaries for Rust has been added to git diff/grep.

git fetch and git pull
git fetch and git pull now report when a fetch results in non-fast-forward updates, letting the user notice the unusual situation.

git status
The extra blank lines in git status output have been reduced.

Developer support
This release adds developer support for emulating unsatisfied prerequisites in tests, to ensure that the remainder of the tests still succeeds when tests with prerequisites are skipped.

A new tutorial for git-core developers
This release ships a new tutorial targeting aspiring git-core developers. The tutorial demonstrates the end-to-end workflow of creating a change to the Git tree, sending it for review, and making changes based on comments.

Bug fixes in Git 2.23
- Previously, git worktree add used to fail when another worktree connected to the same repository was corrupt; this has been corrected.
- An issue with file descriptors has been fixed.
- Parameter validation has been updated.
- The code for parsing scaled numbers out of configuration files has been made more robust and easier to follow.

Some users seem happy about the new changes. A user commented on Hacker News, “It's nice to hear that there appears to be progress being made in making git's tooling nicer and more consistent. Git's model itself is pretty simple, but the command line tools for working with it aren't and I feel that this fuels most of the "Git is hard" complaints.” Others are still skeptical about the new commands; another user commented, “On the one hand I'm happy on the new "switch" and "restore" commands. On the other hand, I wonder if they truly add any value other than the semantic distinction of functions otherwise present in checkout.”

To know more about this news in detail, read the official blog post on GitHub.
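To see how the split plays out in practice, the common git checkout idioms map onto the new commands like this (all of these invocations exist in Git 2.23):

```sh
# Branch operations move to `git switch`:
git switch main          # was: git checkout main
git switch -c topic      # create and switch; was: git checkout -b topic

# File operations move to `git restore`:
git restore README.md         # discard worktree changes to a file
git restore --staged app.c    # unstage a file; was: git reset HEAD app.c
```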

Rails 6 releases with Action Mailbox, Parallel Testing, Action Text, and more!

Vincy Davis
19 Aug 2019
4 min read
After a long wait, the stable version of Rails 6 is finally available. Five days ago, David Heinemeier Hansson, the Ruby on Rails creator, released the final version, which has major new features such as Action Mailbox, Action Text, parallel testing, and Action Cable testing. Rails 6 also has many minor changes, fixes, and upgrades in Railties, Action Pack, Action View, and more. This version requires Ruby 2.5.0+.

Hansson says, “While we took a little while longer with the final version than expected, the time was spent vetting that Rails 6 is solid.” He also notes that GitHub, Shopify, Basecamp, and other companies and applications have already been running the pre-release version of Rails 6 in production.
https://twitter.com/dhh/status/1162426045405921282

Major new features in Rails 6

Action Mailbox
This new framework routes incoming emails to controller-like mailboxes for processing in Rails. Action Mailbox ships with access to Amazon SES, Mailgun, Mandrill, Postmark, and SendGrid, and inbound mail can also be handled via the built-in Exim, Postfix, and Qmail ingresses. Inbound emails are turned into InboundEmail records using Active Record and can be routed asynchronously using Active Job to one or several dedicated mailboxes. To know more about the basics, head over to the Action Mailbox basics guide.

Action Text
Action Text includes the Trix editor, which handles formatting, links, quotes, lists, embedded images, and galleries. The rich text content it produces is saved in a RichText model associated with an existing Active Record model in the application. For an overview, read the Action Text overview guide.

Parallel testing
Parallel testing allows users to parallelize their test suite, reducing the time required to run it; forking processes is the default method, as the sketch after this article shows. To learn more, check out the parallel testing guide.

Action Cable testing
Action Cable testing tools allow users to test their Action Cable functionality at the connection, channel, and broadcast levels. For information on connection and channel test cases, head over to the testing Action Cable guide.

Other changes in Rails 6

Railties
Railties handles the bootstrapping process of a Rails application and provides the core of the Rails generators.
- Multiple-database support has been added to the rails db:migrate:status command.
- A new guard protects against DNS rebinding attacks.

Action Pack
The Action Pack framework handles and responds to web requests, and provides mechanisms for routing, controllers, and more.
- Rails 6 allows the use of #rescue_from for handling parameter parsing errors.
- A new middleware, ActionDispatch::HostAuthorization, guards against DNS rebinding attacks.

Developers are excited to use the new features introduced in Rails 6, especially parallel testing. A user on Hacker News comments, “Wow, Multiple DB and Parallel Testing is super productive. I hope framework maintainers of other language community should also get inspired by these features.” Another comment reads, “The multiple database support is really exciting. Anything that makes it easier to push database reads to replicas is huge.” Another user says, “Congrats to the Rails team! I can't praise Rails enough. Such a huge boost in productivity for prototyping or full production app. I use it for both work or side project. I can't imagine a world without it. Long live Rails!”

Twitter users are also praising the Rails 6 release.
https://twitter.com/tenderlove/status/1162566272271339521
https://twitter.com/AviShastry/status/1162755780229107713
https://twitter.com/excid3/status/1162426797046284288

To learn about the minor changes, fixes, and upgrades, check out the Ruby on Rails 6.0 release notes, and head over to the Ruby on Rails blog for more details about the release.
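For reference, enabling the parallel testing described above is a one-line change in the generated test helper; parallelize and its with: :threads variant are the documented Rails 6 API:

```ruby
# test/test_helper.rb
class ActiveSupport::TestCase
  # Fork one worker per processor core and partition the suite across them;
  # pass `with: :threads` to use threads instead of forked processes.
  parallelize(workers: :number_of_processors)
end
```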

Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more

Bhagyashree R
16 Aug 2019
4 min read
After releasing version 1.36.0 last month, the team behind Rust announced the release of Rust 1.37.0 yesterday. Among the highlights of this version are support for referring to enum variants via type aliases, a built-in cargo vendor, unnamed const items, profile-guided optimization, and more.

Key updates in Rust 1.37.0

Referring to enum variants through type aliases
Starting with this release, you can refer to enum variants through type aliases in expression and pattern contexts. Since Self behaves like a type alias in implementations, you can also refer to enum variants as Self::Variant.

Built-in Cargo support for vendored dependencies
Until now, the cargo vendor command was available as a separate crate. Starting with Rust 1.37.0, it is integrated directly into Cargo, the Rust package manager and crate host. This Cargo subcommand fetches all the crates.io and git dependencies for a project into the vendor/ directory and shows the configuration necessary to use the vendored code during builds.

Using unnamed const items for macros
Rust 1.37.0 allows you to create unnamed const items: instead of giving a constant an explicit name, you can name it '_'. This makes it easier to create ergonomic, reusable declarative and procedural macros for static-analysis purposes.

Support for profile-guided optimization
Rust's compiler, rustc, now supports profile-guided optimization (PGO) through the -C profile-generate and -C profile-use flags. PGO lets the compiler optimize code based on feedback from real workloads. It optimizes a program in two steps:
1. The program is first built with instrumentation inserted by the compiler, by passing the -C profile-generate flag to rustc. The instrumented program is then run on sample data, and the profiling data is written to a file.
2. The program is built again, this time feeding the collected profiling data to rustc with the -C profile-use flag. This build uses the collected data to let the compiler make better decisions about code placement, inlining, and other optimizations.

Choosing a default binary in Cargo projects
The cargo run command lets you run a binary or example of the local package, enabling quick testing of CLI applications. When multiple binaries are present in the same package, developers have to explicitly name the binary they want with the --bin flag, which makes cargo run less ergonomic, especially when one binary is called far more often than the others. To solve this, Rust 1.37.0 introduces a new key in Cargo.toml called default-run: declaring it in the [package] section makes cargo run default to the chosen binary when the --bin flag is not passed.

Developers have already started testing out the new release. A developer who used profile-guided optimization shared his experience on Hacker News: “The effect is very dependent on program structure and actual code running, but for a suitable application it's reasonable to expect anything from 5-15%, and sometimes much more (see e.g. Firefox reporting 18% here).” Others noted that async/await did not make this release: “Seems like async/await is going to slip into Rust 1.39 instead.” Another user said, “Congrats! Like many I was looking forward to async/await in this release but I'm happy they've taken some extra time to work through any existing issues before releasing it.”

Check out the official announcement by the Rust team to know more in detail.
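A minimal sketch of two of the features above, enum variants through type aliases and unnamed const items (illustrative names; compiles on Rust 1.37+):

```rust
enum Status {
    Active,
    Inactive,
}

type S = Status; // a type alias for the enum

fn is_active(s: S) -> bool {
    match s {
        S::Active => true, // referring to variants via the alias: new in 1.37
        S::Inactive => false,
    }
}

// An unnamed const item (also new in 1.37); macros can emit any number of
// these without having to invent unique names.
const _: () = ();

fn main() {
    println!("{}", is_active(S::Active));
}
```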

Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency

Vincy Davis
14 Aug 2019
4 min read
Last week, Poetry, a dependency management and packaging tool for Python, released version 1 beta 1. Before venturing into the details of this release, let's take a brief look at Python, its issues with dependency management, and Pipenv and Poetry in general.

There's no doubt that Python is loved by many developers. It is considered one of the top-rated programming languages, with benefits like an extensive support library, less-complex syntax, high productivity, excellent integration features, and many more. Though it has been rated as one of the fastest growing programming languages in 2019, there are some problems with Python which, if rectified, can make it more powerful and accessible to users. Python's poor dependency management is one such issue.

Dependency management helps in managing all the libraries required to make an application work. It becomes extremely necessary when working on a complex project or in a multi-environment setup. An ideal dependency management tool assists in tracking and updating libraries faster and more easily, as well as in solving package dependency issues. Python's built-in tooling requires users to create a virtual environment to keep dependencies separate, to add version numbers manually in every file, and offers no way to parallelize dependency installation, among other shortcomings.

To combat these issues, Python now has two maturing dependency management tools called Pipenv and Poetry. Each of these tools simplifies the process of creating a virtual environment and sorting dependencies.

The PyPA-endorsed Pipenv automatically creates and manages a virtualenv for user projects. It also adds/removes packages from the Pipfile as a user installs/uninstalls packages. Its main features include automatically generating a Pipfile and a Pipfile.lock if one doesn't exist, creating a virtualenv, and adding packages to a Pipfile when installed, to name a few.

On the other hand, the Poetry dependency management tool uses a single pyproject.toml file to manage all the dependencies. Poetry allows users to declare the libraries that their project depends on, and Poetry will automatically install/update them for the user. It allows projects to be published directly to PyPI, easy tracking of the state of dependencies, and more.

New features in Poetry v1 beta 1

The major highlight of Poetry v1 beta 1 is the newly added support for URL dependencies. This is a significant feature for Python users: a URL dependency can be added to a current project via the add command or by modifying the pyproject.toml file directly.

Other features in Poetry v1 beta 1

- Support for publishing to PyPI using API tokens
- Licenses can be identified by their full name
- Settings can be specified with environment variables
- Settings no longer need to be prefixed by settings when using the config command

Users, in general, are quite happy with the Poetry dependency management tool for Python, as can be seen in the user reactions below from Hacker News.

A comment on Hacker News reads, "I like how transparent poetry is about what's happening when you run it, and how well presented that information is. I've come to loathe pipenv's progress bar. Running it in verbose mode isn't much better. I can't be too mad at pipenv, but all in all poetry is a better experience."

Another user says, "Poetry is very good. I think projects should use it. I hope the rest of the ecosystem can catch up quickly. Tox and pip and pex need full support for PEP 517/518."

Another user comments, "When you run poetry, it activates the virtualenv before it runs whatever you wanted. So `poetry add` (its version of pip install) doesn't require you to have the virtualenv active. It will activate it, run the install, and update your dependency specifications in pyproject.toml. You can also do `poetry run` and it will activate the virtualenv before it runs whatever shell command comes after. Or you can do `poetry shell` to run a shell inside the virtualenv. I like the seamless integration, personally."

Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”
PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more
NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
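As a rough sketch of what a URL dependency can look like in pyproject.toml — the package name and archive URL below are placeholders, not taken from the release notes:

```toml
# pyproject.toml — declaring a dependency on a remote archive.
[tool.poetry.dependencies]
python = "^3.7"
# A URL dependency: Poetry fetches and installs the archive
# directly from the given location.
demo-package = { url = "https://example.com/downloads/demo-package-0.1.0.tar.gz" }
```

The same result should be achievable from the command line by passing the archive URL to poetry add.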


PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

Fatema Patrawala
14 Aug 2019
4 min read
The switch from Python 2 to Python 3 has been rocky, and all signs point to Python 3 pulling firmly into the lead. Python 3 is broadly compatible with several libraries, and there's an encouraging rate of adoption by cloud providers for application support too, as Python 2 reaches its EOL in 2020. But there are still plenty of efforts to keep Python 2 alive in one form or another. The default implementation of Python is open source, so it can easily be forked and maintained separately. Currently, all major open source Python packages support Python 3.x and Python 2.7.

Last year, the Python team told users that Python 2.7 maintenance will stop in 2020. Originally there was no official date, but in March 2018 the team announced the date to be January 1, 2020.

https://twitter.com/ThePSF/status/1160839590967685121

This means that the maintainers of Python 2 will stop supporting it, even for security patches. There are many institutions and codebases that have not yet ported their code from Python 2 to Python 3. Python volunteers have created resources to help publicize and educate, but there's still more work to be done. To that end, the Python Software Foundation has contracted with Changeset Consulting to help communicate about the sunsetting of Python 2. The high-level goal of Changeset's involvement is to help users through the end of the transition, to help with communication so volunteers are not overwhelmed, and to help update public-facing assets so core developers are not overwhelmed. This will also require all the major Python projects to migrate to Python 3 and above.

However, PyPy confirmed last week that it does not plan to deprecate Python 2.7 support as long as PyPy exists, according to its official Twitter statement.

https://twitter.com/pypyproject/status/1160209907079176192

Apart from this, the PyPy runtime is popular among developers due to its built-in JIT, which provides major speed boosts to Python code. PyPy has long favored Python 2 over Python 3. This favoritism isn't solely because the first versions of PyPy were Python 2 implementations and Python 3 has only recently entered the picture. It's also due to a key part of PyPy's ecosystem: RPython, a dynamic language implementation framework, has its foundation in Python 2. This is not likely to change, according to PyPy's official FAQ. The page states, "the Python 2 version of PyPy will be around 'forever', i.e. as long as PyPy itself is around." According to PyPy's official announcement, it will support Python 3 while continuing to support Python 2.7.

Last year, when Python rolled out the announcement that Python 2 will officially end in 2020, users on Hacker News discussed the most popular packages being compatible with Python 3 while millions of people in the industry still work with Python 2.7.

One of the user comments reads, "most popular packages are now compatible with Python 3 — I often see this but I think it's a perception from the Internet/web world. I work for CGI, all (I'm not kidding) our software (we have many) are 2.7. You will never see them used 'on the web/Internet/forum/network' place but the day-to-day job of millions of people in the industry is 2.7. And we are a tiny focused industry. So I'm sure there are many other industries like us which are 2.7 that you never heard of. That's why 'most popular' mean nothing once you take how Python is used as a whole. We don't use any of this web/Internet/network 'popular' packages. I'm not saying Python shouldn't move on. I'm just trying to argue against this 'most popular packages' while millions of us, even if you don't know it, use none of those."

GNU Radio 3.8.0.0 releases with new dependencies, Python 2 and 3 compatibility, and much more!
NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
Python 3.8 new features: the walrus operator, positional-only parameters, and much more