
Tech News - Languages

202 Articles

Microsoft releases TypeScript 3.4 with an update for faster subsequent builds, and more

Bhagyashree R
01 Apr 2019
3 min read
Last week, Daniel Rosenwasser, Program Manager for TypeScript, announced the release of TypeScript 3.4. This release comes with faster subsequent builds via the '--incremental' flag, higher-order type inference from generic functions, type-checking for globalThis, and more. Here are some of the updates in TypeScript 3.4:

Faster subsequent builds

TypeScript 3.4 introduces the '--incremental' flag, which saves the project graph from the last compilation. When TypeScript is invoked with '--incremental' enabled, it consults the saved project graph to find the least costly way to type-check and emit changes to a project.

Higher-order type inference from generic functions

This release brings various improvements around inference, chief among them functions inferring types from other generic functions. During type argument inference, TypeScript now propagates type parameters from generic function arguments onto the resulting function type.

Updates to ReadonlyArray and readonly tuples

Using read-only array-like types is now much easier. This release introduces a new syntax for ReadonlyArray that uses a new readonly modifier on array types:

function foo(arr: readonly string[]) {
    arr.slice();        // okay
    arr.push("hello!"); // error!
}

TypeScript 3.4 also adds support for readonly tuples: to make a tuple read-only, you just prefix it with the readonly keyword.

Type-checking for globalThis

This release supports type-checking ECMAScript's new globalThis, a global variable that refers to the global scope. The globalThis variable provides a standard way of accessing the global scope that works across different environments.
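The readonly array and tuple syntax described above can be exercised with a short, self-contained sketch (the function names here are illustrative, not from the release notes):

```typescript
// readonly tuple (TS 3.4+): elements cannot be reassigned at compile time
function distance(point: readonly [number, number]): number {
  // point[0] = 5;  // compile error: cannot assign to a readonly element
  return Math.sqrt(point[0] ** 2 + point[1] ** 2);
}

// readonly array syntax from the same release
function total(values: readonly number[]): number {
  // values.push(1);  // compile error: push does not exist on readonly number[]
  return values.reduce((sum, v) => sum + v, 0);
}

console.log(distance([3, 4])); // 5
console.log(total([1, 2, 3])); // 6
```

Note that the readonly restrictions exist only at compile time; the emitted JavaScript is unchanged.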
Breaking changes

As this release changes inference, it does come with some breaking changes:

TypeScript now uses types that flow into function calls to contextually type function arguments.
The type of top-level 'this' is now 'typeof globalThis' instead of 'any'. As a result, users may see errors for accessing unknown values on 'this' under 'noImplicitAny'.
TypeScript 3.4 now correctly measures the variance of types declared with 'interface' in all cases. This introduces an observable breaking change for interfaces that used a type parameter only in keyof.

To see the full list of updates in TypeScript 3.4, check out the official announcement.

An introduction to TypeScript types for ASP.NET core [Tutorial]
Typescript 3.3 is finally released!
Yarn releases a roadmap for Yarn v2 and beyond; moves from Flow to Typescript


PHP 8 and 7.4 to come with Just-in-time (JIT) to make most CPU-intensive workloads run significantly faster

Bhagyashree R
01 Apr 2019
3 min read
Last week, Joe Watkins, a PHP developer, shared that PHP 8 will support Just-in-Time (JIT) compilation. This decision was the result of a vote among the PHP core developers on supporting JIT in PHP 8 and, as an experimental feature, in PHP 7.4. If you don't know what JIT is, it is a compilation strategy in which a program is compiled on the fly into a form that is usually faster, typically the host CPU's native instruction set. To do this, the JIT compiler has access to dynamic runtime information, whereas a standard compiler doesn't.

How are PHP programs compiled?

PHP comes with a virtual machine named the Zend VM. Human-readable scripts are compiled into instructions, called opcodes, that the virtual machine understands. Opcodes are low-level, and hence faster to translate to machine code than the original PHP code. This stage of execution is called compile time. The opcodes are then executed by the Zend VM in the runtime stage.

JIT is being implemented as an almost independent part of OPcache, an extension that caches opcodes so that compilation happens only when required. In PHP, JIT will treat the instructions generated for the Zend VM as an intermediate representation. It will then generate architecture-dependent machine code, so that the host of your code is no longer the Zend VM, but the CPU directly.

Why is JIT being introduced in PHP?

PHP hits the brick wall

Many improvements have been made to PHP since version 7.0, including optimizations for HashTable, specializations in the Zend VM for certain opcodes, specializations in the compiler for certain sequences, and many more. After so many improvements, PHP has now reached the limit of its ability to be improved much further.

PHP for non-web scenarios

Adding JIT support will allow PHP to be used in scenarios for which it is not even considered today, i.e., in non-web, CPU-intensive scenarios, where the performance benefits will be very substantial.
Faster innovation and more secure implementations

With JIT support, the team will be able to develop built-in functions in PHP instead of C without a huge performance penalty. This will make PHP less susceptible to memory-management issues, overflows, and other problems associated with C-based development.

We can expect the release of PHP 7.4 later this year, which will debut JIT in PHP. Though there is no official announcement about the release schedule of PHP 8, many are speculating a release in late 2021. Read Joe Watkins' announcement on his blog.

PEAR's (PHP Extension and Application Repository) web server disabled due to a security breach
Symfony leaves PHP-FIG, the framework interoperability group
Google App Engine standard environment (beta) now includes PHP 7.2
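The compile-to-opcodes-then-execute pipeline described above is the key to understanding what JIT buys you. As a purely illustrative toy (written in TypeScript, not PHP's actual implementation), the difference between interpreting "opcodes" one at a time and compiling them once into a native function looks like this:

```typescript
// A toy stack-machine opcode set (hypothetical, for illustration only).
type Op = { code: "push"; value: number } | { code: "add" } | { code: "mul" };

// Interpreter: dispatch on every opcode, every time the program runs.
function interpret(ops: Op[]): number {
  const stack: number[] = [];
  for (const op of ops) {
    if (op.code === "push") stack.push(op.value);
    else {
      const b = stack.pop()!;
      const a = stack.pop()!;
      // only commutative ops here, so operand order does not matter
      stack.push(op.code === "add" ? a + b : a * b);
    }
  }
  return stack[0];
}

// "JIT": translate the opcodes once into host-native code (here, a JS
// function) so later runs skip the dispatch loop entirely.
function jitCompile(ops: Op[]): () => number {
  const body = ["const s = [];"];
  for (const op of ops) {
    if (op.code === "push") body.push(`s.push(${op.value});`);
    else if (op.code === "add") body.push("s.push(s.pop() + s.pop());");
    else body.push("s.push(s.pop() * s.pop());");
  }
  body.push("return s[0];");
  return new Function(body.join("\n")) as () => number;
}

const program: Op[] = [
  { code: "push", value: 2 },
  { code: "push", value: 3 },
  { code: "add" },              // 2 + 3 = 5
  { code: "push", value: 4 },
  { code: "mul" },              // 5 * 4 = 20
];
console.log(interpret(program));    // 20
console.log(jitCompile(program)()); // 20
```

PHP's JIT does the same thing at a much lower level: instead of emitting JavaScript, it emits the host CPU's machine code from the Zend VM's opcodes.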


Winners for the 2019 .NET Foundation Board of Directors elections are finally declared

Amrata Joshi
29 Mar 2019
4 min read
The results of the .NET Foundation Board of Directors 2019 election are finally out. Of the 476 eligible voters, 329 cast ballots in this election. After counting the ballots using Scottish STV (Single Transferable Vote), Jon Skeet, Sara Chipps, Phil Haack, Iris Classon, Ben Adams, Oren Novotny, and Beth Massi were declared the winners. In total, 45 candidates competed for 6 seats. Beth Massi was appointed by Microsoft, while the rest were elected by .NET Foundation members.

Following are the winner profiles:

Jon Skeet: A Java developer at Google in London, and also a C# author and community leader. https://twitter.com/jonskeet/status/1111540160305475584
Sara Chipps: Engineering Manager at Stack Overflow. https://twitter.com/SaraJChipps/status/1111458522418552835
Phil Haack: A developer and author, best known for his blog, Haacked. https://twitter.com/haacked/status/1111493618441703427
Iris Classon: Software developer and cloud architect at Konstrukt. She is also a member of MEET (Microsoft Extended Experts Team).
Ben Adams: Co-founder and CTO of Illyriad Games. https://twitter.com/jongalloway/status/1111324076981682176
Oren Novotny: Microsoft Regional Director, MVP, and chief architect of DevOps & modern software at Insight. https://twitter.com/onovotny/status/1111410983749115905
Beth Massi: Product Marketing Manager for the .NET Platform at Microsoft, who previously worked for the .NET Foundation in 2014. https://twitter.com/BethMassi/status/1108838511069716480

How did the election process go?

A candidate's votes for a round are calculated by taking the sum of the votes from the previous round and the votes received in the current round. The votes received in the current round and the votes transferred away in the current round represent "votes being transferred". The single transferable vote system was chosen because it is a type of ranked-choice voting used for electing a group of candidates, for instance, a committee or a council.
In this type of voting, votes are transferred from losing candidates to the other choices on a ballot.

Round 1: The first round considered the count of first choices. Since none of the candidates had surplus votes, the candidates who received the fewest votes, or no votes at all, were eliminated, and the votes for the other candidates were carried over to the next round.

Round 2: Round 2 calculated the count after eliminating Lea Wegner, who received 0 votes. There was a tie between Lea Wegner and Robin Krom when choosing a candidate to eliminate; Lea Wegner was chosen by breaking the tie randomly. Since none of the candidates had surplus votes, the votes were carried over to the next round.

Round 3: Round 3 calculated the count after eliminating Robin Krom and transferring votes. Since none of the candidates had surplus votes, the votes were carried over to the next round.

Round 4: Round 4 calculated the count after eliminating Nate Barbettini and transferring votes. There was a tie between the candidates Peter Mbanugo, Robert McLaws, Virgile Bello, Nate Barbettini, and Marc Bruins when choosing a candidate to eliminate; Nate Barbettini was chosen by breaking the tie randomly. Since none of the candidates had surplus votes, the votes were carried over to the next round.

Round 5: The fifth round considered the count after eliminating Marc Bruins and transferring votes. There was a tie between the candidates Peter Mbanugo, Robert McLaws, Virgile Bello, and Marc Bruins when choosing a candidate to eliminate; Marc Bruins was chosen by breaking the tie randomly. Since none of the candidates had surplus votes, the votes were carried over to the next round.

Collectively there were 41 such elimination rounds before the winners were finally declared. To know more about this news, check out Opavote's blog post.
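The round-by-round counting described above can be sketched in miniature (a simplified TypeScript model; real Scottish STV also computes quotas and transfers surpluses, which this toy omits):

```typescript
type Ballot = string[]; // ranked candidate names, most preferred first

// Tally each ballot's highest-ranked candidate that is still in the race.
function countFirstChoices(ballots: Ballot[], remaining: string[]): Record<string, number> {
  const tally: Record<string, number> = {};
  remaining.forEach((c) => (tally[c] = 0));
  ballots.forEach((b) => {
    const top = b.find((c) => remaining.includes(c));
    if (top !== undefined) tally[top] += 1;
  });
  return tally;
}

// Pick the lowest-tallied candidate to eliminate; their ballots transfer
// automatically on the next count because eliminated names are skipped.
function eliminateLowest(tally: Record<string, number>, remaining: string[]): string {
  return remaining.reduce((min, c) => (tally[c] < tally[min] ? c : min));
}

let remaining = ["A", "B", "C"];
const ballots: Ballot[] = [["A", "B"], ["A", "C"], ["B", "A"], ["C", "B"], ["C", "A"]];

const round1 = countFirstChoices(ballots, remaining); // { A: 2, B: 1, C: 2 }
const eliminated = eliminateLowest(round1, remaining); // "B"
remaining = remaining.filter((c) => c !== eliminated);
const round2 = countFirstChoices(ballots, remaining); // B's ballot transfers: { A: 3, C: 2 }
console.log(eliminated, round2);
```

In the real election, this eliminate-and-transfer step repeated for 41 rounds until the seats were filled.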
Fedora 31 will now come with Mono 5 to offer open-source .NET support Inspecting APIs in ASP.NET Core [Tutorial] .NET Core 3 Preview 2 is here!


Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work

Amrata Joshi
29 Mar 2019
5 min read
Yesterday, the team at Go announced the results of their user survey for 2018. 5,883 users from 103 different countries participated in the survey.

Key highlights from the Go User Survey 2018

According to the report, for the first time, half of the survey respondents said they currently use Go as part of their daily routine. This year proved even better for Go, with a significant increase in the number of respondents who develop their projects in Go as part of their jobs and who also use Go outside of work. A majority of survey respondents also said that Go is their most-preferred programming language.

Here are some other findings:

API/RPC services and CLI tools are the most common uses of Go.
VS Code and GoLand have become the most popular code editors among survey respondents.
Most Go developers use more than one primary OS for development, with Linux and macOS being popular.
Automation tasks were identified as the fastest-growing area for Go.
Web development remains the most common domain, but DevOps has shown the highest year-over-year growth and is now the second most common domain.
Survey respondents have been shifting from on-premise Go deployments to containers and serverless cloud deployments.

To simplify the report, the Go team broke the responses down into three groups:

Those who use Go both in and outside of work
Those who use Go professionally but not outside of work
Those who only use Go outside of their job responsibilities

According to the survey, nearly half (46%) of respondents write Go code professionally as well as in their free time, because the language appeals to developers who do not view software engineering only as a day job. 85% of respondents said they would prefer to use Go for their next project.

Would you recommend Go to a friend?
This year, the team added a question, "How likely are you to recommend Go to a friend or colleague?", to calculate a Net Promoter Score. This score measures how many "promoters" a product has relative to "detractors", and ranges from -100 to 100. A positive value suggests most people are likely to recommend a product, while a negative value suggests most people wouldn't. The 2018 score is 61: 68% promoters minus 7% detractors.

How satisfied are developers with Go?

The team also asked many questions in the survey about developer satisfaction with Go. A majority of survey respondents indicated a high level of satisfaction, which is consistent with prior years' results. Around 89% of respondents said that they are happy with Go, and 66% felt that it is working well for their team. These metrics rose in 2017 and have mostly remained stable this year.

The downside

About half of the survey respondents work on existing projects written in other languages, and a third work on a team or project that prefers a language other than Go. The reasons respondents highlighted for this are missing language features and libraries. The team identified the biggest challenges developers face while using Go with the help of their machine-learning tools. The top three challenges highlighted in the survey are:

Package management is one of the major challenges. A response from the survey reads, "keeping up with vendoring, dependency / packet [sic] management / vendoring is not unified."
There are major differences from more familiar programming languages. One response from the survey reads, "Syntax close to C-languages with slightly different semantics makes me look up references somewhat more than I'd like", while another respondent says, "My coworkers who come from non-Go backgrounds are trying to use Go as a version of their previous language but with channels and Goroutines."
Lack of generics is another problem. Another response from the survey reads, "Lack of generics makes it difficult to persuade people who have not tried Go that they would find it efficient. Hard to build richer abstractions (want generics)"

Go community

The Go blog, Reddit's r/golang, Twitter, and Hacker News remain the primary sources of Go news. This year, 55% of survey respondents said they are interested in contributing to the Go community, slightly less than last year (59%). One possible reason for the dip is that contributing to the standard library and official Go tools requires interacting with the core Go team. Another is the dip in the percentage of participants willing to take up Go project leadership: it was 30% last year and is 25% this year. This year only 46% of respondents are confident about taking on Go leadership, down from 54% last year.

You can read the complete results of the survey on Golang's blog post. Update: The title of this article was amended on 4.1.2019.

GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
Google Podcasts is transcribing full podcast episodes for improving search results
State of Go February 2019 – Golang developments report for this month released
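The Net Promoter Score arithmetic reported above is simple: the percentage of promoters minus the percentage of detractors. A quick check of the survey's reported figures (a TypeScript one-liner, purely illustrative):

```typescript
// NPS = % promoters - % detractors, on a -100..100 scale.
function netPromoterScore(promoters: number, detractors: number, total: number): number {
  return Math.round((promoters / total) * 100 - (detractors / total) * 100);
}

// The Go survey's reported figures: 68% promoters, 7% detractors.
console.log(netPromoterScore(68, 7, 100)); // 61
```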


OpenJDK team proposes detailed NullPointerException messages in a JEP draft

Amrata Joshi
27 Mar 2019
4 min read
Developers frequently encounter NullPointerExceptions while developing or maintaining a Java application, and these exceptions often don't carry a message, which makes it difficult to find the cause. A Java Enhancement Proposal (JEP) draft proposes to enhance the exception text to indicate what was null and which action failed. For instance:

a.to_b.to_c = null;
a.to_b.to_c.to_d.num = 99;

The above code prints java.lang.NullPointerException, which doesn't highlight which value is null. A message like 'a.to_b.to_c' is null and cannot read field 'to_d' would show where the exception was thrown.

Basic algorithm to compute the message

When an exception occurs, the virtual machine knows which instruction caused it. The instruction is stored in the 'backtrace' data structure of the throwable object, which is held in a field private to the JVM implementation. To assemble a string such as a.to_b.to_c, the bytecodes need to be visited in reverse execution order, starting at the bytecode that raised the exception. If you know which bytecode pushed the null value, it is easy to print the message. A simple data-flow analysis is run on the bytecodes to determine which previous instruction pushed the null value. This analysis simulates the execution stack, but instead of the values computed by the bytecodes, the simulated stack records which bytecode pushed each value. The analysis runs until the information for the bytecode that raised the exception is available. With this information, it is easy to assemble the message.

An exception message is usually passed to the constructor of Throwable, which writes it to its private field 'detailMessage'. If the message is computed only on access, it can't be passed to the Throwable constructor, and since the field is private, there is no natural way to store the message in it.
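The instruction-tracking idea described above can be modeled in miniature. This TypeScript toy is purely illustrative (the real analysis walks actual JVM bytecode in reverse inside the VM; the instruction names below are hypothetical): it records which "instruction" contributed each part of the expression and assembles a message like the one the JEP proposes.

```typescript
// Toy instruction forms: load a local variable, or read a field off the
// value on top of the stack (hypothetical names, not real JVM bytecodes).
type Instr =
  | { op: "aload"; variable: string }
  | { op: "getfield"; field: string };

// Build an expression string like "a.to_b.to_c" describing which chain of
// instructions produced the null value, as the proposed message would.
function describeNullSource(instrs: Instr[]): string {
  let expr = "";
  instrs.forEach((ins) => {
    expr = ins.op === "aload" ? ins.variable : `${expr}.${ins.field}`;
  });
  return `'${expr}' is null`;
}

const msg = describeNullSource([
  { op: "aload", variable: "a" },
  { op: "getfield", field: "to_b" },
  { op: "getfield", field: "to_c" },
]);
console.log(msg); // 'a.to_b.to_c' is null
```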
To overcome the private-field problem, developers can make detailMessage package-private, use a shared secret for writing it, or write to the detailMessage field via JNI.

How should the message content be displayed?

The message should only be printed if the NullPointerException is raised by the runtime. If the exception is explicitly constructed, adding the message wouldn't make sense and could be misleading, as no null dereference was actually encountered. Since the original source code cannot be recovered, the message should resemble code as closely as possible; this keeps it easy to understand and compact. The message should contain information from the code such as class names, method names, field names, and variable names.

Testing done by the OpenJDK team

A basic implementation with regression tests for the messages has been in use in SAP's internal Java virtual machine since 2006. The OpenJDK team has run all jtreg tests, many jck tests, and many other tests on the current implementation, and has found no issues so far.

Proposed risks

This proposal has certain risks, including the overhead imposed on retrieving the message of a NullPointerException, though the risk of breaking something in the virtual machine is very low. The implementation will need to be extended if more bytecodes are added to the bytecode specification. Another issue that was raised is that printing field names or local variable names might pose a security problem.

To know more about this news, check out OpenJDK's blog post.

Introducing 'Quarkus', a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot
The OpenJDK Transition: Things to know and do
Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!


Clojurists Together fund a sum of $9,000 each for the open source projects, Neanderthal and Aleph

Bhagyashree R
26 Mar 2019
3 min read
Clojurists Together shortlisted two projects, Neanderthal and Aleph, for Q1 of 2019 (February-April) to fund their further development, the details of which they shared yesterday. Each project will get total funding of $9,000, i.e., $3,000 per month. https://twitter.com/cljtogether/status/1109925960155983872

What is Clojurists Together?

Clojurists Together was formed back in 2017 and is run by a board of developers in the Clojure community. It focuses on keeping the development of open source Clojure software sustainable by raising funds and providing support for infrastructure and documentation. It also supports other community initiatives like Google Summer of Code. The way it works is that open source developers apply for funding, and if the board thinks a project meets the requirements of the Clojurists Together members, it is selected for funding. The developers then get paid to work on their project for three months. Funds are raised with the help of Clojure companies and individual developers, who can sign up for a monthly contribution or make a one-time donation. This is the fifth funding cycle; previous cycles have supported datascript, kaocha, cljdoc, Shadow CLJS, clj-http, Figwheel, ClojureScript, and CIDER.

Details of the projects funded

Neanderthal

Neanderthal is a Clojure library for fast matrix and linear algebra computations. It is based on the native BLAS and LAPACK computation routines for both CPU and GPU. On GPU, this library is almost 3000x faster than optimized Clojure/Java libraries, and on CPU it is 100x faster than optimized pure Java. The project is being developed by Dragan Djuric, who works on the Uncomplicate suite of libraries and is also a professor of Software Engineering at the University of Belgrade.
Within these three months, Djuric plans to work on some of the following updates:

Writing an introductory series named Deep Learning from the ground up with Clojure.
Integrating Nvidia's cuSolver into Neanderthal's CUDA GPU engine to provide some key LAPACK functions that are only available on the CPU.

Along with these developments, he will also be improving the documentation and tutorials for Neanderthal.

Aleph

Aleph is a Clojure library for client and server network programming. Based on Netty, it is said to be one of the best options for building high-performance communication systems in Clojure. Oleksii Kachaiev, who works on Aleph, has planned the following additions in the allocated three months:

Releasing a new version of Aleph with the latest developments
Updating the internals of the library and its interactions with Netty to ease the operational burden and improve performance
Implementing missing parts of the WebSocket protocol

To see how far these projects have come, check out the monthly update for February shared by Clojurists Together yesterday. To read the official announcement, visit the Clojurists Together site.

Clojure 1.10 released with Prepl, improved error reporting and Java compatibility
ClojureCUDA 0.6.0 now supports CUDA 10
Clojure 1.10.0-beta1 is out!

Swift 5 for Xcode 10.2 is here!

Natasha Mathur
26 Mar 2019
3 min read
Apple announced Swift 5 for Xcode 10.2 yesterday. The latest Swift 5 update in Xcode 10.2 brings new features and changes to app thinning, the Swift language, the Swift standard library, Swift Package Manager, and the Swift compiler. Swift is a popular general-purpose, compiled programming language developed by Apple for iOS, macOS, watchOS, tvOS, and beyond. Writing Swift code is interactive, its syntax is concise yet expressive, and the language is safe by design with modern features.

What's new in Swift 5 for Xcode 10.2?

In Xcode 10.2, the Swift command-line tools now require the Swift libraries in macOS. These libraries are included by default starting with macOS Mojave 10.14.4.

App thinning

Swift apps no longer include dynamically linked libraries for the Swift standard library and Swift SDK overlays in build variants for devices running iOS 12.2, watchOS 5.2, and tvOS 12.2. As a result, Swift apps become smaller once they're shipped in the App Store or thinned in an app archive for development distribution.

Swift language

String literals in Swift 5 can be expressed with the help of enhanced delimiters. A string literal with one or more number signs (#) before the opening quote treats backslashes and double-quote characters as literal text. Also, key paths now support the identity keypath (\.self), a WritableKeyPath that refers to its entire input value.

Swift standard library

The standard library now includes the Result enumeration with Result.success(_:) and Result.failure(_:) cases, and the Error protocol now conforms to itself, making working with errors easier. SIMD types and basic operators are now defined in the standard library. Set and Dictionary now use a different hash seed for each newly created instance. The DictionaryLiteral type has been renamed KeyValuePairs in Swift 5.
The String structure's native encoding has been switched from UTF-16 to UTF-8, which improves the relative performance of String.UTF8View compared to String.UTF16View.

Swift Package Manager

Targets can now declare commonly used, target-specific build settings when using the Swift 5 Package.swift tools-version. A new dependency-mirroring feature in Swift 5 enables top-level packages to override dependency URLs. Package manager operations have also become significantly faster for larger packages.

Swift compiler

The size taken up by Swift metadata can now be reduced, as convenience initializers defined in Swift only allocate an object ahead of time when calling a designated initializer defined in Objective-C. C types with alignment greater than 16 bytes are no longer available in Swift. Swift 3 mode has been deprecated; the supported values for the -swift-version flag are now 4, 4.2, and 5. Default arguments can now be printed in SourceKit-generated interfaces for Swift modules.

For more information, check out the official Swift 5 for Xcode 10.2 release notes.

Swift 5 for Xcode 10.2 beta is here with stable ABI
Exclusivity enforcement is now complete in Swift 5
ABI stability may finally come in Swift 5.0


Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript

Bhagyashree R
25 Mar 2019
2 min read
Yesterday, Microsoft released a new static type checker for Python called Pyright to fill gaps in existing Python type checkers like mypy. Currently, this type checker supports Python 3.0 and newer versions.

What type-checking features does Pyright bring?

It supports PEP 484 (type hints, including generics), PEP 526 (syntax for variable annotations), and PEP 544 (structural subtyping). It supports type inference for function return values, instance variables, class variables, and globals. It provides smart type constraints that can understand conditional code-flow constructs like if/else statements.

Increased speed

Pyright is reported to be 5x faster than mypy and other existing type checkers written in Python. It was built with large source bases in mind and can perform incremental updates when files are modified.

No need to set up a Python environment

Since Pyright is written in TypeScript and runs within Node, you do not need to set up a Python environment or import third-party packages for installation. This proves really helpful when using the VS Code editor, which has Node as its extension runtime.

Flexible configurability

Pyright gives users granular control over settings. You can specify different execution environments for different subsets of a source base; for each environment, you can specify different PYTHONPATH settings, Python version, and platform target.

To know more about Pyright, check out its GitHub repository.

Debugging and Profiling Python Scripts [Tutorial]
Python 3.8 alpha 2 is now available for testing
Core CPython developer unveils a new project that can analyze his phone's 'silent connections'
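The "smart type constraints" for if/else constructs mentioned above are the same idea as TypeScript's control-flow narrowing, which is a natural comparison given that Pyright is itself written in TypeScript. A TypeScript illustration of the concept (Pyright applies the analogous analysis to Python code):

```typescript
// The checker narrows the union type in each branch based on the typeof guard.
function describe(value: string | number): string {
  if (typeof value === "string") {
    return value.toUpperCase(); // value narrowed to string here
  }
  return value.toFixed(1);      // and to number here
}

console.log(describe("pyright")); // PYRIGHT
console.log(describe(2));         // 2.0
```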


R core team releases R 3.5.3

Natasha Mathur
13 Mar 2019
2 min read
The R Core Team released R 3.5.3 last week. R 3.5.3 brings bug fixes to the functions writeLines, setClassUnion, and stopifnot. It is a minor release and does not include many new changes or improvements.

What's new in R 3.5.3?

Detection of flags has been improved for C++ 98/11/14/17.
There's a new macro 'F_VISIBILITY' as an alternative to 'F77_VISIBILITY'; it will become the preferred form in R 3.6.0.
The issue in writeLines(readLines(fnam), fnam) has been fixed; it now works as expected.
setClassUnion() no longer emits warnings; it instead uses message() on encountering "non-local" subclasses in class members.
The failure in stopifnot(exprs = T) has been fixed.

The R team usually picks release names that reference Peanuts strips/films. The code name chosen for R 3.5.3 is "Great Truth", which left users with a bit of a mystery. The R core team gave a hint, saying the clue is in the release date, 11th March 2019. The code name has since been debunked, with one user tweeting out the reference in one of the Peanuts strips: https://twitter.com/AdelmoFilho42/status/1105079537749184512

For more information, check out the official R 3.5.3 release notes.

Android Studio 3.5 Canary 7 releases!
LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more
GNU Octave 5.1.0 releases with new changes and improvements


GCC 9.1 releases with improved diagnostics, simpler C++ errors and much more

Amrata Joshi
11 Mar 2019
2 min read
Just two months ago, the team behind GCC (GNU Compiler Collection) made certain changes to GCC 9.1, and last week the team released GCC 9.1 with improved diagnostics, location tracking, and simpler C++ errors.

What's new in GCC 9.1?

Changes to diagnostics

The team added a left-hand margin that shows line numbers, giving GCC 9.1's diagnostics a new look. Diagnostics can now label regions of the source code to show relevant information; for example, they label the left-hand and right-hand sides of a "+" operator, and GCC highlights them inline. The team has also added a JSON output format, so GCC 9.1 now has a machine-readable output format for diagnostics.

C++ errors

The compiler usually has to consider several functions at a given C++ call site and reject all of them for different reasons. g++'s error messages have to handle this by giving a specific reason for rejecting each function, which makes even simple cases difficult to read. This release adds special-casing to simplify g++ errors for common cases.

Improved C++ syntax in GCC 9.1

A major limitation of GCC's internal representation has been that not every node in the syntax tree has a source location. For GCC 9.1, the team worked to solve this problem, so most places in the C++ syntax tree now retain location information for longer.

Users can now emit optimization information

GCC 9.1 can automatically vectorize loops, reorganizing them to work on multiple iterations at once. Users now have an option, -fopt-info, that emits optimization information.

Improved runtime library in GCC 9.1

This release comes with improved experimental support for C++17, including <memory_resource>. There is also support for opening file streams with wide-character paths on Windows.

Arm-specific changes

Support for the deprecated Armv2 and Armv3 architectures and their variants has been removed.
Support for the Armv5 and Armv5E architectures has also been removed. To know more about this news, check out RedHat’s blog post. DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more The D language front-end support finally merged into GCC 9 GCC 8.1 Standards released!
GNU Octave 5.1.0 releases with new changes and improvements

Natasha Mathur
04 Mar 2019
3 min read
The GNU Octave team released version 5.1.0 of the popular high-level programming language last week. GNU Octave 5.1.0 comes with general improvements, dependency changes, and other updates.

What's new in GNU Octave 5.1.0?

General improvements

The Octave plotting system in GNU Octave 5.1.0 supports high-resolution screens (those with greater than 96 DPI, such as HiDPI/Retina monitors). There is newly added Unicode character support for files and folders on Windows.

The fsolve function has been modified to use larger step sizes when calculating the Jacobian of a function with finite differences, leading to faster convergence. The ranks function has been recoded for performance and is now 25X faster; it also supports a third argument that specifies how to resolve the ranking of tie values. The randi function has been recoded to produce an unbiased (all results equally likely) sample of integers.

The isdefinite function now returns true or false instead of -1, 0, or 1. The intmax, intmin, and flintmax functions can now accept a variable as input. Path handling functions no longer perform variable or brace expansion on path elements, and Octave's load-path is no longer subject to these expansions.

A new printing device, "-ddumb", can produce ASCII art for plots. This device is available only with the gnuplot toolkit.

Other changes

Dependencies: The GUI now requires Qt libraries; the minimum supported Qt4 version is Qt4.8. The OSMesa library is no longer used; to print invisible figures while using OpenGL graphics, the Qt QOffscreenSurface feature must be available. The FFTW library is now required to perform FFT calculations, as the FFTPACK sources have been removed from Octave.

Matlab compatibility: Functions such as issymmetric and ishermitian now accept an option, "nonskew" or "skew", for calculating the symmetric or skew-symmetric property of a matrix. The issorted function can now use a direction option of "ascend" or "descend". Calling clear with no arguments now removes only local variables from the current workspace; global variables will no longer be visible, but will still exist in the global workspace.

Graphics objects: Figure graphics objects in GNU Octave 5.1.0 now have a new read-only property, "Number", which returns the handle (number) of the figure; if "IntegerHandle" is set to "off", the property returns an empty matrix []. Patch and surface graphics objects can now use the "FaceNormals" property for flat lighting. "FaceNormals" and "VertexNormals" are now calculated only when necessary, to improve graphics performance. The "Margin" property of text objects has a new default of 3 rather than 2.

For the complete list of changes, check out the official GNU Octave 5.1.0 release notes.

GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL
Bash 5.0 is here with new features and improvements
GNU ed 1.15 released!

The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks

Bhagyashree R
04 Mar 2019
3 min read
Last month, the npm engineering team shared in a white paper why they chose Rust to rewrite their authorization service. If you are not already aware, npm is the largest package manager and offers both an open source and an enterprise registry. The npm registry serves about 1.3 billion package downloads per day. Given this huge user base, it is no surprise that the npm engineering team regularly keeps a check on any area that causes performance problems. Though most network-bound operations were pretty efficient, while looking at the authorization service the team found a CPU-bound task that was causing a performance bottleneck. They decided to rewrite its "legacy JavaScript implementation" in Rust to make it modern and performant.

Why the npm team chose Rust

C and C++ were rejected by the team because they require expertise in memory management, and Java was rejected because it requires the deployment of the JVM and associated libraries. That left two candidate languages: Go and Rust.

To narrow down on the language best suited for the authorization service, the team rewrote the service in Node.js, Go, and Rust. The Node.js rewrite acted as a baseline for evaluating Go and Rust. The rewrite in Node.js took just an hour, given the team's expertise in JavaScript, and its performance was very similar to the legacy implementation. The team finished the Go rewrite in two days but ruled Go out because it did not provide a good dependency management solution. "The prospect of installing dependencies globally and sharing versions across any Go project (the standard in Go at the time they performed this evaluation) was unappealing," says the white paper.

Though the Rust rewrite took the team about a week, they were very impressed by the dependency management Rust offers. The team noted that Rust's strategy is very much inspired by npm's: for instance, its Cargo command-line tool is similar to the npm command-line tool. All in all, the team chose Rust because it not only matched their JavaScript-inspired expectations but also gave a better developer experience. The deployment process for the new service was straightforward, and even after deployment the team rarely encountered operational issues.

The team also states that one of the main reasons for choosing Rust was its helpful community. "When the engineers encountered problems, the Rust community was helpful and friendly in answering questions. This enabled the team to reimplement the service and deploy the Rust version to production."

What were the downsides of choosing Rust?

The team did find the language a bit difficult to grasp at first. As the white paper puts it, "The design of the language front-loads decisions about memory usage to ensure memory safety in a different way than other common programming languages."

Rewriting the service in Rust also came with the extra burden of maintaining two separate solutions for monitoring, logging, and alerting: one for the existing JavaScript stack and one for the new Rust stack. Being quite a new language, Rust currently also lacks industry-standard libraries and established best practices for these solutions.

Read the white paper shared by npm for more details.

Mozilla engineer shares the implications of rewriting browser internals in Rust
Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web

The Erlang Ecosystem Foundation launched at the Code BEAM SF conference

Bhagyashree R
01 Mar 2019
2 min read
Yesterday, at the ongoing Code BEAM SF event, the formation of the Erlang Ecosystem Foundation (EEF) was announced. Its founding members, Jose Valim, Peer Stritzinger, Fred Hebert, Miriam Pena, and Francesco Cesarini, spoke about its journey, importance, and goals. The proposal for creating the EEF was submitted last December to foster the Erlang and Elixir ecosystem.

https://twitter.com/CodeBEAMio/status/1101310225804476416

Code BEAM SF, formerly known as Erlang & Elixir Factory, is a two-day event that commenced on Feb 28. The conference brings together the best minds in the Erlang and Elixir communities to discuss the future of these technologies.

The purpose of the Erlang Ecosystem Foundation

The EEF is a non-profit organization for driving the further development and adoption of Erlang, Elixir, LFE, and other technologies based on BEAM, the Erlang virtual machine. Backed by companies like Cisco, Erlang Solutions, and Ericsson, the foundation aims to grow and support a diverse community around the Erlang and Elixir ecosystem, and will encourage the development of technologies and open source projects based on BEAM languages.

"Our goal is to increase the adoption of this sophisticated platform among forward-thinking organizations. With member-supported Working Groups actively contributing to libraries, tools, and documentation used regularly by individuals and companies relying on the stability and versatility of the ecosystem, we actively invest in critical pieces of technical infrastructure to support our users in their efforts to build the next generation of advanced, reliable, real-time applications," says the official EEF website.

The EEF will also sponsor working groups to help solve the challenges that users of BEAM technology face, particularly in areas such as documentation, interoperability, and performance.

To know more about the Erlang Ecosystem Foundation, visit its official website.

Erlang turns 20: Tracing the journey from Ericsson to Whatsapp
Elixir 1.7, the programming language for Erlang virtual machine, releases
Introducing Mint, a new HTTP client for Elixir
Mozilla engineer shares the implications of rewriting browser internals in Rust

Bhagyashree R
01 Mar 2019
2 min read
Yesterday, Diane Hosfelt, a Research Engineer at Mozilla, shared what she and her team experienced when rewriting Firefox internals in Rust. Taking Quantum CSS as a case study, she touched upon the potential security vulnerabilities that could have been prevented if it had been written in Rust from the very beginning.

Why Mozilla decided to rewrite Firefox internals in Rust

Quantum CSS is a part of Mozilla's Project Quantum, under which it is rewriting Firefox internals to make the browser faster. One of the major parts of this project is Servo, an engine designed to provide better concurrency and parallelism. To achieve these goals, Mozilla decided to write Servo in Rust rather than C++. Rust is similar to C++ in some ways while differing in the abstractions and data structures it uses. It was created by Mozilla with concurrency safety in mind: its type and memory safety guarantees make programs written in Rust thread-safe.

What types of bugs does Rust prevent?

Overall, Rust prevents bugs related to memory, bounds, null or uninitialized variables, and integer overflow by default. Hosfelt mentioned in her blog post, "Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures)." However, there are some types of bugs that Rust does not address, such as correctness bugs.

According to Hosfelt, Rust is a good option in the following cases:

When your program involves processing untrusted input safely
When you want to use parallelism for better performance
When you are integrating isolated components into an existing codebase

You can go through the blog post by Diane Hosfelt on Mozilla's website.

Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web
Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant

Rust 1.33.0 released with improvements to Const fn, pinning, and more!

Amrata Joshi
01 Mar 2019
2 min read
Yesterday, the team at Rust announced the stable release of Rust 1.33.0, a programming language that helps in building reliable and efficient software. This release comes with significant improvements to const fns and the stabilization of a new concept: "pinning."

What's new in Rust 1.33.0?

https://twitter.com/rustlang/status/1101200862679056385

Const fn

It is now possible to work with irrefutable destructuring patterns (e.g. const fn foo((x, y): (u8, u8)) { ... }). This release also allows let bindings (e.g. let x = 1;) and mutable let bindings (e.g. let mut x = 1;) inside const functions.

Pinning

This release comes with a new concept for Rust programs called pinning. Pinning ensures that the pointee of a pointer type P has a stable location in memory. This means it cannot be moved elsewhere and its memory cannot be deallocated until it gets dropped; the pointee is said to be "pinned."

Compiler

It is now possible to set a linker flavor for rustc with the -Clinker-flavor command line argument. The minimum required LLVM version is now 6.0. This release also adds support for the PowerPC64 architecture on FreeBSD and the x86_64-unknown-uefi target.

Libraries

In this release, the overflowing_{add, sub, mul, shl, shr} methods are const functions for all numeric types. The is_positive and is_negative methods are now const functions for all signed numeric types, and the get method for all NonZero types is now const.

Language

It is now possible to use the cfg(target_vendor) attribute, e.g. #[cfg(target_vendor="apple")] fn main() { println!("Hello Apple!"); }. It is now possible to have irrefutable if let and while let patterns, and to specify multiple attributes in a cfg_attr attribute.
One user commented on Hacker News, "This release also enables Windows binaries to run in Windows nanoserver containers." Another comment reads, "It is nice to see the const fn improvements!"

https://twitter.com/AndreaPessino/status/1101217753682206720

To know more about this news, check out Rust's official post.

Introducing RustPython, a Python 3 interpreter written in Rust
How Deliveroo migrated from Ruby to Rust without breaking production
Rust 1.32 released with a print debugger and other changes