
Tech News - Languages

202 Articles

Go introduces generic code and a new contract draft design at GopherCon 2019

Vincy Davis
29 Jul 2019
3 min read
Update: On 31st July, Ian Lance Taylor posted a detailed explanation of the benefits and costs of including generics in Go. He also briefly discussed the draft design to convey how generics will be added to the Go language. Taylor says, “Our goal is to arrive at a design that makes it possible to write the kinds of generic code, without making the language too complex to use or making it not feel like Go anymore.” Check out the Golang blog for more details.

On 26th July at GopherCon 2019, Ian Lance Taylor introduced generic code in Go. He briefly explained the need for, implementation of, and benefits from generics for the Go language. Next, Taylor reviewed the Go contract design draft, which includes the addition of optional type parameters to types and functions.

https://twitter.com/ymotongpoo/status/1154957680651276288
https://twitter.com/lelenanam/status/1154819005925867520

Taylor also proposed guidelines for implementing the generic design in Go.

https://twitter.com/chimeracoder/status/1154794627548897280

In all three years of Go surveys, the lack of generics has been listed as one of the three highest priorities for fixing the Go language. Taylor defines generics as “Generic programming which enables the representation of functions and data structures in a generic form, with types factored out.” Generic code is written using types that are specified later; an unspecified type is called a type parameter, and a type parameter supports only the operations permitted by its contract (a short sketch in the draft's syntax appears at the end of this piece). Generic code provides a strong basis for sharing code and building programs. It can be compiled using an interface-based approach, which saves time because the package is compiled only once; if generic code is instead compiled multiple times, it can carry a compile-time cost.

Taylor showed several functions that can be written generically in Go (illustrated with a slide; image source: Sourcegraph). Go already supports two generic data structures built into the language itself: slices and maps. Go requires such data structures to be written only once and then reused by putting them in a package.

The contract draft design states that a clear contract should be maintained between generic code and the code that calls it. With the new changes, users may find the language more complex; however, the Go team expects most users not to write generic code themselves, but instead to use packages written by others using generics.

Developers are happy that the Go generics proposal is simple to understand and lets them depend on already-written generic packages, which saves time because they no longer need to rewrite type-specific functions in Go.

https://twitter.com/lizrice/status/1154802013982449666
https://twitter.com/protolambda/status/1155286562659282952
https://twitter.com/arschles/status/1154793543149375488
https://twitter.com/YvanDaSilva/status/1155432594818969600
https://twitter.com/mickael/status/1154799370610466816

Users have also admired the new contract design draft by the Go team.

https://twitter.com/miyagawa/status/1154810546002153473
https://twitter.com/t_colgate/status/1155380984671551488
https://twitter.com/francesc/status/1154796941227646976

Head over to the Google proposal page for more details on the new contract draft design.

Read More
- Ian Lance Taylor, Golang team member, adds another perspective to Go being Google’s language
- Is Golang truly community driven and does it really matter?
- Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work
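For readers wondering what generic code looks like on the page, below is a minimal sketch written in the style of the 2019 contracts draft design, adapted from the kind of example the draft itself uses. This is proposal syntax only: it does not compile with any released Go toolchain, and the final design may differ.

```go
// Sketch of the 2019 contracts draft syntax; proposal only, not valid Go 1.x code.

// A contract constrains a type parameter: here T must have a String() string method.
contract stringer(T) {
	T String() string
}

// Stringify works for a slice of any element type T that satisfies stringer.
// The function is written once and reused with different concrete element types.
func Stringify(type T stringer)(s []T) (ret []string) {
	for _, v := range s {
		ret = append(ret, v.String())
	}
	return ret
}
```

Written once, such a function could be reused for any element type permitted by the contract, which is the code-sharing benefit Taylor described.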


Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0

Vincy Davis
24 Jul 2019
5 min read
Yesterday, the Julia team announced the alpha release of v1.3.0, an early preview of Julia version 1.3.0, which is expected to be out in a couple of months. The alpha release includes a preview of a new threading interface for Julia programs: multi-threaded task parallelism. The task parallelism model allows many parts of a program to be marked for parallel execution, where each ‘task’ runs on an available thread. This works much like a garbage collection (GC) model in that users can freely spawn millions of tasks without worrying about how the libraries they call are implemented. This portable model works across all Julia packages.

Read Also: Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Jeff Bezanson and Jameson Nash from Julia Computing, and Kiran Pamnany from Intel, say the Julia task parallelism is “inspired by parallel programming systems like Cilk, Intel Threading Building Blocks (TBB) and Go”. With multi-threaded task parallelism, the Julia model can schedule many parallel tasks that call library functions, and this works smoothly because the CPUs are not overcrowded with threads. This is an important feature for high-level languages, as they call library functions frequently.

How to resolve challenges while implementing task parallelism

Allocating and switching task stacks
Each task requires its own execution stack, distinct from the usual process or thread stacks provided by Unix operating systems. Julia has an alternate implementation of stack switching which trades time for memory when a task switches; however, it may not be compatible with foreign code that uses cfunction. This implementation is used when stacks consume large amounts of address space.

Event loop thread issues an async signal
If a thread needs the event loop thread to wake up, it issues an async signal. This may be because another thread has scheduled new work, because a thread is beginning to run garbage collection, or because a thread wants to take the I/O lock to perform I/O.

Task migration across system threads
In general, a task may start running on one thread, block for a while, and then restart on another. Julia uses thread-local variables every time memory is allocated internally. Currently, a task always runs on the thread it started on. To support this, Julia uses the concept of a sticky task, where a task must run on a given thread, along with per-thread queues for the running tasks associated with each thread.

Sleeping idle threads
To avoid keeping CPUs at 100% usage all the time, some tasks are put to sleep. This can lead to a synchronization problem, as some threads might be scheduled for new work while others remain asleep.

Dedicated scheduler task causes overhead problems
When a task blocks, the scheduler is called to pick another task to run. But on what stack does that code run? It is possible to have a dedicated scheduler task; however, there may be less overhead if the scheduler code runs in the context of the recently blocked task. One suggested measure is to pull a task out of the scheduler queue to avoid switching away.

Classic bugs
The Julia team faced many difficult bugs while implementing the multi-threaded functionality. One of them was a mysterious bug on Windows that was fixed by flipping a single bit.

Future goals for Julia version 1.3.0
- Increase performance work on task switches and I/O latency
- Allow task migration
- Use multiple threads in the compiler
- Improved debugging tools
- Provide alternate schedulers

Developers are impressed with the new multi-threaded parallelism functionality. A user on Hacker News comments, “Great to see this finally land - thanks for all the team's work. Looking forward to giving it a whirl. Threading is something of a prerequisite for acceptance as a serious language among many folks. So great to not just check that box, but to stick the pen right through it. The devil is always in the details, but from the doc the interface looks pretty nice.” Another user says, “This is huge! I was testing out the master branch a few days ago and the parallelism improvements were amazing.”

Many users expect Julia to challenge Python in the future. A comment on Hacker News reads, “Not only is this huge for Julia, but they've just thrown down the gauntlet. The status quo has been upset. I expect Julia to start eating everyone's lunch starting with Python. Every language can use good concurrency & parallelism support and this is the biggest news for all dynamic languages.” Another user says, “I worked in a computational biophysics department with lots of python/bash/R and I was the only one who wrote lots of high-performance code in Julia. People were curious about the language but it was still very much unknown. Hope we will see a broader adoption of Julia in the future - it's just that it is much better for the stuff we do on a daily basis.”

To learn how to use task parallelism in Julia, head over to the Julia blog.

- Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
- Announcing Julia v1.1 with better exception handling and other improvements
- Julia for machine learning. Will the new language pick up pace?


“Why was Rust chosen for Libra?”, US Congressman questions Facebook on Libra security design choices

Sugandha Lahoti
22 Jul 2019
6 min read
Last month, Facebook announced that it is going to launch its own cryptocurrency, Libra, and Calibra, a payment platform that sits on top of the cryptocurrency, unveiling its plans to develop an entirely new ecosystem for digital transactions. It also developed a new programming language, “Move”, for implementing custom transaction logic and “smart contracts” on the Libra Blockchain. The Move language is written entirely in Rust.

Although Facebook’s announcement garnered massive media attention and attracted investors and partners the likes of PayPal, loan platform Kiva, Uber, and Lyft, it had its own share of concerns. The US administration is worried about a non-governmental currency in the hands of big tech companies. In early July, the US Congress asked Facebook to suspend the implementation of Libra until the ramifications were investigated.

Last week, at the U.S. House Committee on Financial Services hearing investigating Libra’s security-related challenges, Congressman Denver Riggleman posed an important question to David Marcus, head of Calibra: why was the Rust language chosen for Libra?

Riggleman: I was really surprised about the Rust language. So my first question is, why was the Rust language chosen as the implementation language for Libra? Do you believe it's mature enough to handle the security challenges that will affect these large cryptocurrency transactions?

Marcus: The Libra association will own the repository for the code. While there are many flavors and branches being developed by third parties, only safe and verified code will actually be committed to the actual Libra code base which is going to be under the governance of the Libra association.

Riggleman: It looks like Libra was built on the nightly build of the Rust programming language. It's interesting because that's not how we did releases at the DoD. What features of Rust are only available in the nightly build that aren't in the official releases of Rust? Does Facebook see it as a concern that they are dependent on unofficially released features of the Rust programming language? Why the nightly releases? Do you see this as a function of the prototyping phase of this?

Marcus: Congressman, I don’t have the answers to your very technical questions but I commit that we will get back to you with more details on your questions.

Marcus appeared before two US congressional hearing sessions last week, where he was constantly grilled by legislators. The grilling led to a dramatic alteration in Libra’s strategy: Marcus has clarified that Facebook won't move forward with Libra until all concerns are addressed. Facebook’s original vision for Libra was an open and largely decentralized network beyond the reach of regulators; regulatory compliance would instead be the responsibility of exchanges, wallets, and other services in the Libra association. After the hearing, Marcus stated that the Libra Association would have a deliberately limited role in regulatory matters. Per Ars Technica, Calibra would follow US regulations on consumer protection, money laundering, sanctions, and so forth, but Facebook didn't seem to have plans for the Libra Association, Facebook, or any associated entity to police illegal activity on the Libra network as a whole.

The video clip sparked quite a discussion on Hacker News and Reddit, with people applauding the Q&A session. Some appreciated that legislators are now asking tough questions like these.

“It's cool to see a congressman who has this level of software dev knowledge and is asking valid questions.”

“Denver Riggleman was an Air Force intelligence officer for 11 years, then he became an NSA contractor. I'm not surprised he's asking reasonable questions.”

“I don't think I've ever heard of a Congressman going to GitHub, poking around in some open source code, and then asking very cogent and relevant questions about it. This video is incredible if only because of that.”

Others commented on why Congress may have trust issues with using a young programming language like Rust for something like Libra, which requires layers of privacy and security measures.

“Traditionally, government people have trust issues with programming languages as the compiler is, itself, an attack vector. If you are using a nightly release of the compiler, it may be assumed by some that the compiler is not vetted for security and could inject unstable or malicious code into another critical codebase. Also, Rust is considered very young for security type work, people rightly assume there are unfound weaknesses due to the newness of the language and related libraries”, reads one comment from Hacker News. Another adds, “Governments have issues with non-stable code because it changes rapidly, is untested and a security risk. Facebook moves fast and break things.”

Rust was declared the most loved programming language by developers in the Stack Overflow survey 2019, and this year most major platforms have jumped on the bandwagon of writing or rewriting components in Rust. Last month, after the release of Libra, Calibra tech lead Ben Maurer took to Reddit to explain why Facebook chose the language. Per Maurer, “As a project where security is a primary focus, the type-safety and memory-safety of Rust were extremely appealing. Over the past year, we've found that even though Rust has a high learning curve, it's an investment that has paid off. Rust has helped us build a clean, principled blockchain implementation. Part of our decision to choose Rust was based on the incredible momentum this community has achieved. We'll need to work together on challenges like tooling, build times, and strengthening the ecosystem of 3rd-party crates needed by security-sensitive projects like ours.”

Not just Facebook: last week, Microsoft announced plans to replace some of its C and C++ code with Rust, calling it a “modern safer system programming language” with great memory safety features. In June, the Brave ad-blocker released a new engine written in Rust which gives 69x better performance, and PyOxidizer, a Python application packaging and distribution tool written in Rust, was also introduced.

- “I’m concerned about Libra’s model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition
- Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
- Facebook releases Pythia, a deep learning framework for vision and language multimodal research


Facebook released Hermes, an open source JavaScript engine to run React Native apps on Android

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Facebook released a new JavaScript engine called Hermes under an open source MIT license. According to Facebook, the new engine will speed up start times for Android apps built with the React Native framework.

https://twitter.com/reactnative/status/1149347916877901824

Facebook software engineer Marc Horowitz unveiled Hermes at the Chain React 2019 conference held yesterday in Portland, Oregon. Hermes is a new tool for developers, primarily intended to improve app startup performance in the same way Facebook does for its own apps, and to make apps more efficient on low-end smartphones. The supposed advantage of Hermes is that developers can target all three mobile platforms with a single code base; but as with any cross-platform framework, there are trade-offs in terms of performance, security and flexibility. Hermes is available on GitHub for all developers to use, and it also has its own Twitter account and home page.

In a demo, Horowitz showed that a React Native app with Hermes was fully loaded in half the time the same app without Hermes took, or about two seconds faster. Check out the video below.

Horowitz emphasized that Hermes cuts the APK size (the size of the app file) to half the 41MB of a stock React Native app, and removes a quarter of the app's memory usage. In other words, with Hermes developers can get users interacting with an app faster, with fewer obstacles like slow download times and the constraints caused by multiple apps sharing limited memory resources, especially on lower-end phones. And these are exactly the phones Facebook is aiming at with Hermes, compared to the fancy high-end phones that well-paid developers typically use themselves.

"As developers we tend to carry the latest flagship devices. Most users around the world don't," he said. "Commonly used Android devices have less memory and less storage than the newest phones and much less than a desktop. This is especially true outside of the United States. Mobile flash is also relatively slow, leading to high I/O latency."

It's not every day that a new JavaScript engine is born, and while there are plenty of such engines available for browsers, like Google's V8, Mozilla's SpiderMonkey and Microsoft's Chakra, Horowitz notes that Hermes is not aimed at browsers or, for example, at Node.js on the server side. "We're not trying to compete in the browser space or the server space. Hermes could in theory be for those kinds of use cases, that's never been our goal." The Register reports that Facebook has no plan to push Hermes beyond React Native to Node.js or to turn it into the foundation of a Facebook-branded browser. This is because it's optimized for mobile apps and wouldn't offer advantages over other engines in other usage scenarios.

Hermes tries to be efficient through bytecode precompilation rather than loading JavaScript and then parsing it: it employs ahead-of-time (AOT) compilation during the mobile app build process to allow for more extensive bytecode optimization. Along similar lines, the Fuchsia Dart compiler for iOS is an AOT compiler. There are other ways to squeeze more performance out of JavaScript; the V8 engine, for example, offers a capability called custom snapshots, although this is a bit more technically demanding than using Hermes. Hermes also abandons the just-in-time (JIT) compiler used by other JavaScript engines to compile frequently interpreted code into machine code, because in the context of React Native the JIT doesn't do much to ease mobile app workloads.

The reason Hermes exists, as per Facebook, is to make React Native better. "Hermes allows for more optimization on mobile since developers control the build stack," said a Facebook spokesperson in an email to The Register. "For example, we implemented bytecode precompilation to improve performance and developed more efficient garbage collection to reduce memory usage."

In a discussion on Hacker News, Microsoft developer Andrew Coates claims that internal testing of Hermes and React Native in conjunction with Microsoft Office for Android shows a time-to-interactive (TTI) of 1.1s with Hermes, compared to 1.4s with V8, and a 21.5MB runtime memory impact, compared to 30MB with V8.

Hermes is mostly compatible with ES6 JavaScript. To keep the engine small, support for some language features is missing, such as with statements and local-mode eval(). Facebook’s spokesperson also told The Register that they plan to publish benchmark figures in the next week to support the performance claims.

- Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter
- OpenID Foundation questions Apple’s Sign In feature, says it has security and privacy risks
- Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more


OpenJDK Project Valhalla’s LW2 early access builds are now available for you to test

Bhagyashree R
09 Jul 2019
3 min read
Last week, the early access builds for OpenJDK Project Valhalla's LW2 phase were released; the phase was first proposed in October last year. LW2 is the next iteration of the L-World series, bringing further language and JDK API support for inline types.

https://twitter.com/SimmsUpNorth/status/1147087960212422658

Proposed in 2014, Project Valhalla is an experimental OpenJDK project under which the team is working on major new language features and enhancements for Java 10 and beyond. The new features and enhancements fall into the following focus areas:

- Value types
- Generic specialization
- Reified generics
- Improved 'volatile' support

The LW2 specifications

Javac source support
Starting from LW2, the prototype is based on the mainline JDK (currently version 14), which is why it requires a source level >= JDK 14. A class is declared an inline type using the ‘inline class’ modifier or the ‘@__inline__’ annotation. Interfaces, annotation types, and enums cannot be declared as inline types, while top-level, inner, and local classes may be. As inline types are implicitly final, they cannot be abstract, and all instance fields of an inline class are implicitly final. Inline types implicitly extend ‘java.lang.Object’, similar to enums, annotation types, and interfaces. "Indirect" projections of inline types are supported via the "?" operator, and javac now allows using the ‘==’ and ‘!=’ operators to compare inline types.

Java APIs
Among the new or modified APIs are ‘isInlineClass()’, ‘asPrimaryType()’, ‘asIndirectType()’, ‘isIndirectType()’, ‘asNullableType()’, and ‘isNullableType()’. The ‘getName()’ method now reflects the Q or L type signatures for arrays of inline types. Using ‘newInstance()’ on an inline type will throw ‘NoSuchMethodException’, and ‘setAccessible()’ will throw ‘InaccessibleObjectException’. With LW2, initial core Reflection and VarHandles support are in place.

Runtime
Attempting to synchronize on, or call wait(*) or notify*() on, an inline type will throw IllegalMonitorException. ‘ClassCircularityError’ is thrown when loading an instance field of an inline type which declares its own type, either directly or indirectly. ‘NotSerializableException’ will be thrown when attempting to serialize an inline type, and casting from an indirect type to an inline type may result in a ‘NullPointerException’.

Download the early access binaries to test this prototype. These were some of the specifications of the LW2 iteration; check out the full list of specifications on OpenJDK’s official website, and stay tuned to the current happenings in Project Valhalla.

- Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]
- Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more
- Firefox 67 will come with faster and reliable JavaScript debugging tools


Rust 1.36.0 releases with a stabilized ‘Future’ trait, NLL for Rust 2015, and more

Bhagyashree R
05 Jul 2019
3 min read
Yesterday, the team behind Rust announced the release of Rust 1.36.0. This release brings a stabilized ‘Future’ trait, NLL for Rust 2015, a stabilized ‘alloc’ crate as the core allocation and collections library, a new --offline flag for Cargo, and more. Following are some of the updates in Rust 1.36.0:

The stabilized ‘Future’ trait
A ‘Future’ in Rust represents an asynchronous value that allows a thread to continue doing useful work while it waits for the value to become available. This trait has long been awaited by Rust developers, and with this release it has finally been stabilized. “With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future,” the Rust release team added.

The alloc crate is stable
The ‘std’ crate of the standard library provides types like Box<T> and OS functionality, but it requires a global allocator and other OS capabilities. Beginning with Rust 1.36.0, the parts of std that depend only on a global allocator are available in the ‘alloc’ crate, and std will re-export these parts.

Use MaybeUninit<T> instead of mem::uninitialized
Previously, the ‘mem::uninitialized’ function allowed you to bypass Rust’s memory-initialization checks by pretending to generate a value of type T without doing anything. Though the function has proven handy while lazily allocating arrays, it can be dangerous in many other scenarios because the Rust compiler simply assumes that values are properly initialized. In Rust 1.36.0, the MaybeUninit<T> type has been stabilized to solve this problem: the compiler understands that it should not assume a MaybeUninit<T> is a properly initialized T. This enables you to do gradual initialization more safely and eventually call ‘.assume_init()’.

Non-lexical lifetimes (NLL) for Rust 2015
The Rust team introduced NLL in December last year when announcing Rust 1.31.0. It is an improvement to Rust’s static model of lifetimes that makes the borrow checker smarter and more user-friendly. When it was first announced, it was only stabilized for Rust 2018; now the team has backported it to Rust 2015 as well. In the future, we can expect all Rust editions to use NLL.

--offline support in Cargo
Previously, Cargo, the Rust package manager, would exit with an error if it needed to access the network and the network was not available. Rust 1.36.0 comes with a new flag called ‘--offline’ that makes the dependency resolution algorithm use only locally cached dependencies, even if there might be a newer version.

These were some of the updates in Rust 1.36.0. Read the official announcement to know more in detail.

- Introducing Vector, a high-performance data router, written in Rust
- Brave ad-blocker gives 69x better performance with its new engine written in Rust
- Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

After a third try, Npm Inc settles claims of former employees who were fired for being pro-union, The Register reports

Fatema Patrawala
04 Jul 2019
5 min read
Yesterday, reports from The Register confirmed that the JavaScript package registry NPM Inc. and three former employees who were fired have agreed on a settlement. NPM, which stands for Node Package Manager, is the company behind the widely used NPM JavaScript package repository. In March, the company laid off five employees in an unprofessional and unethical manner, and in April three of the five former staffers – Graham Carlson, Audrey Eschright, and Frédéric Harper – formally accused NPM Inc of union busting in a complaint to the US National Labor Relations Board.

https://twitter.com/bram_parsons/status/1146230097617178625

The deal was settled after the third round of negotiations between the two parties, per The Register. In a filing posted on the NLRB website, administrative law judge Gerald Etchingham said he had received a letter from one of the attorneys involved in the dispute stating that both sides had agreed to settle. The terms of the deal were not disclosed, but NLRB settlements in such cases usually involve back pay, job restoration or additional compensation. It is, however, highly unlikely that any of the former employees will seek reinstatement or return to npm. Beyond this, NPM Inc is also required to share a letter with current employees acknowledging the ways in which it violated the law, but there are no reports of this action from NPM Inc yet.

https://twitter.com/techworkersco/status/1146255087968239616

Audrey Eschright, one of the plaintiffs, complained on Twitter about the company's behaviour and its earlier refusals to settle the claims. "I'm amazed that NPM has rejected their latest opportunity to settle the NLRB charges and wants to take it to court," she wrote. "Doing so continues the retaliation I and my fellow claimants experienced. We're giving up our own time, making rushed travel plans, and putting in a lot of effort because we believe our rights as workers are that important." According to Eschright, NPM Inc refused to settle because the CEO took the legal challenge personally. "Twice their lawyers have spent hours to negotiate an agreement with the NLRB, only to withdraw their offer," she elaborated on Twitter. "The only reason we've heard has been about Bryan Bogensberger's hurt feelings."

The Register also mentioned that last week NPM Inc tried to push back a hearing scheduled for 8th July, citing the reason that management was traveling for extensive fundraising. The NLRB denied the request, saying the reason was not justified and that NPM Inc "ignores the seriousness of these cases, which involve three nip-in-the-bud terminations at the onset of an organizing drive."

NPM Inc does indeed seem to ignore the seriousness of this case, and it also overlooks the fact that the npm registry coordinates the distribution of hundreds of thousands of modules used by some 11 million JavaScript developers around the world. With the company's management making irrational decisions and behaving poorly, the code for the npm command-line interface (CLI) suffers from neglect, with unfixed bugs piling up and pull requests languishing.

https://twitter.com/npmstatus/status/1146055266057646080

On Monday, there were reports of an npm 6.9.1 bug caused by a .git folder present in the published tarball. Kat Marchán, the Community Architect at the time, had to release npm 6.9.2 to fix the issue. Shortly after, Marchán, who was a CLI and Community Architect at npm, also quit the company.

Marchán made the announcement yesterday on Twitter, adding that she is no longer a maintainer of the npm CLI or its components.

https://twitter.com/maybekatz/status/1146208849206005760

Another ex-npm employee noted, on Marchán’s resignation, that every modern web framework depends on npm, and that npm is inseparable from Kat’s passionate brilliance.

https://twitter.com/cowperthwait/status/1146209348135161856

NPM Inc. now not only needs to fix bugs; it also needs to fix its relationship with, and reputation among, the JavaScript community.

Update on 20th September - NPM Inc. CEO resigns
News reports say that NPM CEO Bryan Bogensberger has resigned, effective immediately, in order to pursue new opportunities. NPM's board of directors has commenced a search for a new CEO, and in the meantime the company's leadership will be managed collaboratively by a team of senior npm executives. "I am proud of the complete transformation we have been able to make in such a short period of time," said Bogensberger. "I wish this completely revamped, passionate team monumental success in the years to come!" Before joining npm, Inc., Bogensberger spent three years as CEO and co-founder of Inktank, a leading provider of scale-out, open source storage systems that was acquired by Red Hat, Inc. for $175 million in 2014. He has also served as vice president of business strategy at DreamHost, vice president of marketing at Joyent, and CEO and co-founder of Reasonablysmart, which Joyent acquired in 2009. To know more, check out the PR Newswire website.

- Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?
- Surprise NPM layoffs raise questions about the company culture


The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Bhagyashree R
27 Jun 2019
5 min read
Yesterday, the Go team shared the details of what is coming in Go 1.13, the first release implemented using the new proposal evaluation process. In this process, feedback is taken from the community on a small number of proposals to reach a final decision. The team also shared which proposals they have selected to implement in Go 1.14, along with the next steps. At GopherCon 2017, Russ Cox, Go programming language tech lead at Google, first disclosed the plan behind the implementation of Go 2. The plan was simple: the updates will be done in increments and will have minimal to no effect on everybody else.

Updates in Go 1.13
Go 1.13, which marks the first increment towards Go 2, is planned for release in early August this year. A number of language changes have landed in this release, shortlisted from the huge list of Go 2 proposals based on the new proposal evaluation process. These proposals are selected under the criteria that they should address a problem, have minimal disruption, and provide a clear and well-understood solution. The team selected “relatively minor and mostly uncontroversial” proposals for this version. These changes are backwards compatible; modules, Go’s new dependency management system, is not the default build mode yet. Go 1.11 and Go 1.12 include preliminary support for modules that makes dependency version information explicit and easier to manage.

Proposals planned to be implemented in Go 1.13
The proposals that were initially planned to be implemented in Go 1.13 were:

- General Unicode identifiers based on Unicode TR31: This proposes to add support for enabling programmers using non-Western alphabets to combine characters in identifiers and export uncased identifiers.
- Binary integer literals and support for _ in number literals: Go comes with support for octal, hexadecimal, and standard decimal literals. However, unlike other mainstream languages like Java 7, Python 3, and Ruby, it does not have support for binary integer literals. This proposes adding binary integer literals with a new prefix, 0b or 0B. Another minor update is support for a blank (_) as a separator in number literals to improve the readability of long numbers.
- Permit signed integers as shift counts: This proposes to change the language spec such that the shift count can be a signed or unsigned integer, or any non-negative constant value that can be represented as an integer.

Of these shortlisted proposals, the binary integer literals, separators for number literals, and signed integer shift counts have been implemented (a short example follows below). The general Unicode identifiers proposal was not implemented as there was no “concrete design document in place in time.“ The proposal to support binary integer literals was significantly expanded, which led to an overhauled and modernized number literal syntax for Go.
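As a quick illustration, here is a minimal program that exercises the new literal syntax and signed shift counts; it should build with a Go 1.13 (or later) toolchain and will not compile with earlier versions.

```go
// Minimal illustration of the Go 1.13 literal and shift-count changes.
package main

import "fmt"

func main() {
	flags := 0b1010_0001 // binary integer literal with a blank (_) separator
	budget := 1_000_000  // separators also work in ordinary decimal literals
	shift := 3           // a signed int is now accepted as a shift count

	fmt.Println(flags, budget, budget<<shift)
}
```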
Updates in Go 1.14
After the relatively minor updates in Go 1.13, the team plans to take things up a notch with Go 1.14. With the new major version, Go 2, the overarching goal is to provide programmers with improved scalability. To achieve this, the team has to tackle the three biggest hurdles: package and version management, better error handling support, and generics. The first hurdle, package and version management, will be addressed by the modules feature, which is growing stronger with each release. For the other two, the team presented draft designs at last year’s GopherCon in Denver.

https://youtu.be/6wIP3rO6On8

Proposals planned to be implemented in Go 1.14
Following are the proposals that are shortlisted for Go 1.14:

- A built-in Go error check function, ‘try’: This proposes a new built-in function named ‘try’ for error handling. It is designed to remove the boilerplate ‘if’ statements typically associated with error handling in Go (a sketch of how ‘try’ would work appears at the end of this article).
- Allow embedding overlapping interfaces: This is a backwards-compatible proposal to make interface embedding more tolerant.
- Diagnose ‘string(int)’ conversion in ‘go vet’: This proposes to remove the explicit type conversion string(i), where ‘i’ has an integer type other than ‘rune’. The team is willing to make this backwards-incompatible change because the conversion was introduced in the early days of Go and has become quite confusing to comprehend.
- Adopt crypto principles: This proposes to implement the design principles for cryptographic libraries outlined in the Cryptography Principles document.

The team is now seeking community feedback on these proposals. “We are especially interested in fact-based evidence illustrating why a proposal might not work well in practice or problematic aspects we might have missed in the design. Convincing examples in support of a proposal are also very helpful,” the blog post reads.

While developers are confident that Go 2 will bring a lot of exciting features and enhancements, not everyone is a fan of some of the proposed features, for instance the try function. “I dislike the try implementation, one of Go's strengths for me after working with Scala is the way it promotes error handling to a first class citizen in writing code, this feels like its heading towards pushing it back to an afterthought as tends to be the case with monadic operations,” a developer commented on Hacker News. Some Twitter users also expressed their dislike of the proposed try function:

https://twitter.com/nicolasparada_/status/1144005409755357186
https://twitter.com/dullboy/status/1143934750702362624

These were some of the updates proposed for Go 1.13 and Go 1.14. To know more about this news, check out the Go Blog.

- Go 1.12 released with support for TLS 1.3, module support among other updates
- Go 1.12 Release Candidate 1 is here with improved runtime, assembler, ports and more
- State of Go February 2019 – Golang developments report for this month released
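As a footnote on the most debated item above: the ‘try’ draft describes a built-in that desugars an explicit error check. The following is a rough sketch adapted from the style of example used in the proposal document; it is proposal syntax only, was never part of any released Go version, and does not compile with any released toolchain.

```go
// Sketch of the proposed 'try' built-in (Go 1.14 error-handling draft); not valid Go.
// Imports of "os" and "io" are assumed.
func CopyFile(src, dst string) error {
	r := try(os.Open(src)) // on error, return that error from CopyFile
	defer r.Close()

	w := try(os.Create(dst))
	defer w.Close()

	try(io.Copy(w, r))
	return nil
}

// Under the proposal, `x := try(f())` expands to roughly:
//
//	x, err := f()
//	if err != nil {
//		return ..., err // zero values for other results, plus the error
//	}
```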


Elixir 1.9 is now out with built-in ‘releases’, a new streamlined configuration API, and more

Bhagyashree R
25 Jun 2019
4 min read
After releasing Elixir 1.8 in January, the team behind Elixir announced the release of Elixir 1.9 yesterday. This version comes with a new ‘releases’ feature, a Config API for streamlined configuration, and many other enhancements and bug fixes. Elixir is a functional, concurrent, general-purpose programming language that runs on the Erlang VM.

Releases, a single unit for code and the runtime
Releases are the most important feature that has landed in this version. A release is a “self-contained directory” that encapsulates not only your application code and its dependencies but also the whole Erlang VM and runtime. It allows you to precompile and package your code and runtime in a single unit, and then deploy that unit to a target running the same OS distribution and version as the machine on which the ‘mix release’ command was run. Among the benefits releases provide:

- Code preloading: Because releases run in embedded mode, all modules are loaded beforehand, making the system ready to handle requests right after booting.
- Configuration and customization: Releases give you “fine-grained control” over system configuration and the VM flags used to start the system.
- Multiple releases: You can assemble different releases of the same application with different configurations.
- Management scripts: Releases provide management scripts to start, restart, connect to the running system remotely, execute RPC calls, run in daemon mode, run in Windows service mode, and more.

Releases are also the last planned feature for Elixir, and the team is not planning to add any other user-facing feature in the near future. The Elixir team shared in the announcement, “Of course, it does not mean that v1.9 is the last Elixir version. We will continue shipping new releases every 6 months with enhancements, bug fixes, and improvements.”

A streamlined configuration API
This version comes with a more streamlined configuration API in the form of a new ‘Config’ module. Previously, the ‘Mix.Config’ configuration API was part of the Mix build tool. Beginning with Elixir 1.9, runtime configuration is handled by releases, and since Mix is no longer included in releases, this API has been ported to Elixir itself. “In other words, ‘use Mix.Config’ has been soft-deprecated in favor of import Config,” the announcement reads. Another crucial change is that starting from this release the ‘mix new’ command will not generate a ‘config/config.exs’ file, and ‘mix new --umbrella’ will not generate a configuration for each child app, as configuration has moved from the individual umbrella applications to the root of the umbrella.

Many developers are excited about the releases support. One user praised the feature, saying, “Even without the compilation and configuration stuff, it's easier to put the release bundle in something basic like an alpine image, rather than keep docker image versions and app in sync.” However, as many currently rely on the Distillery tool for deployment, some have reservations about using releases, since they lack some of the features Distillery provides. “Elixir's `mix release` is intended to replace (or remove the need for) third-party packages like Distillery. However, it's not there yet, and Distillery is strictly more powerful at the moment. Notably, Elixir's release implementation does not support hot code upgrades. I use upgrades all the time, and won't be trying out Elixir's releases until this shortcoming is addressed,” a Hacker News user commented.

Public opinion on Twitter was also positive:

https://twitter.com/C3rvajz/status/1140351455691444225
https://twitter.com/rrrene/status/1143443465549897733

- Why Ruby developers like Elixir
- How Change.org uses Flow, Elixir’s library to build concurrent data pipelines that can handle a trillion messages
- Introducing Mint, a new HTTP client for Elixir


Rust’s original creator, Graydon Hoare on the current state of system programming and safety

Bhagyashree R
20 Jun 2019
4 min read
Back in July 2010, Graydon Hoare showcased the Rust programming language for the very first time at the Mozilla Annual Summit. Rust is an open-source systems programming language created with speed, memory safety, and parallelism in mind. Given Rust’s memory and thread safety guarantees, its supportive community, and its quickly evolving toolchain, many major projects are being rewritten in Rust. One of the major ones is Servo, an HTML rendering engine that will eventually replace Firefox’s rendering engine; Mozilla is also using Rust to rewrite many other key parts of Firefox under Project Quantum. Fastly chose Rust to implement Lucet, its native WebAssembly compiler and runtime. More recently, Facebook chose Rust to implement its controversial Libra blockchain.

With the 9th anniversary of the day Hoare first presented Rust to a large audience approaching, The New Stack conducted a very interesting interview with him. In the interview, he talked about the current state of systems programming, how safe he considers our current complex systems to be, how they can be made safer, and more. Here are the key highlights from the interview:

Hoare on a brief history of Rust
Hoare started working on Rust as a side project in 2006. Mozilla, his employer at the time, got interested in the project and provided him with a team of engineers to help in the further development of the language. In 2013, he experienced burnout and decided to step down as technical lead. After working on some less time-sensitive projects, he quit Mozilla and worked for the payment network Stellar. In 2016, he got a call from Apple to work on the Swift programming language.

Rust is now developed by the core teams and an active community of volunteer coders. The language he once described as a “spare-time kinda thing” is being used by many developers to create a wide range of new software applications, from operating systems to simulation engines for virtual reality. It was also the most loved programming language in the Stack Overflow Developer Survey for four years in a row (2016-2019). Hoare was very humble about the hard work and dedication he has put into creating Rust. When asked to summarize Rust’s history, he simply said that “we got lucky”. He added, “that Mozilla was willing to fund such a project for so long; that Apple, Google, and others had funded so much work on LLVM beforehand that we could leverage; that so many talented people in academia, industry and just milling about on the internet were willing to volunteer to help out.”

The current state of system programming and safety
Hoare considers the state of systems programming languages “healthy” compared with the first couple of decades of his career. It is now far easier to sell a language that is focused on performance and correctness, and we are seeing more good languages come to market because of the increasing interaction between academia and industry. When asked about safety, Hoare believes that although we are slowly taking steps towards better safety, the overall situation is not improving. He attributes this to the number of new, complex computing systems being built. He said, “complexity beyond comprehension means we often can’t even define safety, much less build mechanisms that enforce it.” Another reason, according to him, is the huge amount of vulnerable software already in the field that can be exploited at any time by a bad actor. For instance, on Tuesday, a zero-day vulnerability was fixed in Firefox that was being “exploited in the wild” by attackers. “Like much of the legacy of the 20th century, there’s just a tremendous mess in software that’s going to take generations to clean up, assuming humanity even survives that long,” he adds.

How system programming can be made safer
Hoare designed Rust with safety in mind; its rich type system and ownership model ensure memory and thread safety. However, he suggests that we can do a lot better when it comes to safety in systems programming. He listed a number of improvements that could be implemented: “information flow control systems, effect systems, refinement types, liquid types, transaction systems, consistency systems, session types, unit checking, verified compilers and linkers, dependent types.” Hoare believes many such features have already been suggested by academia; the main challenge is to implement them “in a balanced, niche-adapted language that’s palatable enough to industrial programmers to be adopted and used.”

You can read Hoare’s full interview on The New Stack.

- Rust 1.35.0 released
- Rust shares roadmap for 2019
- Rust 1.34 releases with alternative cargo registries, stabilized TryFrom and TryInto, and more

Python 3.8 beta 1 is now ready for you to test

Bhagyashree R
11 Jun 2019
2 min read
Last week, the team behind Python announced the release of Python 3.8.0b1, the first of four planned beta release previews of Python 3.8. This release marks the beginning of the beta phase, in which you can test new features and make your applications ready for the new release.

https://twitter.com/ThePSF/status/1137797764828553222

These are some of the features you will see in the upcoming Python 3.8 version:

Assignment expressions
Assignment expressions were proposed in PEP 572, which was accepted after an extensive discussion among Python developers. This feature introduces a new operator (:=) with which you can assign to variables within an expression.

Positional-only arguments
In Python, you can pass an argument to a function by position, keyword, or both. API designers may sometimes want to restrict passing arguments by position only. To make this easy to express, Python 3.8 will come with a new marker (/) to indicate that the arguments to its left are positional-only. This is similar to *, which indicates that the arguments to its right are keyword-only.

Python Initialization Configuration
Python is highly configurable, but the configuration options are scattered all around the code. This version introduces new functions and structures in the Python Initialization C API to give Python developers a “straightforward and reliable way” to configure Python.

The Vectorcall protocol for CPython
The calling convention considerably impacts the flexibility and performance of your code. To optimize the calling of objects, this release introduces the Vectorcall protocol, a calling convention already used internally for Python and built-in functions.

Runtime audit hooks
Python 3.8 will come with two new APIs, Audit Hook and Verified Open Hook, to give you insight into a running Python application. These will help both application developers and system administrators integrate Python into their existing monitoring systems.

As this is a beta release, developers should refrain from using it in production environments. The next beta release is currently planned for July 1st. To know more about Python 3.8.0b1, check out the official announcement.

- Which Python framework is best for building RESTful APIs? Django or Flask?
- PyCon 2019 highlights: Python Steering Council discusses the changes in the current Python governance structure
- Python 3.8 alpha 2 is now available for testing


Nim 0.20.0 (1.0 RC1) released with many new features, library additions, language changes and bug fixes

Vincy Davis
07 Jun 2019
6 min read
Yesterday, the Nim team announced the release of Nim version 0.20.0. Nim is a statically typed, compiled systems programming language that combines concepts from mature languages like Python, Ada and Modula. This is a massive release for Nim, with more than 1,000 commits, and version 0.20.0 is effectively Nim 1.0 RC1. The team has also mentioned that the stable 1.0 release will either be Nim 0.20.0 promoted to 1.0 status or another release candidate, as there will be no more breaking changes. Version 1.0 will be a long-term supported stable release and will only receive bug fixes and new features in the future, as long as they don't break backwards compatibility.

New features
- not is always a unary operator
- Stricter compile-time checks for integer and float conversions
- Tuple unpacking for constant and for-loop variables
- Hash sets and tables are initialized by default
- Better error message for case statements
- The length of a table must not change during iteration
- Better error message for index out of bounds

Changelog
Nim version 0.20.0 includes many changes affecting backwards compatibility. One of them is that strutils.editDistance has been deprecated; use editdistance.editDistance or editdistance.editDistanceAscii instead. Among the breaking changes in the standard library, osproc.execProcess now also takes a workingDir parameter, and std/sha1.secureHash now accepts openArray[char], not string. There are a few breaking changes in the compiler too; the main one is that the compiler now implements the “generic symbol prepass” for when statements in generics.

Library additions
There are many new library additions in this release. Some of them are:
- stdlib module std/editdistance as a replacement for the deprecated strutils.editDistance
- stdlib module std/wordwrap as a replacement for the deprecated strutils.wordwrap
- Added split, splitWhitespace, size, alignLeft, align, strip, repeat procs and iterators to unicode.nim
- Added or for NimNode in macros
- Added system.typeof for more control over how type expressions can be deduced

Library changes
Many changes have been made in the library. Some of them are:
- The string output of the macros.lispRepr proc has been tweaked slightly. The dumpLisp macro in this module now outputs an indented proper Lisp, devoid of commas.
- Added macros.signatureHash, which returns a stable identifier derived from the signature of a symbol.
- In strutils, empty strings no longer match as substrings.
- The Complex type is now a generic object and not a tuple anymore.
- The ospaths module is now deprecated; use os instead. Note that os is available in a NimScript environment, but unsupported operations produce a compile-time error.

Language additions
There have been new additions to the language as well. Some of them are:
- VM support for float32<->int32 and float64<->int64 casts was added.
- There is a new pragma block, noSideEffect, that works like the gcsafe pragma block.
- User-defined pragmas are now allowed in pragma blocks.
- Pragma blocks are no longer eliminated from the typed AST tree, to preserve pragmas for further analysis by macros.

Language changes
- The standard extension for SCF (source code filters) files was changed from .tmpl to .nimf.
- Pragma syntax is now consistent. The previous syntax, where type pragmas did not follow the type name, is deprecated; a pragma before a generic parameter list is also deprecated, to be consistent with how pragmas are used with a proc.
- Hash sets and tables are initialized by default; the explicit initHashSet, initTable, etc. are not needed anymore.

Tool changes
- jsondoc now includes a moduleDescription field with the module description. jsondoc0 shows comments as their own objects, as shown in the documentation.
- nimpretty: --backup now defaults to off instead of on, and the flag was undocumented; use git instead of relying on backup files.
- koch now defaults to building the latest stable Nimble version unless you explicitly ask for the latest master version via --latest.

Compiler changes
- The deprecated fmod proc is now unavailable on the VM.
- A new --outdir option was added.
- The compiled JavaScript file for the project produced by executing nim js will no longer be placed in the nimcache directory.
- --hotCodeReloading has been implemented for the native targets. The compiler also provides a new, more flexible API for handling hot code reloading events in the code.
- The compiler now supports an --expandMacro:macroNameHere switch for easy introspection into what a macro expands into.
- The -d:release switch no longer disables runtime checks. For a release build that also disables runtime checks, use -d:release -d:danger or simply -d:danger.

Nim version 0.20.0 also contains many bug fixes.

Most developers are quite delighted with the release. A user on Hacker News states, “It's impressive. 1000 commits! Great job Nim team!” Another user comments, “I've been full steam on the Nim train for the past year. It really hits a sweet spot between semantic complexity and language power. If you've used any mainstream language and understand types, you already understand 80% of the semantics you need to be productive. But more advanced features (generics, algebraic data types, hygienic macros) are available when needed. Now that the language is approaching 1.0, the only caveat is a small ecosystem and community. Nim has completely replaced Node as my language of choice for side projects and prototyping.”

Some users still prefer Python for its ecosystem. One says, “It seems to me that the benefits of Nim over Python are far smaller than the benefits of Python's library ecosystem. I'm pretty happy with Python though. It seems like Nim's benefits couldn't be that big. I consider Python "great", so the best Nim could be is "great-er", as a core language I mean. I've been rooting for Nim, but haven't actually tried it. And my use case is pretty small, I admit.”

These are a select few of the updates; more information is available on the Nim blog.

- Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey
- Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta
- Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more


Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study

Amrata Joshi
07 Jun 2019
7 min read
Researchers from Northeastern University, the University of Massachusetts Amherst, and the Czech Technical University in Prague published a paper on the impact of programming languages on code quality, which is a reproduction of work by Ray et al. published in 2014 at the Foundations of Software Engineering (FSE) conference. That work claims to reveal an association between 11 programming languages and software defects in projects hosted on GitHub. The original paper by Ray et al. was well-regarded in the software engineering community and was nominated as a Communications of the ACM (CACM) research highlight.

The above mentioned researchers conducted a study to validate the claims from the original work. Their experimental repetition was only partially successful, reproducing an association with defects for 10 of the languages. They then conducted an independent reanalysis, which revealed a number of flaws in the original study. Finally, the results suggested that only four languages are significantly associated with defects, and the effect size (the correlation between the two variables) for them is extremely small. Let us take a look at the three analyses in brief:

2014 FSE paper: Does programming language choice affect software quality?

The question that arises from the study by Ray et al., published at the 2014 Foundations of Software Engineering (FSE) conference, is: what is the effect of programming language on software quality? The results reported in the FSE paper, and later repeated in the follow-up works, are based on an observational study of a corpus of 729 GitHub projects written in 17 programming languages. To measure the quality of code, the authors identified, annotated, and tallied commits that are supposed to indicate bug fixes. The authors then fitted a Negative Binomial regression against the labeled data to answer the following research questions (a minimal sketch of this kind of model appears after the list of questions):

RQ1 Are some languages more defect prone than others?

The original paper concluded that “Some languages have a greater association with defects than others, although the effect is small.” The conclusion was that Haskell, Clojure, TypeScript, Scala and Ruby were less error-prone, whereas C, JavaScript, C++, Objective-C, PHP, and Python were more error-prone.

RQ2 Which language properties relate to defects?

The original study concluded that “There is a small but significant relationship between language class and defects. Functional languages have a smaller relationship to defects than either procedural or scripting languages.” In other words, functional and strongly typed languages showed fewer errors, whereas procedural, unmanaged, and weakly typed languages induced more errors.

RQ3 Does language defect proneness depend on domain?

A mix of automatic and manual methods was used to classify projects into six application domains. The paper concluded that “There is no general relationship between domain and language defect proneness”. It can be noted that the variation in defect proneness comes from the languages themselves, which makes the domain a less indicative factor.

RQ4 What's the relation between language & bug category?

The study concluded that “Defect types are strongly associated with languages. Some defect type like memory error, concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall.” It can be concluded that, for memory errors, languages with manual memory management have more errors. Java stands out as the only garbage-collected language that is associated with more memory errors. For concurrency, Python, JavaScript, and similar languages have fewer errors than languages with concurrency primitives.
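To make the modeling step above more concrete, here is a minimal sketch of the kind of Negative Binomial regression the FSE paper describes, written in Python with pandas and statsmodels rather than the authors' original R pipeline. The toy data, its column names (language, commits, bug_fix_commits) and the single language predictor are illustrative assumptions only; the actual study labeled bug-fixing commits from commit messages and controlled for additional factors such as project age, size, and number of developers.

```python
# Hypothetical sketch (not the authors' original R code) of a Negative Binomial
# regression of per-project bug-fix commit counts on language indicators,
# using total commits as the exposure.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy project-level data standing in for the GitHub corpus; the column
# names and numbers are made up for illustration.
projects = pd.DataFrame({
    "language": ["C", "Haskell", "Python", "C", "Haskell", "Python"],
    "commits": [1200, 800, 950, 400, 300, 500],
    "bug_fix_commits": [310, 90, 180, 95, 40, 110],
})

# One indicator column per language; the real study added further controls.
X = pd.get_dummies(projects["language"], drop_first=True).astype(float)
X = sm.add_constant(X)

model = sm.GLM(
    projects["bug_fix_commits"],
    X,
    family=sm.families.NegativeBinomial(),
    offset=np.log(projects["commits"]),  # more commits means more chances for bug fixes
)
result = model.fit()
print(result.summary())  # per-language coefficients and p-values
```

Reading the fitted coefficients mirrors RQ1: a positive, statistically significant coefficient for a language indicator would suggest that projects in that language accumulate more bug-fix commits than the baseline, all else being equal.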
Experimental repetition method performed to obtain similar results from the original study

The original study used methods of data acquisition, data cleaning, and statistical modeling, and the researchers planned an experimental repetition around the same steps. Their first objective was to repeat the analyses of the FSE paper and obtain the same results. They used an artifact from the original authors that contained 3.45 GB of processed data and 696 lines of R code for loading the data and performing the statistical modeling. In a repetition, a script generates results that are then matched against the results in the published paper. The researchers wrote new R scripts to mimic all of the steps from the original manuscript, and found it essential to automate the production of all tables, numbers, and graphs in order to iterate multiple times. They concluded that the repetition was partly successful. According to them:

RQ1 produced small differences but qualitatively similar conclusions. The researchers were mostly able to replicate RQ1 based on the artifact provided by the authors, but found 10 languages with a statistically significant association with errors, instead of the eleven reported.
RQ2 could have been repeated, but they noted issues with language classification.
RQ3 could not be repeated, as the code was missing and their reverse engineering attempts failed.
RQ4 could not be repeated because of irreconcilable differences in the data, and because the artifact didn't contain the code which implemented the NBR for bug types.

The Reanalysis method confirms flaws in the FSE study

Their second objective was to carry out a reanalysis of RQ1 of the FSE paper, i.e., whether some languages are more defect prone than others. A reanalysis differs from a repetition in that it proposes alternative data processing and statistical analyses to address methodological weaknesses of the original work. The researchers again worked through data processing, data cleaning, and statistical modeling.

According to the researchers, the p-values for Objective-C, JavaScript, C, TypeScript, PHP, and Python fall outside of the “significant” range of values, so 6 of the original 11 claims were discarded at this stage. Controlling the false discovery rate (FDR) increased the p-values slightly but did not by itself invalidate additional claims; subsequently, however, the p-value for one additional language, Ruby, lost its significance, and even Scala fell out of the statistically significant set. (A smaller p-value indicates stronger evidence against the null hypothesis; values at or below 0.05 are conventionally treated as grounds to reject it.)

In the table below, grey cells indicate disagreement with the conclusion of the original work; they include C, Objective-C, JavaScript, TypeScript, PHP, and Python. The reanalysis thus failed to validate most of the claims: the multiple steps of data cleaning and improved statistical modeling invalidated the significance of 7 out of 11 languages.

Image source: Impact of Programming Languages on Code Quality
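To illustrate what the FDR control step does, here is a small, self-contained Python sketch using the Benjamini-Hochberg procedure from statsmodels. The language names and p-values are invented purely for illustration and are not the numbers from the paper; in this toy example two coefficients that look significant on raw p-values drop out after adjustment, whereas in the actual reanalysis the FDR step alone did not invalidate additional claims.

```python
# Illustrative only: invented per-language p-values, adjusted with the
# Benjamini-Hochberg procedure to control the false discovery rate (FDR).
# These numbers are NOT taken from the paper.
from statsmodels.stats.multitest import multipletests

languages = ["C++", "Clojure", "Haskell", "Objective-C", "Ruby", "Scala"]
raw_p = [0.001, 0.008, 0.039, 0.045, 0.210, 0.340]

# method="fdr_bh" applies Benjamini-Hochberg; reject is True where the
# adjusted p-value stays at or below alpha.
reject, adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for lang, p, p_adj, significant in zip(languages, raw_p, adjusted_p, reject):
    print(f"{lang:12s} raw p = {p:.3f}  adjusted p = {p_adj:.3f}  significant: {significant}")
```

Testing many language coefficients at once is exactly the multiple-comparison setting in which such a correction matters: without it, the chance of at least one spurious "significant" language grows with every additional test.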
The researchers conclude that the work by Ray et al. aimed to provide evidence for one of the fundamental assumptions in programming language research, namely that language design matters, but they have identified numerous problems in the FSE study that invalidate its key result. The paper reads, “Our intent is not to blame, performing statistical analysis of programming languages based on large-scale code repositories is hard. We spent over 6 months simply to recreate and validate each step of the original paper.”

The researchers' contribution provides a thorough analysis and discussion of the pitfalls associated with statistical analysis of large code bases. According to them, statistical analysis combined with large data corpora is a powerful tool that might even answer the hardest research questions, but the possibility of errors is enormous. They further state that “It is only through careful re-validation of such studies that the broader community may gain trust in these results and get better insight into the problems and solutions associated with such studies.”

Check out the paper On the Impact of Programming Languages on Code Quality for more in-depth analysis.

Samsung AI lab researchers present a system that can animate heads with one-shot learning
Researchers from China introduced two novel modules to address challenges in multi-person pose estimation
AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural sounding speech

Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more

Vincy Davis
03 Jun 2019
4 min read
Last week, the Apache Storm PMC announced the release of Storm 2.0.0. The major highlight of this release is that Storm has been re-architected in pure Java; previously a large part of Storm's core functionality was implemented in Clojure. This release also includes significant improvements in terms of performance, a new Streams API, windowing enhancements, and Kafka integration changes.

New Architecture Implemented in Java

With this release, Storm has been re-architected, with its core functionality implemented in pure Java. The new implementation has improved performance significantly and has made internal APIs more maintainable and extensible. The previous implementation language, Clojure, often posed a barrier to entry for new contributors. Storm's codebase will now be more accessible to developers who don't want to learn Clojure in order to contribute.

New High-Performance Core

Storm 2.0.0 has a new core featuring a leaner threading model, a blazing fast messaging subsystem, and a lightweight back pressure model. It has been designed to push the boundaries on throughput, latency, and energy consumption while maintaining backward compatibility. This also makes Storm 2.0 the first streaming engine to break the 1-microsecond latency barrier.

New Streams API

This version has a new typed API, which expresses streaming computations more easily using functional-style operations. It builds on top of Storm's core spout and bolt APIs and automatically fuses multiple operations together, which helps in optimizing the pipeline.

Windowing Enhancements

Storm 2.0.0's windowing API can now save/restore the window state to the configured state backend, which enables larger continuous windows to be supported. The window boundaries can now also be accessed via the APIs.

Kafka Integration Changes

Removal of Storm-Kafka

Due to Kafka's deprecation of the underlying client library, the storm-kafka module has been removed. Users will have to move to the storm-kafka-client module, which uses Kafka's ‘kafka-clients’ library for integration.

Move to Using the KafkaConsumer.assign API

Kafka's own subscription mechanism, which was used in Storm 1.x, has been removed entirely in 2.0.0. The storm-kafka-client subscription interface has also been removed, due to the limited control it offered over the subscription behavior. It has been replaced with the ‘TopicFilter’ and ‘ManualPartitioner’ interfaces. Users with custom subscriptions should head over to the storm-kafka-client documentation, which describes how to customize assignment.

Other Kafka Highlights

The KafkaBolt now allows you to specify a callback that will be called when a batch is written to Kafka.
The FirstPollOffsetStrategy behavior has been made consistent between the non-Trident and Trident spouts.
Storm-kafka-client now has a transactional non-opaque Trident spout.

Users have also been notified that the 1.0.x version line will no longer be maintained, and they are strongly encouraged to upgrade to a more recent release. Java 7 support has been dropped; Storm 2.0.0 requires Java 8.

There has been a mixed reaction from users over the changes in Storm 2.0.0. A few users are not happy with Apache dropping the Clojure language. As a user on Hacker News comments, “My team has been using Clojure for close to a decade, and we found the opposite to be the case. While the pool of applicants is smaller, so is the noise ratio. Clojure being niche means that you get people who are willing to look outside the mainstream, and are typically genuinely interested in programming. In case of Storm, Apache commons is run by Java devs who have zero interest in learning Clojure. So, it's not surprising they would rewrite Storm in their preferred language.”

Some users think that this move of dropping the Clojure language shows that developers nowadays are unwilling to learn new things. As a user on Hacker News comments, “There is a false cost assigned to learning a language. Developers are too unwilling to even try stepping beyond the boundaries of the first thing they learned. The cost is always lower than they may think, and the benefits far surpassing what they may think. We've got to work at showing developers those benefits early; it's as important to creating software effectively as any other engineer's basic toolkit.”

Others are quite happy with Storm being re-architected in Java. A user on Reddit said, “To me, this makes total sense as the project moved to Apache. Obviously, much more people will be able to consider contributing when it's in Java. Apache goal is sustainability and long-term viability, and Java would work better for that.”

To download Storm 2.0.0, visit the Storm downloads page.

Walkthrough of Storm UI
Storing Apache Storm data in Elasticsearch
Getting started with Storm Components for Real Time Analytics


Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language

Sugandha Lahoti
28 May 2019
6 min read
Earlier this month, the Hacker News community got into a heated debate on whether “Go is Google's language, and not the community's”. The thread was started by Chris Siebenmann, who works at the Department of Computer Science, University of Toronto. His blog post reads, “Go has community contributions but it is not a community project. It is Google's project.” In response to his statements, last Thursday Ian Lance Taylor, a Googler and member of the Golang team, added his own views on a Google Groups mailing list; they don't necessarily contradict Chris's blog post, but they add some nuance.

Ian begins with a disclaimer: “I'm speaking exclusively for myself here, not for Google nor for the Go team.” He then reminds us that Go is an open source language, considering all the source code, including for all the infrastructure support, is freely available and may be reused and changed by anyone. Go gives all developers the freedom to fork and take an existing project in a new direction. He further explains that there are 59 Googlers and 51 non-Googlers on the committers list, the set of people who can commit changes to the project. He says, “so while Google is the majority, it's not an overwhelming one.”

Golang has a small core committee which makes decisions

Responding to Chris's opinion that Golang is run by only a small set of people, which prevents it from becoming the community's language, he says, “All successful languages have a small set of people who make the final decisions. Successful languages pay attention to what people want, but to change the language according to what most people want is, I believe, a recipe for chaos and incoherence. I believe that every successful language must have a coherent vision that is shared by a relatively small group of people.” He then adds, “Since Go is a successful language, and hopes to remain successful, it too must be open to community input but must have a small number of people who make final decisions about how the language will change over time.”

This makes sense: the core team's full-time job is to take care of the language, rather than to make errant decisions in response to community backlash. Google will not make or block a change in a way that kills an entire project, but this does not mean they should sit idle, ignoring the community's response. Ideally, the more a project genuinely belongs to its community, the more it will reflect what the community wants and needs.

Ian defends Google, as a company, being a member of the Golang team, saying it is doing more work with Go at a higher level, supporting efforts like the Go Cloud Development Kit and support for Go in Google Cloud projects like Kubernetes. He also assures that executives, and upper management in general, have never made any attempt to affect how the Go language, tools, and standard library are developed. “Google, apart from the core Go team, does not make decisions about the language.”

What if Golang is killed?

He is unsure of what will happen if someone on the core Go team decides to leave Google but wants to continue working on Go. He says, “many people who want to work on Go full time wind up being hired by Google, so it would not be particularly surprising if the core Go team continues to be primarily or exclusively Google employees.” This reaffirms our original argument about Google's propensity to kill its own products. While Google's history shows that many of its dead products were actually an important step towards something better and more successful, why and how much of that logic would be directly relevant to an open source project is something worth thinking about.

He further adds, “It's also possible that someday it will become appropriate to create some sort of separate Go Foundation to manage the language.” But he did not specify what such a foundation would look like, who its members would be, or what its governance model would be. Will Google leave it to the community to figure out the governance model suddenly, by pulling the original authors off onto some other exciting new project? Or would they let the authors work on Golang only in their spare time at home or at the weekends?

Another common argument concerns what Google has invested to keep Go thriving, and whether the so-called Go Foundation would be able to sustain a healthy development cycle with a lower monetary investment (although GitHub Sponsors can, maybe, change that). A comment on Hacker News reads, “Like it or not, Google is probably paying around $10 million a year to keep senior full-time developers around that want to work on the language. That could be used as a benchmark to calculate how much of an investment is required to have a healthy development cycle. If a community-maintained fork is created, it would need time and monetary investment similar to what Google is doing just to maintain and develop non-controversial features. Question is: Is this assessment sensible and if so, is the community able or willing to make this kind of investment?”

In general, though, most developers agreed with Ian. Here are a few responses from the same mailing list:

“I just want to thank Ian for taking the time to write this. I've already got the idea that it worked that way, but my own deduction process, but it's good to have a confirmation from inside.”

“Thank you for writing your reply Ian. Since it's a rather long post I don't want to go through it point by point, but suffice it to say that I agree with most of what you've written.”

Read Ian's post on Google Forums.

Is Golang truly community driven and does it really matter?
Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work.
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch